00:00:00.000 Started by upstream project "autotest-nightly" build number 4275 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3638 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.083 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.083 The recommended git tool is: git 00:00:00.083 using credential 00000000-0000-0000-0000-000000000002 00:00:00.085 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.118 Fetching changes from the remote Git repository 00:00:00.123 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.168 Using shallow fetch with depth 1 00:00:00.168 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.168 > git --version # timeout=10 00:00:00.210 > git --version # 'git version 2.39.2' 00:00:00.210 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.248 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.248 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:07.835 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:07.846 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:07.857 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD) 00:00:07.857 > git config core.sparsecheckout # timeout=10 00:00:07.867 > git read-tree -mu HEAD # timeout=10 00:00:07.882 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5 00:00:07.900 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd" 00:00:07.900 > git rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10 00:00:08.001 [Pipeline] Start of Pipeline 00:00:08.017 [Pipeline] library 00:00:08.019 Loading library shm_lib@master 00:00:09.880 Library shm_lib@master is cached. Copying from home. 00:00:09.961 [Pipeline] node 00:00:10.132 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:10.135 [Pipeline] { 00:00:10.152 [Pipeline] catchError 00:00:10.153 [Pipeline] { 00:00:10.169 [Pipeline] wrap 00:00:10.183 [Pipeline] { 00:00:10.196 [Pipeline] stage 00:00:10.198 [Pipeline] { (Prologue) 00:00:10.498 [Pipeline] sh 00:00:10.776 + logger -p user.info -t JENKINS-CI 00:00:10.794 [Pipeline] echo 00:00:10.796 Node: GP11 00:00:10.804 [Pipeline] sh 00:00:11.104 [Pipeline] setCustomBuildProperty 00:00:11.117 [Pipeline] echo 00:00:11.119 Cleanup processes 00:00:11.125 [Pipeline] sh 00:00:11.411 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:11.411 2740835 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:11.426 [Pipeline] sh 00:00:11.714 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:11.714 ++ grep -v 'sudo pgrep' 00:00:11.714 ++ awk '{print $1}' 00:00:11.714 + sudo kill -9 00:00:11.714 + true 00:00:11.728 [Pipeline] cleanWs 00:00:11.738 [WS-CLEANUP] Deleting project workspace... 00:00:11.738 [WS-CLEANUP] Deferred wipeout is used... 
00:00:11.745 [WS-CLEANUP] done 00:00:11.750 [Pipeline] setCustomBuildProperty 00:00:11.766 [Pipeline] sh 00:00:12.047 + sudo git config --global --replace-all safe.directory '*' 00:00:12.108 [Pipeline] httpRequest 00:00:12.931 [Pipeline] echo 00:00:12.933 Sorcerer 10.211.164.20 is alive 00:00:12.942 [Pipeline] retry 00:00:12.944 [Pipeline] { 00:00:12.957 [Pipeline] httpRequest 00:00:12.961 HttpMethod: GET 00:00:12.962 URL: http://10.211.164.20/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:12.962 Sending request to url: http://10.211.164.20/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:12.965 Response Code: HTTP/1.1 200 OK 00:00:12.965 Success: Status code 200 is in the accepted range: 200,404 00:00:12.966 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:14.130 [Pipeline] } 00:00:14.147 [Pipeline] // retry 00:00:14.154 [Pipeline] sh 00:00:14.442 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:14.454 [Pipeline] httpRequest 00:00:14.815 [Pipeline] echo 00:00:14.817 Sorcerer 10.211.164.20 is alive 00:00:14.824 [Pipeline] retry 00:00:14.826 [Pipeline] { 00:00:14.839 [Pipeline] httpRequest 00:00:14.843 HttpMethod: GET 00:00:14.844 URL: http://10.211.164.20/packages/spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz 00:00:14.844 Sending request to url: http://10.211.164.20/packages/spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz 00:00:14.847 Response Code: HTTP/1.1 200 OK 00:00:14.848 Success: Status code 200 is in the accepted range: 200,404 00:00:14.848 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz 00:00:37.032 [Pipeline] } 00:00:37.049 [Pipeline] // retry 00:00:37.056 [Pipeline] sh 00:00:37.340 + tar --no-same-owner -xf spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz 00:00:40.635 [Pipeline] sh 00:00:40.919 + git -C spdk log --oneline -n5 00:00:40.919 83e8405e4 nvmf/fc: Qpair disconnect callback: Serialize FC delete connection & close qpair process 00:00:40.919 0eab4c6fb nvmf/fc: Validate the ctrlr pointer inside nvmf_fc_req_bdev_abort() 00:00:40.919 4bcab9fb9 correct kick for CQ full case 00:00:40.919 8531656d3 test/nvmf: Interrupt test for local pcie nvme device 00:00:40.919 318515b44 nvme/perf: interrupt mode support for pcie controller 00:00:40.930 [Pipeline] } 00:00:40.944 [Pipeline] // stage 00:00:40.952 [Pipeline] stage 00:00:40.955 [Pipeline] { (Prepare) 00:00:40.970 [Pipeline] writeFile 00:00:40.985 [Pipeline] sh 00:00:41.266 + logger -p user.info -t JENKINS-CI 00:00:41.279 [Pipeline] sh 00:00:41.561 + logger -p user.info -t JENKINS-CI 00:00:41.573 [Pipeline] sh 00:00:41.853 + cat autorun-spdk.conf 00:00:41.853 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:41.854 SPDK_TEST_NVMF=1 00:00:41.854 SPDK_TEST_NVME_CLI=1 00:00:41.854 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:41.854 SPDK_TEST_NVMF_NICS=e810 00:00:41.854 SPDK_RUN_ASAN=1 00:00:41.854 SPDK_RUN_UBSAN=1 00:00:41.854 NET_TYPE=phy 00:00:41.861 RUN_NIGHTLY=1 00:00:41.865 [Pipeline] readFile 00:00:41.881 [Pipeline] withEnv 00:00:41.882 [Pipeline] { 00:00:41.889 [Pipeline] sh 00:00:42.167 + set -ex 00:00:42.167 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:00:42.167 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:42.167 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:42.167 ++ SPDK_TEST_NVMF=1 00:00:42.167 ++ SPDK_TEST_NVME_CLI=1 00:00:42.167 ++ 
SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:42.167 ++ SPDK_TEST_NVMF_NICS=e810 00:00:42.167 ++ SPDK_RUN_ASAN=1 00:00:42.167 ++ SPDK_RUN_UBSAN=1 00:00:42.167 ++ NET_TYPE=phy 00:00:42.167 ++ RUN_NIGHTLY=1 00:00:42.167 + case $SPDK_TEST_NVMF_NICS in 00:00:42.167 + DRIVERS=ice 00:00:42.167 + [[ tcp == \r\d\m\a ]] 00:00:42.167 + [[ -n ice ]] 00:00:42.167 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:00:42.167 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:00:42.167 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:00:42.167 rmmod: ERROR: Module irdma is not currently loaded 00:00:42.167 rmmod: ERROR: Module i40iw is not currently loaded 00:00:42.167 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:00:42.167 + true 00:00:42.167 + for D in $DRIVERS 00:00:42.167 + sudo modprobe ice 00:00:42.167 + exit 0 00:00:42.177 [Pipeline] } 00:00:42.190 [Pipeline] // withEnv 00:00:42.195 [Pipeline] } 00:00:42.206 [Pipeline] // stage 00:00:42.215 [Pipeline] catchError 00:00:42.216 [Pipeline] { 00:00:42.227 [Pipeline] timeout 00:00:42.227 Timeout set to expire in 1 hr 0 min 00:00:42.228 [Pipeline] { 00:00:42.240 [Pipeline] stage 00:00:42.242 [Pipeline] { (Tests) 00:00:42.254 [Pipeline] sh 00:00:42.536 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:42.536 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:42.536 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:42.536 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:00:42.536 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:42.536 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:42.536 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:00:42.536 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:42.536 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:42.536 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:42.536 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:00:42.536 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:42.536 + source /etc/os-release 00:00:42.536 ++ NAME='Fedora Linux' 00:00:42.536 ++ VERSION='39 (Cloud Edition)' 00:00:42.536 ++ ID=fedora 00:00:42.536 ++ VERSION_ID=39 00:00:42.536 ++ VERSION_CODENAME= 00:00:42.536 ++ PLATFORM_ID=platform:f39 00:00:42.536 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:00:42.536 ++ ANSI_COLOR='0;38;2;60;110;180' 00:00:42.536 ++ LOGO=fedora-logo-icon 00:00:42.536 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:00:42.536 ++ HOME_URL=https://fedoraproject.org/ 00:00:42.536 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:00:42.536 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:00:42.536 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:00:42.536 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:00:42.536 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:00:42.536 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:00:42.536 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:00:42.536 ++ SUPPORT_END=2024-11-12 00:00:42.536 ++ VARIANT='Cloud Edition' 00:00:42.536 ++ VARIANT_ID=cloud 00:00:42.536 + uname -a 00:00:42.536 Linux spdk-gp-11 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:00:42.536 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:00:43.471 Hugepages 00:00:43.471 node hugesize free / total 00:00:43.471 node0 1048576kB 0 / 0 00:00:43.471 node0 2048kB 0 / 0 00:00:43.471 node1 1048576kB 0 / 0 
00:00:43.471 node1 2048kB 0 / 0 00:00:43.471 00:00:43.471 Type BDF Vendor Device NUMA Driver Device Block devices 00:00:43.471 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:00:43.471 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:00:43.471 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:00:43.471 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:00:43.471 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:00:43.471 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:00:43.471 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:00:43.471 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:00:43.471 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:00:43.471 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:00:43.471 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:00:43.471 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:00:43.471 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:00:43.471 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:00:43.471 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:00:43.471 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:00:43.471 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:00:43.471 + rm -f /tmp/spdk-ld-path 00:00:43.471 + source autorun-spdk.conf 00:00:43.471 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:43.471 ++ SPDK_TEST_NVMF=1 00:00:43.471 ++ SPDK_TEST_NVME_CLI=1 00:00:43.471 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:43.471 ++ SPDK_TEST_NVMF_NICS=e810 00:00:43.471 ++ SPDK_RUN_ASAN=1 00:00:43.471 ++ SPDK_RUN_UBSAN=1 00:00:43.471 ++ NET_TYPE=phy 00:00:43.471 ++ RUN_NIGHTLY=1 00:00:43.471 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:00:43.471 + [[ -n '' ]] 00:00:43.472 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:43.472 + for M in /var/spdk/build-*-manifest.txt 00:00:43.472 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:00:43.472 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:43.472 + for M in /var/spdk/build-*-manifest.txt 00:00:43.472 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:00:43.472 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:43.730 + for M in /var/spdk/build-*-manifest.txt 00:00:43.730 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:00:43.730 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:43.730 ++ uname 00:00:43.730 + [[ Linux == \L\i\n\u\x ]] 00:00:43.730 + sudo dmesg -T 00:00:43.730 + sudo dmesg --clear 00:00:43.730 + dmesg_pid=2741510 00:00:43.730 + [[ Fedora Linux == FreeBSD ]] 00:00:43.730 + sudo dmesg -Tw 00:00:43.730 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:43.730 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:43.730 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:00:43.730 + [[ -x /usr/src/fio-static/fio ]] 00:00:43.730 + export FIO_BIN=/usr/src/fio-static/fio 00:00:43.730 + FIO_BIN=/usr/src/fio-static/fio 00:00:43.730 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:00:43.730 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:00:43.730 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:43.730 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:43.730 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:43.730 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:43.730 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:43.730 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:43.730 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:43.730 09:00:48 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:00:43.730 09:00:48 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:43.730 09:00:48 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:43.730 09:00:48 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:00:43.730 09:00:48 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:00:43.730 09:00:48 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:43.730 09:00:48 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:00:43.730 09:00:48 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_RUN_ASAN=1 00:00:43.730 09:00:48 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:00:43.730 09:00:48 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:00:43.730 09:00:48 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=1 00:00:43.730 09:00:48 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:00:43.730 09:00:48 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:43.730 09:00:48 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:00:43.730 09:00:48 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:00:43.730 09:00:48 -- scripts/common.sh@15 -- $ shopt -s extglob 00:00:43.730 09:00:48 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:43.730 09:00:48 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:43.730 09:00:48 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:43.730 09:00:48 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:43.730 09:00:48 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:43.730 09:00:48 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:43.730 09:00:48 -- paths/export.sh@5 -- $ export PATH 00:00:43.731 09:00:48 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:43.731 09:00:48 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:00:43.731 09:00:48 -- common/autobuild_common.sh@486 -- $ date +%s 00:00:43.731 09:00:48 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1731830448.XXXXXX 00:00:43.731 09:00:48 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1731830448.1v7Qvd 00:00:43.731 09:00:48 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:00:43.731 09:00:48 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:00:43.731 09:00:48 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:00:43.731 09:00:48 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:43.731 09:00:48 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:43.731 09:00:48 -- common/autobuild_common.sh@502 -- $ get_config_params 00:00:43.731 09:00:48 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:00:43.731 09:00:48 -- common/autotest_common.sh@10 -- $ set +x 00:00:43.731 09:00:48 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk' 00:00:43.731 09:00:48 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:00:43.731 09:00:48 -- pm/common@17 -- $ local monitor 00:00:43.731 09:00:48 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:43.731 09:00:48 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:43.731 09:00:48 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:43.731 09:00:48 -- pm/common@21 -- $ date +%s 00:00:43.731 09:00:48 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:43.731 09:00:48 -- pm/common@21 -- $ date +%s 00:00:43.731 09:00:48 -- pm/common@25 -- $ sleep 1 00:00:43.731 09:00:48 -- pm/common@21 -- $ date +%s 00:00:43.731 09:00:48 -- pm/common@21 -- $ date +%s 00:00:43.731 09:00:48 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731830448 00:00:43.731 09:00:48 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731830448 00:00:43.731 09:00:48 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731830448 00:00:43.731 09:00:48 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731830448 00:00:43.731 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731830448_collect-cpu-load.pm.log 00:00:43.731 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731830448_collect-cpu-temp.pm.log 00:00:43.731 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731830448_collect-vmstat.pm.log 00:00:43.731 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731830448_collect-bmc-pm.bmc.pm.log 00:00:44.672 09:00:49 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:00:44.672 09:00:49 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:00:44.672 09:00:49 -- spdk/autobuild.sh@12 -- $ umask 022 00:00:44.672 09:00:49 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:44.672 09:00:49 -- spdk/autobuild.sh@16 -- $ date -u 00:00:44.672 Sun Nov 17 08:00:49 AM UTC 2024 00:00:44.672 09:00:49 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:00:44.672 v25.01-pre-189-g83e8405e4 00:00:44.672 09:00:49 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:00:44.672 09:00:49 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:00:44.672 09:00:49 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:00:44.672 09:00:49 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:00:44.672 09:00:49 -- common/autotest_common.sh@10 -- $ set +x 00:00:44.672 ************************************ 00:00:44.672 START TEST asan 00:00:44.672 ************************************ 00:00:44.672 09:00:49 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:00:44.672 using asan 00:00:44.672 00:00:44.672 real 0m0.000s 00:00:44.672 user 0m0.000s 00:00:44.672 sys 0m0.000s 00:00:44.672 09:00:49 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:00:44.672 09:00:49 asan -- common/autotest_common.sh@10 -- $ set +x 00:00:44.672 ************************************ 00:00:44.672 END TEST asan 00:00:44.672 ************************************ 00:00:44.931 09:00:49 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:00:44.931 09:00:49 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:00:44.931 09:00:49 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:00:44.931 09:00:49 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:00:44.931 09:00:49 -- common/autotest_common.sh@10 -- $ set +x 00:00:44.931 ************************************ 00:00:44.931 START TEST ubsan 00:00:44.931 ************************************ 00:00:44.931 09:00:49 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:00:44.931 using ubsan 00:00:44.931 
00:00:44.931 real 0m0.000s 00:00:44.931 user 0m0.000s 00:00:44.931 sys 0m0.000s 00:00:44.931 09:00:49 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:00:44.931 09:00:49 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:00:44.931 ************************************ 00:00:44.931 END TEST ubsan 00:00:44.931 ************************************ 00:00:44.931 09:00:49 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:00:44.931 09:00:49 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:00:44.931 09:00:49 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:00:44.931 09:00:49 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:00:44.931 09:00:49 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:00:44.931 09:00:49 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:00:44.931 09:00:49 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:00:44.931 09:00:49 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:00:44.931 09:00:49 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-shared 00:00:44.931 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:00:44.931 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:00:45.191 Using 'verbs' RDMA provider 00:00:56.111 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:06.093 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:06.093 Creating mk/config.mk...done. 00:01:06.093 Creating mk/cc.flags.mk...done. 00:01:06.093 Type 'make' to build. 00:01:06.093 09:01:10 -- spdk/autobuild.sh@70 -- $ run_test make make -j48 00:01:06.093 09:01:10 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:06.093 09:01:10 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:06.093 09:01:10 -- common/autotest_common.sh@10 -- $ set +x 00:01:06.093 ************************************ 00:01:06.093 START TEST make 00:01:06.093 ************************************ 00:01:06.093 09:01:10 make -- common/autotest_common.sh@1129 -- $ make -j48 00:01:06.093 make[1]: Nothing to be done for 'all'. 
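
Editor's note: before the DPDK Meson output that follows, here is a condensed, hand-written sketch of the build steps the trace has recorded up to this point (driver preparation, writing the test configuration, SPDK configure, parallel make). It is not the actual spdk/autorun.sh or autobuild.sh logic; the paths, configure flags, module names, and config contents are copied from this log, while the script structure and the `|| true` error handling are assumptions for illustration only.

    # Sketch of the sequence visible in the trace above (assumed structure, values from this log)
    set -euxo pipefail

    WORKSPACE=/var/jenkins/workspace/nvmf-tcp-phy-autotest

    # Test configuration consumed by the autorun scripts (contents as printed earlier in the log)
    cat > "$WORKSPACE/autorun-spdk.conf" <<'EOF'
    SPDK_RUN_FUNCTIONAL_TEST=1
    SPDK_TEST_NVMF=1
    SPDK_TEST_NVME_CLI=1
    SPDK_TEST_NVMF_TRANSPORT=tcp
    SPDK_TEST_NVMF_NICS=e810
    SPDK_RUN_ASAN=1
    SPDK_RUN_UBSAN=1
    NET_TYPE=phy
    RUN_NIGHTLY=1
    EOF

    # Prepare stage: unload RDMA NIC modules (failures tolerated, as in the trace)
    # and load the driver selected by SPDK_TEST_NVMF_NICS=e810, i.e. ice
    sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 || true
    sudo modprobe ice

    cd "$WORKSPACE/spdk"

    # Configure with the options printed earlier in the trace
    ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
                --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
                --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-shared

    # Parallel build; DPDK is configured via Meson/ninja as a sub-build (output below)
    make -j48
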
00:01:16.097 The Meson build system 00:01:16.097 Version: 1.5.0 00:01:16.097 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:16.097 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:16.097 Build type: native build 00:01:16.097 Program cat found: YES (/usr/bin/cat) 00:01:16.097 Project name: DPDK 00:01:16.097 Project version: 24.03.0 00:01:16.097 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:16.097 C linker for the host machine: cc ld.bfd 2.40-14 00:01:16.097 Host machine cpu family: x86_64 00:01:16.097 Host machine cpu: x86_64 00:01:16.097 Message: ## Building in Developer Mode ## 00:01:16.097 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:16.097 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:16.097 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:16.097 Program python3 found: YES (/usr/bin/python3) 00:01:16.097 Program cat found: YES (/usr/bin/cat) 00:01:16.097 Compiler for C supports arguments -march=native: YES 00:01:16.097 Checking for size of "void *" : 8 00:01:16.097 Checking for size of "void *" : 8 (cached) 00:01:16.097 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:01:16.097 Library m found: YES 00:01:16.097 Library numa found: YES 00:01:16.097 Has header "numaif.h" : YES 00:01:16.097 Library fdt found: NO 00:01:16.097 Library execinfo found: NO 00:01:16.097 Has header "execinfo.h" : YES 00:01:16.097 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:16.097 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:16.097 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:16.097 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:16.097 Run-time dependency openssl found: YES 3.1.1 00:01:16.097 Run-time dependency libpcap found: YES 1.10.4 00:01:16.097 Has header "pcap.h" with dependency libpcap: YES 00:01:16.097 Compiler for C supports arguments -Wcast-qual: YES 00:01:16.097 Compiler for C supports arguments -Wdeprecated: YES 00:01:16.097 Compiler for C supports arguments -Wformat: YES 00:01:16.097 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:16.097 Compiler for C supports arguments -Wformat-security: NO 00:01:16.097 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:16.097 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:16.097 Compiler for C supports arguments -Wnested-externs: YES 00:01:16.097 Compiler for C supports arguments -Wold-style-definition: YES 00:01:16.097 Compiler for C supports arguments -Wpointer-arith: YES 00:01:16.097 Compiler for C supports arguments -Wsign-compare: YES 00:01:16.097 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:16.097 Compiler for C supports arguments -Wundef: YES 00:01:16.097 Compiler for C supports arguments -Wwrite-strings: YES 00:01:16.097 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:16.097 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:16.097 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:16.097 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:16.097 Program objdump found: YES (/usr/bin/objdump) 00:01:16.097 Compiler for C supports arguments -mavx512f: YES 00:01:16.097 Checking if "AVX512 checking" compiles: YES 
00:01:16.097 Fetching value of define "__SSE4_2__" : 1 00:01:16.097 Fetching value of define "__AES__" : 1 00:01:16.097 Fetching value of define "__AVX__" : 1 00:01:16.097 Fetching value of define "__AVX2__" : (undefined) 00:01:16.097 Fetching value of define "__AVX512BW__" : (undefined) 00:01:16.097 Fetching value of define "__AVX512CD__" : (undefined) 00:01:16.097 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:16.097 Fetching value of define "__AVX512F__" : (undefined) 00:01:16.097 Fetching value of define "__AVX512VL__" : (undefined) 00:01:16.097 Fetching value of define "__PCLMUL__" : 1 00:01:16.097 Fetching value of define "__RDRND__" : 1 00:01:16.097 Fetching value of define "__RDSEED__" : (undefined) 00:01:16.097 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:16.097 Fetching value of define "__znver1__" : (undefined) 00:01:16.097 Fetching value of define "__znver2__" : (undefined) 00:01:16.097 Fetching value of define "__znver3__" : (undefined) 00:01:16.097 Fetching value of define "__znver4__" : (undefined) 00:01:16.097 Library asan found: YES 00:01:16.097 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:16.097 Message: lib/log: Defining dependency "log" 00:01:16.097 Message: lib/kvargs: Defining dependency "kvargs" 00:01:16.097 Message: lib/telemetry: Defining dependency "telemetry" 00:01:16.097 Library rt found: YES 00:01:16.097 Checking for function "getentropy" : NO 00:01:16.097 Message: lib/eal: Defining dependency "eal" 00:01:16.097 Message: lib/ring: Defining dependency "ring" 00:01:16.097 Message: lib/rcu: Defining dependency "rcu" 00:01:16.097 Message: lib/mempool: Defining dependency "mempool" 00:01:16.097 Message: lib/mbuf: Defining dependency "mbuf" 00:01:16.097 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:16.097 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:16.097 Compiler for C supports arguments -mpclmul: YES 00:01:16.097 Compiler for C supports arguments -maes: YES 00:01:16.098 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:16.098 Compiler for C supports arguments -mavx512bw: YES 00:01:16.098 Compiler for C supports arguments -mavx512dq: YES 00:01:16.098 Compiler for C supports arguments -mavx512vl: YES 00:01:16.098 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:16.098 Compiler for C supports arguments -mavx2: YES 00:01:16.098 Compiler for C supports arguments -mavx: YES 00:01:16.098 Message: lib/net: Defining dependency "net" 00:01:16.098 Message: lib/meter: Defining dependency "meter" 00:01:16.098 Message: lib/ethdev: Defining dependency "ethdev" 00:01:16.098 Message: lib/pci: Defining dependency "pci" 00:01:16.098 Message: lib/cmdline: Defining dependency "cmdline" 00:01:16.098 Message: lib/hash: Defining dependency "hash" 00:01:16.098 Message: lib/timer: Defining dependency "timer" 00:01:16.098 Message: lib/compressdev: Defining dependency "compressdev" 00:01:16.098 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:16.098 Message: lib/dmadev: Defining dependency "dmadev" 00:01:16.098 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:16.098 Message: lib/power: Defining dependency "power" 00:01:16.098 Message: lib/reorder: Defining dependency "reorder" 00:01:16.098 Message: lib/security: Defining dependency "security" 00:01:16.098 Has header "linux/userfaultfd.h" : YES 00:01:16.098 Has header "linux/vduse.h" : YES 00:01:16.098 Message: lib/vhost: Defining dependency "vhost" 00:01:16.098 Compiler for C supports arguments 
-Wno-format-truncation: YES (cached) 00:01:16.098 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:16.098 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:16.098 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:16.098 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:16.098 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:16.098 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:16.098 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:16.098 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:16.098 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:16.098 Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:16.098 Configuring doxy-api-html.conf using configuration 00:01:16.098 Configuring doxy-api-man.conf using configuration 00:01:16.098 Program mandb found: YES (/usr/bin/mandb) 00:01:16.098 Program sphinx-build found: NO 00:01:16.098 Configuring rte_build_config.h using configuration 00:01:16.098 Message: 00:01:16.098 ================= 00:01:16.098 Applications Enabled 00:01:16.098 ================= 00:01:16.098 00:01:16.098 apps: 00:01:16.098 00:01:16.098 00:01:16.098 Message: 00:01:16.098 ================= 00:01:16.098 Libraries Enabled 00:01:16.098 ================= 00:01:16.098 00:01:16.098 libs: 00:01:16.098 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:16.098 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:16.098 cryptodev, dmadev, power, reorder, security, vhost, 00:01:16.098 00:01:16.098 Message: 00:01:16.098 =============== 00:01:16.098 Drivers Enabled 00:01:16.098 =============== 00:01:16.098 00:01:16.098 common: 00:01:16.098 00:01:16.098 bus: 00:01:16.098 pci, vdev, 00:01:16.098 mempool: 00:01:16.098 ring, 00:01:16.098 dma: 00:01:16.098 00:01:16.098 net: 00:01:16.098 00:01:16.098 crypto: 00:01:16.098 00:01:16.098 compress: 00:01:16.098 00:01:16.098 vdpa: 00:01:16.098 00:01:16.098 00:01:16.098 Message: 00:01:16.098 ================= 00:01:16.098 Content Skipped 00:01:16.098 ================= 00:01:16.098 00:01:16.098 apps: 00:01:16.098 dumpcap: explicitly disabled via build config 00:01:16.098 graph: explicitly disabled via build config 00:01:16.098 pdump: explicitly disabled via build config 00:01:16.098 proc-info: explicitly disabled via build config 00:01:16.098 test-acl: explicitly disabled via build config 00:01:16.098 test-bbdev: explicitly disabled via build config 00:01:16.098 test-cmdline: explicitly disabled via build config 00:01:16.098 test-compress-perf: explicitly disabled via build config 00:01:16.098 test-crypto-perf: explicitly disabled via build config 00:01:16.098 test-dma-perf: explicitly disabled via build config 00:01:16.098 test-eventdev: explicitly disabled via build config 00:01:16.098 test-fib: explicitly disabled via build config 00:01:16.098 test-flow-perf: explicitly disabled via build config 00:01:16.098 test-gpudev: explicitly disabled via build config 00:01:16.098 test-mldev: explicitly disabled via build config 00:01:16.098 test-pipeline: explicitly disabled via build config 00:01:16.098 test-pmd: explicitly disabled via build config 00:01:16.098 test-regex: explicitly disabled via build config 00:01:16.098 test-sad: explicitly disabled via build config 00:01:16.098 test-security-perf: explicitly disabled via build config 00:01:16.098 00:01:16.098 libs: 00:01:16.098 argparse: explicitly 
disabled via build config 00:01:16.098 metrics: explicitly disabled via build config 00:01:16.098 acl: explicitly disabled via build config 00:01:16.098 bbdev: explicitly disabled via build config 00:01:16.098 bitratestats: explicitly disabled via build config 00:01:16.098 bpf: explicitly disabled via build config 00:01:16.098 cfgfile: explicitly disabled via build config 00:01:16.098 distributor: explicitly disabled via build config 00:01:16.098 efd: explicitly disabled via build config 00:01:16.098 eventdev: explicitly disabled via build config 00:01:16.098 dispatcher: explicitly disabled via build config 00:01:16.098 gpudev: explicitly disabled via build config 00:01:16.098 gro: explicitly disabled via build config 00:01:16.098 gso: explicitly disabled via build config 00:01:16.098 ip_frag: explicitly disabled via build config 00:01:16.098 jobstats: explicitly disabled via build config 00:01:16.098 latencystats: explicitly disabled via build config 00:01:16.098 lpm: explicitly disabled via build config 00:01:16.098 member: explicitly disabled via build config 00:01:16.098 pcapng: explicitly disabled via build config 00:01:16.098 rawdev: explicitly disabled via build config 00:01:16.098 regexdev: explicitly disabled via build config 00:01:16.098 mldev: explicitly disabled via build config 00:01:16.098 rib: explicitly disabled via build config 00:01:16.098 sched: explicitly disabled via build config 00:01:16.098 stack: explicitly disabled via build config 00:01:16.098 ipsec: explicitly disabled via build config 00:01:16.098 pdcp: explicitly disabled via build config 00:01:16.098 fib: explicitly disabled via build config 00:01:16.098 port: explicitly disabled via build config 00:01:16.098 pdump: explicitly disabled via build config 00:01:16.098 table: explicitly disabled via build config 00:01:16.098 pipeline: explicitly disabled via build config 00:01:16.098 graph: explicitly disabled via build config 00:01:16.098 node: explicitly disabled via build config 00:01:16.098 00:01:16.098 drivers: 00:01:16.098 common/cpt: not in enabled drivers build config 00:01:16.098 common/dpaax: not in enabled drivers build config 00:01:16.098 common/iavf: not in enabled drivers build config 00:01:16.098 common/idpf: not in enabled drivers build config 00:01:16.098 common/ionic: not in enabled drivers build config 00:01:16.098 common/mvep: not in enabled drivers build config 00:01:16.098 common/octeontx: not in enabled drivers build config 00:01:16.098 bus/auxiliary: not in enabled drivers build config 00:01:16.098 bus/cdx: not in enabled drivers build config 00:01:16.098 bus/dpaa: not in enabled drivers build config 00:01:16.098 bus/fslmc: not in enabled drivers build config 00:01:16.098 bus/ifpga: not in enabled drivers build config 00:01:16.098 bus/platform: not in enabled drivers build config 00:01:16.098 bus/uacce: not in enabled drivers build config 00:01:16.098 bus/vmbus: not in enabled drivers build config 00:01:16.098 common/cnxk: not in enabled drivers build config 00:01:16.098 common/mlx5: not in enabled drivers build config 00:01:16.098 common/nfp: not in enabled drivers build config 00:01:16.098 common/nitrox: not in enabled drivers build config 00:01:16.098 common/qat: not in enabled drivers build config 00:01:16.098 common/sfc_efx: not in enabled drivers build config 00:01:16.098 mempool/bucket: not in enabled drivers build config 00:01:16.098 mempool/cnxk: not in enabled drivers build config 00:01:16.098 mempool/dpaa: not in enabled drivers build config 00:01:16.098 mempool/dpaa2: not in 
enabled drivers build config 00:01:16.098 mempool/octeontx: not in enabled drivers build config 00:01:16.098 mempool/stack: not in enabled drivers build config 00:01:16.098 dma/cnxk: not in enabled drivers build config 00:01:16.098 dma/dpaa: not in enabled drivers build config 00:01:16.098 dma/dpaa2: not in enabled drivers build config 00:01:16.098 dma/hisilicon: not in enabled drivers build config 00:01:16.098 dma/idxd: not in enabled drivers build config 00:01:16.098 dma/ioat: not in enabled drivers build config 00:01:16.098 dma/skeleton: not in enabled drivers build config 00:01:16.098 net/af_packet: not in enabled drivers build config 00:01:16.098 net/af_xdp: not in enabled drivers build config 00:01:16.098 net/ark: not in enabled drivers build config 00:01:16.098 net/atlantic: not in enabled drivers build config 00:01:16.098 net/avp: not in enabled drivers build config 00:01:16.098 net/axgbe: not in enabled drivers build config 00:01:16.098 net/bnx2x: not in enabled drivers build config 00:01:16.098 net/bnxt: not in enabled drivers build config 00:01:16.098 net/bonding: not in enabled drivers build config 00:01:16.098 net/cnxk: not in enabled drivers build config 00:01:16.098 net/cpfl: not in enabled drivers build config 00:01:16.098 net/cxgbe: not in enabled drivers build config 00:01:16.098 net/dpaa: not in enabled drivers build config 00:01:16.098 net/dpaa2: not in enabled drivers build config 00:01:16.098 net/e1000: not in enabled drivers build config 00:01:16.098 net/ena: not in enabled drivers build config 00:01:16.098 net/enetc: not in enabled drivers build config 00:01:16.098 net/enetfec: not in enabled drivers build config 00:01:16.098 net/enic: not in enabled drivers build config 00:01:16.099 net/failsafe: not in enabled drivers build config 00:01:16.099 net/fm10k: not in enabled drivers build config 00:01:16.099 net/gve: not in enabled drivers build config 00:01:16.099 net/hinic: not in enabled drivers build config 00:01:16.099 net/hns3: not in enabled drivers build config 00:01:16.099 net/i40e: not in enabled drivers build config 00:01:16.099 net/iavf: not in enabled drivers build config 00:01:16.099 net/ice: not in enabled drivers build config 00:01:16.099 net/idpf: not in enabled drivers build config 00:01:16.099 net/igc: not in enabled drivers build config 00:01:16.099 net/ionic: not in enabled drivers build config 00:01:16.099 net/ipn3ke: not in enabled drivers build config 00:01:16.099 net/ixgbe: not in enabled drivers build config 00:01:16.099 net/mana: not in enabled drivers build config 00:01:16.099 net/memif: not in enabled drivers build config 00:01:16.099 net/mlx4: not in enabled drivers build config 00:01:16.099 net/mlx5: not in enabled drivers build config 00:01:16.099 net/mvneta: not in enabled drivers build config 00:01:16.099 net/mvpp2: not in enabled drivers build config 00:01:16.099 net/netvsc: not in enabled drivers build config 00:01:16.099 net/nfb: not in enabled drivers build config 00:01:16.099 net/nfp: not in enabled drivers build config 00:01:16.099 net/ngbe: not in enabled drivers build config 00:01:16.099 net/null: not in enabled drivers build config 00:01:16.099 net/octeontx: not in enabled drivers build config 00:01:16.099 net/octeon_ep: not in enabled drivers build config 00:01:16.099 net/pcap: not in enabled drivers build config 00:01:16.099 net/pfe: not in enabled drivers build config 00:01:16.099 net/qede: not in enabled drivers build config 00:01:16.099 net/ring: not in enabled drivers build config 00:01:16.099 net/sfc: not in enabled 
drivers build config 00:01:16.099 net/softnic: not in enabled drivers build config 00:01:16.099 net/tap: not in enabled drivers build config 00:01:16.099 net/thunderx: not in enabled drivers build config 00:01:16.099 net/txgbe: not in enabled drivers build config 00:01:16.099 net/vdev_netvsc: not in enabled drivers build config 00:01:16.099 net/vhost: not in enabled drivers build config 00:01:16.099 net/virtio: not in enabled drivers build config 00:01:16.099 net/vmxnet3: not in enabled drivers build config 00:01:16.099 raw/*: missing internal dependency, "rawdev" 00:01:16.099 crypto/armv8: not in enabled drivers build config 00:01:16.099 crypto/bcmfs: not in enabled drivers build config 00:01:16.099 crypto/caam_jr: not in enabled drivers build config 00:01:16.099 crypto/ccp: not in enabled drivers build config 00:01:16.099 crypto/cnxk: not in enabled drivers build config 00:01:16.099 crypto/dpaa_sec: not in enabled drivers build config 00:01:16.099 crypto/dpaa2_sec: not in enabled drivers build config 00:01:16.099 crypto/ipsec_mb: not in enabled drivers build config 00:01:16.099 crypto/mlx5: not in enabled drivers build config 00:01:16.099 crypto/mvsam: not in enabled drivers build config 00:01:16.099 crypto/nitrox: not in enabled drivers build config 00:01:16.099 crypto/null: not in enabled drivers build config 00:01:16.099 crypto/octeontx: not in enabled drivers build config 00:01:16.099 crypto/openssl: not in enabled drivers build config 00:01:16.099 crypto/scheduler: not in enabled drivers build config 00:01:16.099 crypto/uadk: not in enabled drivers build config 00:01:16.099 crypto/virtio: not in enabled drivers build config 00:01:16.099 compress/isal: not in enabled drivers build config 00:01:16.099 compress/mlx5: not in enabled drivers build config 00:01:16.099 compress/nitrox: not in enabled drivers build config 00:01:16.099 compress/octeontx: not in enabled drivers build config 00:01:16.099 compress/zlib: not in enabled drivers build config 00:01:16.099 regex/*: missing internal dependency, "regexdev" 00:01:16.099 ml/*: missing internal dependency, "mldev" 00:01:16.099 vdpa/ifc: not in enabled drivers build config 00:01:16.099 vdpa/mlx5: not in enabled drivers build config 00:01:16.099 vdpa/nfp: not in enabled drivers build config 00:01:16.099 vdpa/sfc: not in enabled drivers build config 00:01:16.099 event/*: missing internal dependency, "eventdev" 00:01:16.099 baseband/*: missing internal dependency, "bbdev" 00:01:16.099 gpu/*: missing internal dependency, "gpudev" 00:01:16.099 00:01:16.099 00:01:16.099 Build targets in project: 85 00:01:16.099 00:01:16.099 DPDK 24.03.0 00:01:16.099 00:01:16.099 User defined options 00:01:16.099 buildtype : debug 00:01:16.099 default_library : shared 00:01:16.099 libdir : lib 00:01:16.099 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:16.099 b_sanitize : address 00:01:16.099 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:16.099 c_link_args : 00:01:16.099 cpu_instruction_set: native 00:01:16.099 disable_apps : test-acl,graph,test-dma-perf,test-gpudev,test-crypto-perf,test,test-security-perf,test-mldev,proc-info,test-pmd,test-pipeline,test-eventdev,test-cmdline,test-fib,pdump,test-flow-perf,test-bbdev,test-regex,test-sad,dumpcap,test-compress-perf 00:01:16.099 disable_libs : 
acl,bitratestats,graph,bbdev,jobstats,ipsec,gso,table,rib,node,mldev,sched,ip_frag,cfgfile,port,pcapng,pdcp,argparse,stack,eventdev,regexdev,distributor,gro,efd,pipeline,bpf,dispatcher,lpm,metrics,latencystats,pdump,gpudev,member,fib,rawdev 00:01:16.099 enable_docs : false 00:01:16.099 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:16.099 enable_kmods : false 00:01:16.099 max_lcores : 128 00:01:16.099 tests : false 00:01:16.099 00:01:16.099 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:16.099 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:16.099 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:16.099 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:16.099 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:16.099 [4/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:16.099 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:16.099 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:16.099 [7/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:16.099 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:16.099 [9/268] Linking static target lib/librte_kvargs.a 00:01:16.099 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:16.099 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:16.099 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:16.099 [13/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:16.099 [14/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:16.099 [15/268] Linking static target lib/librte_log.a 00:01:16.099 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:16.669 [17/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:16.669 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:16.669 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:16.669 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:16.669 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:16.669 [22/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:16.669 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:16.669 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:16.669 [25/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:16.929 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:16.929 [27/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:16.929 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:16.929 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:16.929 [30/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:16.929 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:16.929 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:16.929 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 
00:01:16.929 [34/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:16.929 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:16.929 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:16.929 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:16.929 [38/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:16.929 [39/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:16.929 [40/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:16.929 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:16.929 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:16.929 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:16.929 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:16.929 [45/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:16.929 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:16.929 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:16.929 [48/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:16.929 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:16.929 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:16.929 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:16.929 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:16.929 [53/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:16.929 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:16.929 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:16.929 [56/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:16.929 [57/268] Linking static target lib/librte_telemetry.a 00:01:16.929 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:17.190 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:17.190 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:17.190 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:17.190 [62/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:17.190 [63/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:17.190 [64/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.450 [65/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:17.450 [66/268] Linking target lib/librte_log.so.24.1 00:01:17.711 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:17.711 [68/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:17.711 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:17.711 [70/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:17.711 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:17.711 [72/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:17.711 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:17.711 [74/268] Compiling C 
object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:17.711 [75/268] Linking static target lib/librte_pci.a 00:01:17.711 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:17.711 [77/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:17.975 [78/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:17.975 [79/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:17.975 [80/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:17.975 [81/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:17.975 [82/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:17.975 [83/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:17.975 [84/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:17.975 [85/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:17.975 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:17.975 [87/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:17.975 [88/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:17.975 [89/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:17.975 [90/268] Linking static target lib/librte_ring.a 00:01:17.975 [91/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:17.975 [92/268] Linking target lib/librte_kvargs.so.24.1 00:01:17.975 [93/268] Linking static target lib/librte_meter.a 00:01:17.975 [94/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:17.975 [95/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:17.975 [96/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:17.975 [97/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:17.975 [98/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:17.975 [99/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.975 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:17.975 [101/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:17.975 [102/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:17.975 [103/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:17.975 [104/268] Linking target lib/librte_telemetry.so.24.1 00:01:18.238 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:18.238 [106/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:18.238 [107/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:18.238 [108/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:18.238 [109/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:18.238 [110/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:18.238 [111/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:18.238 [112/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:18.238 [113/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:18.238 [114/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:18.238 [115/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:18.238 [116/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:18.238 [117/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:18.238 [118/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:18.238 [119/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:18.238 [120/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:18.238 [121/268] Linking static target lib/librte_mempool.a 00:01:18.540 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:18.540 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:18.540 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:18.540 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:18.540 [126/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:18.540 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:18.540 [128/268] Linking static target lib/librte_rcu.a 00:01:18.540 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:18.540 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:18.540 [131/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:18.540 [132/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:18.852 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:18.852 [134/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:18.852 [135/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:18.852 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:18.852 [137/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:18.852 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:18.852 [139/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:18.852 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:18.852 [141/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:18.852 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:18.852 [143/268] Linking static target lib/librte_cmdline.a 00:01:18.852 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:19.112 [145/268] Linking static target lib/librte_eal.a 00:01:19.112 [146/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:19.112 [147/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:19.112 [148/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:19.112 [149/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:19.112 [150/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:19.112 [151/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:19.112 [152/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:19.112 [153/268] Linking static target lib/librte_timer.a 00:01:19.112 [154/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:19.112 [155/268] 
Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:19.112 [156/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:19.372 [157/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:19.372 [158/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:19.372 [159/268] Linking static target lib/librte_dmadev.a 00:01:19.372 [160/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:19.631 [161/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:19.631 [162/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:19.631 [163/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:19.631 [164/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:19.631 [165/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:19.631 [166/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:19.631 [167/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:19.890 [168/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:19.890 [169/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:19.890 [170/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:19.890 [171/268] Linking static target lib/librte_net.a 00:01:19.890 [172/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:19.890 [173/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:19.890 [174/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:19.890 [175/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:19.890 [176/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:19.890 [177/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:19.890 [178/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:19.890 [179/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:19.890 [180/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:19.890 [181/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:19.890 [182/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:19.890 [183/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:19.890 [184/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:20.149 [185/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.149 [186/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:20.149 [187/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:20.149 [188/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:20.149 [189/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:20.149 [190/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:20.149 [191/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:20.149 [192/268] Linking static target lib/librte_power.a 00:01:20.149 [193/268] Linking static target drivers/librte_bus_pci.a 00:01:20.149 [194/268] Compiling C object 
lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:20.149 [195/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:20.149 [196/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:20.149 [197/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:20.149 [198/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:20.149 [199/268] Linking static target lib/librte_compressdev.a 00:01:20.149 [200/268] Linking static target drivers/librte_bus_vdev.a 00:01:20.149 [201/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:20.408 [202/268] Linking static target lib/librte_hash.a 00:01:20.408 [203/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:20.408 [204/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:20.408 [205/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:20.408 [206/268] Linking static target drivers/librte_mempool_ring.a 00:01:20.408 [207/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.408 [208/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:20.666 [209/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.666 [210/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.666 [211/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:20.666 [212/268] Linking static target lib/librte_reorder.a 00:01:20.666 [213/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.925 [214/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.925 [215/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.925 [216/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:20.925 [217/268] Linking static target lib/librte_security.a 00:01:21.491 [218/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.750 [219/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:22.685 [220/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:22.685 [221/268] Linking static target lib/librte_mbuf.a 00:01:22.685 [222/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:22.685 [223/268] Linking static target lib/librte_cryptodev.a 00:01:22.944 [224/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:23.509 [225/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:23.509 [226/268] Linking static target lib/librte_ethdev.a 00:01:23.767 [227/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.141 [228/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.141 [229/268] Linking target lib/librte_eal.so.24.1 00:01:25.399 [230/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:25.399 [231/268] Linking target lib/librte_meter.so.24.1 00:01:25.399 [232/268] Linking target drivers/librte_bus_vdev.so.24.1 00:01:25.399 [233/268] Linking target 
lib/librte_dmadev.so.24.1 00:01:25.399 [234/268] Linking target lib/librte_pci.so.24.1 00:01:25.399 [235/268] Linking target lib/librte_ring.so.24.1 00:01:25.399 [236/268] Linking target lib/librte_timer.so.24.1 00:01:25.399 [237/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:25.399 [238/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:25.399 [239/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:25.399 [240/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:25.399 [241/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:25.399 [242/268] Linking target lib/librte_rcu.so.24.1 00:01:25.399 [243/268] Linking target drivers/librte_bus_pci.so.24.1 00:01:25.399 [244/268] Linking target lib/librte_mempool.so.24.1 00:01:25.657 [245/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:25.657 [246/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:25.657 [247/268] Linking target drivers/librte_mempool_ring.so.24.1 00:01:25.657 [248/268] Linking target lib/librte_mbuf.so.24.1 00:01:25.916 [249/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:25.916 [250/268] Linking target lib/librte_reorder.so.24.1 00:01:25.916 [251/268] Linking target lib/librte_compressdev.so.24.1 00:01:25.916 [252/268] Linking target lib/librte_net.so.24.1 00:01:25.916 [253/268] Linking target lib/librte_cryptodev.so.24.1 00:01:25.916 [254/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:25.916 [255/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:26.174 [256/268] Linking target lib/librte_cmdline.so.24.1 00:01:26.174 [257/268] Linking target lib/librte_hash.so.24.1 00:01:26.174 [258/268] Linking target lib/librte_security.so.24.1 00:01:26.174 [259/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:26.741 [260/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:28.115 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.115 [262/268] Linking target lib/librte_ethdev.so.24.1 00:01:28.115 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:28.115 [264/268] Linking target lib/librte_power.so.24.1 00:01:54.648 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:54.648 [266/268] Linking static target lib/librte_vhost.a 00:01:54.648 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.648 [268/268] Linking target lib/librte_vhost.so.24.1 00:01:54.648 INFO: autodetecting backend as ninja 00:01:54.648 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:01:54.648 CC lib/ut/ut.o 00:01:54.648 CC lib/ut_mock/mock.o 00:01:54.648 CC lib/log/log.o 00:01:54.648 CC lib/log/log_flags.o 00:01:54.648 CC lib/log/log_deprecated.o 00:01:54.648 LIB libspdk_ut.a 00:01:54.648 LIB libspdk_ut_mock.a 00:01:54.648 LIB libspdk_log.a 00:01:54.648 SO libspdk_ut.so.2.0 00:01:54.648 SO libspdk_ut_mock.so.6.0 00:01:54.648 SO libspdk_log.so.7.1 00:01:54.648 SYMLINK libspdk_ut.so 00:01:54.648 SYMLINK libspdk_ut_mock.so 00:01:54.648 SYMLINK libspdk_log.so 
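[editor's note] The two INFO lines above record the exact backend command Meson resolved for the DPDK submodule build whose [x/268] progress fills the preceding entries. A minimal way to re-run just that step by hand, taken directly from the log rather than from SPDK documentation (adjust the -C path and -j count for a different checkout or machine):

    # Re-run the DPDK submodule build exactly as the "calculating backend command" line shows.
    /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48

The dpdk/build-tmp directory is normally created by SPDK's ./configure step, which is not part of this log excerpt.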
00:01:54.907 CC lib/dma/dma.o 00:01:54.907 CC lib/ioat/ioat.o 00:01:54.907 CXX lib/trace_parser/trace.o 00:01:54.907 CC lib/util/base64.o 00:01:54.907 CC lib/util/bit_array.o 00:01:54.907 CC lib/util/cpuset.o 00:01:54.907 CC lib/util/crc16.o 00:01:54.907 CC lib/util/crc32.o 00:01:54.907 CC lib/util/crc32c.o 00:01:54.907 CC lib/util/crc32_ieee.o 00:01:54.907 CC lib/util/crc64.o 00:01:54.907 CC lib/util/dif.o 00:01:54.907 CC lib/util/fd.o 00:01:54.907 CC lib/util/fd_group.o 00:01:54.907 CC lib/util/file.o 00:01:54.907 CC lib/util/hexlify.o 00:01:54.907 CC lib/util/iov.o 00:01:54.907 CC lib/util/math.o 00:01:54.907 CC lib/util/net.o 00:01:54.907 CC lib/util/pipe.o 00:01:54.907 CC lib/util/strerror_tls.o 00:01:54.907 CC lib/util/string.o 00:01:54.907 CC lib/util/uuid.o 00:01:54.907 CC lib/util/xor.o 00:01:54.907 CC lib/util/zipf.o 00:01:54.907 CC lib/util/md5.o 00:01:54.907 CC lib/vfio_user/host/vfio_user_pci.o 00:01:54.907 CC lib/vfio_user/host/vfio_user.o 00:01:55.165 LIB libspdk_dma.a 00:01:55.165 SO libspdk_dma.so.5.0 00:01:55.165 SYMLINK libspdk_dma.so 00:01:55.165 LIB libspdk_ioat.a 00:01:55.424 SO libspdk_ioat.so.7.0 00:01:55.424 LIB libspdk_vfio_user.a 00:01:55.424 SYMLINK libspdk_ioat.so 00:01:55.424 SO libspdk_vfio_user.so.5.0 00:01:55.424 SYMLINK libspdk_vfio_user.so 00:01:55.682 LIB libspdk_util.a 00:01:55.682 SO libspdk_util.so.10.1 00:01:55.940 SYMLINK libspdk_util.so 00:01:55.940 CC lib/idxd/idxd.o 00:01:55.940 CC lib/conf/conf.o 00:01:55.940 CC lib/json/json_parse.o 00:01:55.940 CC lib/rdma_utils/rdma_utils.o 00:01:55.940 CC lib/vmd/vmd.o 00:01:55.940 CC lib/idxd/idxd_user.o 00:01:55.940 CC lib/json/json_util.o 00:01:55.940 CC lib/env_dpdk/env.o 00:01:55.940 CC lib/vmd/led.o 00:01:55.940 CC lib/idxd/idxd_kernel.o 00:01:55.940 CC lib/env_dpdk/memory.o 00:01:55.940 CC lib/json/json_write.o 00:01:55.940 CC lib/env_dpdk/pci.o 00:01:55.940 CC lib/env_dpdk/init.o 00:01:55.940 CC lib/env_dpdk/threads.o 00:01:55.940 CC lib/env_dpdk/pci_ioat.o 00:01:55.940 CC lib/env_dpdk/pci_virtio.o 00:01:55.940 CC lib/env_dpdk/pci_vmd.o 00:01:55.940 CC lib/env_dpdk/pci_idxd.o 00:01:55.940 CC lib/env_dpdk/pci_event.o 00:01:55.940 CC lib/env_dpdk/sigbus_handler.o 00:01:55.940 CC lib/env_dpdk/pci_dpdk.o 00:01:55.940 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:55.940 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:56.198 LIB libspdk_trace_parser.a 00:01:56.198 SO libspdk_trace_parser.so.6.0 00:01:56.198 SYMLINK libspdk_trace_parser.so 00:01:56.456 LIB libspdk_conf.a 00:01:56.456 SO libspdk_conf.so.6.0 00:01:56.456 LIB libspdk_rdma_utils.a 00:01:56.456 LIB libspdk_json.a 00:01:56.456 SO libspdk_rdma_utils.so.1.0 00:01:56.456 SYMLINK libspdk_conf.so 00:01:56.456 SO libspdk_json.so.6.0 00:01:56.456 SYMLINK libspdk_rdma_utils.so 00:01:56.456 SYMLINK libspdk_json.so 00:01:56.715 CC lib/rdma_provider/common.o 00:01:56.715 CC lib/rdma_provider/rdma_provider_verbs.o 00:01:56.715 CC lib/jsonrpc/jsonrpc_server.o 00:01:56.715 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:56.715 CC lib/jsonrpc/jsonrpc_client.o 00:01:56.715 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:56.973 LIB libspdk_idxd.a 00:01:56.973 LIB libspdk_rdma_provider.a 00:01:56.973 SO libspdk_idxd.so.12.1 00:01:56.973 SO libspdk_rdma_provider.so.7.0 00:01:56.973 LIB libspdk_vmd.a 00:01:56.973 SO libspdk_vmd.so.6.0 00:01:56.973 SYMLINK libspdk_idxd.so 00:01:56.973 SYMLINK libspdk_rdma_provider.so 00:01:56.973 LIB libspdk_jsonrpc.a 00:01:56.973 SO libspdk_jsonrpc.so.6.0 00:01:56.973 SYMLINK libspdk_vmd.so 00:01:56.973 SYMLINK libspdk_jsonrpc.so 00:01:57.232 CC lib/rpc/rpc.o 
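[editor's note] The LIB / SO / SYMLINK triplets threaded through these entries are SPDK's shared-library packaging: each static libspdk_*.a gets a versioned shared object (the number after SO, e.g. libspdk_log.so.7.1) plus an unversioned symlink. A quick post-build sanity check of those sonames might look like the sketch below; build/lib is an assumption about the output directory, since the log never prints where the libraries land.

    # List the soname embedded in each shared library the build produced.
    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    for so in build/lib/libspdk_*.so; do
        printf '%-40s ' "$so"
        readelf -d "$so" | awk '/SONAME/ {print $NF}'
    done

Each printed soname should match the version reported by the corresponding SO line in the log above.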
00:01:57.491 LIB libspdk_rpc.a 00:01:57.491 SO libspdk_rpc.so.6.0 00:01:57.491 SYMLINK libspdk_rpc.so 00:01:57.749 CC lib/trace/trace.o 00:01:57.749 CC lib/notify/notify.o 00:01:57.749 CC lib/trace/trace_flags.o 00:01:57.749 CC lib/notify/notify_rpc.o 00:01:57.749 CC lib/trace/trace_rpc.o 00:01:57.749 CC lib/keyring/keyring.o 00:01:57.749 CC lib/keyring/keyring_rpc.o 00:01:58.008 LIB libspdk_notify.a 00:01:58.008 SO libspdk_notify.so.6.0 00:01:58.008 SYMLINK libspdk_notify.so 00:01:58.008 LIB libspdk_keyring.a 00:01:58.008 SO libspdk_keyring.so.2.0 00:01:58.008 LIB libspdk_trace.a 00:01:58.008 SO libspdk_trace.so.11.0 00:01:58.008 SYMLINK libspdk_keyring.so 00:01:58.008 SYMLINK libspdk_trace.so 00:01:58.266 CC lib/sock/sock.o 00:01:58.266 CC lib/sock/sock_rpc.o 00:01:58.266 CC lib/thread/thread.o 00:01:58.266 CC lib/thread/iobuf.o 00:01:58.834 LIB libspdk_sock.a 00:01:58.834 SO libspdk_sock.so.10.0 00:01:58.834 SYMLINK libspdk_sock.so 00:01:59.092 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:59.092 CC lib/nvme/nvme_ctrlr.o 00:01:59.092 CC lib/nvme/nvme_fabric.o 00:01:59.092 CC lib/nvme/nvme_ns_cmd.o 00:01:59.092 CC lib/nvme/nvme_ns.o 00:01:59.092 CC lib/nvme/nvme_pcie_common.o 00:01:59.092 CC lib/nvme/nvme_pcie.o 00:01:59.092 LIB libspdk_env_dpdk.a 00:01:59.092 CC lib/nvme/nvme_qpair.o 00:01:59.093 CC lib/nvme/nvme.o 00:01:59.093 CC lib/nvme/nvme_quirks.o 00:01:59.093 CC lib/nvme/nvme_transport.o 00:01:59.093 CC lib/nvme/nvme_discovery.o 00:01:59.093 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:59.093 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:59.093 CC lib/nvme/nvme_tcp.o 00:01:59.093 CC lib/nvme/nvme_opal.o 00:01:59.093 CC lib/nvme/nvme_io_msg.o 00:01:59.093 CC lib/nvme/nvme_poll_group.o 00:01:59.093 CC lib/nvme/nvme_zns.o 00:01:59.093 CC lib/nvme/nvme_auth.o 00:01:59.093 CC lib/nvme/nvme_stubs.o 00:01:59.093 CC lib/nvme/nvme_rdma.o 00:01:59.093 CC lib/nvme/nvme_cuse.o 00:01:59.093 SO libspdk_env_dpdk.so.15.1 00:01:59.351 SYMLINK libspdk_env_dpdk.so 00:02:00.286 LIB libspdk_thread.a 00:02:00.286 SO libspdk_thread.so.11.0 00:02:00.544 SYMLINK libspdk_thread.so 00:02:00.544 CC lib/init/json_config.o 00:02:00.544 CC lib/blob/blobstore.o 00:02:00.544 CC lib/fsdev/fsdev.o 00:02:00.544 CC lib/virtio/virtio.o 00:02:00.544 CC lib/accel/accel.o 00:02:00.544 CC lib/init/subsystem.o 00:02:00.544 CC lib/blob/request.o 00:02:00.544 CC lib/virtio/virtio_vhost_user.o 00:02:00.544 CC lib/fsdev/fsdev_io.o 00:02:00.544 CC lib/blob/zeroes.o 00:02:00.544 CC lib/init/subsystem_rpc.o 00:02:00.544 CC lib/virtio/virtio_vfio_user.o 00:02:00.544 CC lib/accel/accel_rpc.o 00:02:00.544 CC lib/init/rpc.o 00:02:00.544 CC lib/blob/blob_bs_dev.o 00:02:00.544 CC lib/fsdev/fsdev_rpc.o 00:02:00.544 CC lib/virtio/virtio_pci.o 00:02:00.544 CC lib/accel/accel_sw.o 00:02:01.110 LIB libspdk_init.a 00:02:01.110 SO libspdk_init.so.6.0 00:02:01.110 SYMLINK libspdk_init.so 00:02:01.110 LIB libspdk_virtio.a 00:02:01.110 SO libspdk_virtio.so.7.0 00:02:01.110 SYMLINK libspdk_virtio.so 00:02:01.110 CC lib/event/app.o 00:02:01.110 CC lib/event/reactor.o 00:02:01.110 CC lib/event/log_rpc.o 00:02:01.110 CC lib/event/app_rpc.o 00:02:01.110 CC lib/event/scheduler_static.o 00:02:01.368 LIB libspdk_fsdev.a 00:02:01.626 SO libspdk_fsdev.so.2.0 00:02:01.626 SYMLINK libspdk_fsdev.so 00:02:01.626 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:01.887 LIB libspdk_event.a 00:02:01.887 SO libspdk_event.so.14.0 00:02:01.887 SYMLINK libspdk_event.so 00:02:02.147 LIB libspdk_nvme.a 00:02:02.147 LIB libspdk_accel.a 00:02:02.147 SO libspdk_accel.so.16.0 00:02:02.147 SYMLINK 
libspdk_accel.so 00:02:02.147 SO libspdk_nvme.so.15.0 00:02:02.406 CC lib/bdev/bdev.o 00:02:02.406 CC lib/bdev/bdev_rpc.o 00:02:02.406 CC lib/bdev/bdev_zone.o 00:02:02.406 CC lib/bdev/part.o 00:02:02.406 CC lib/bdev/scsi_nvme.o 00:02:02.406 SYMLINK libspdk_nvme.so 00:02:02.664 LIB libspdk_fuse_dispatcher.a 00:02:02.664 SO libspdk_fuse_dispatcher.so.1.0 00:02:02.664 SYMLINK libspdk_fuse_dispatcher.so 00:02:05.948 LIB libspdk_blob.a 00:02:05.948 SO libspdk_blob.so.11.0 00:02:05.948 SYMLINK libspdk_blob.so 00:02:05.948 CC lib/blobfs/blobfs.o 00:02:05.948 CC lib/blobfs/tree.o 00:02:05.948 CC lib/lvol/lvol.o 00:02:05.948 LIB libspdk_bdev.a 00:02:05.948 SO libspdk_bdev.so.17.0 00:02:05.948 SYMLINK libspdk_bdev.so 00:02:05.948 CC lib/ftl/ftl_core.o 00:02:05.948 CC lib/ublk/ublk.o 00:02:05.948 CC lib/nvmf/ctrlr.o 00:02:05.948 CC lib/ftl/ftl_init.o 00:02:05.948 CC lib/nvmf/ctrlr_discovery.o 00:02:05.948 CC lib/ublk/ublk_rpc.o 00:02:05.948 CC lib/ftl/ftl_layout.o 00:02:05.948 CC lib/nvmf/ctrlr_bdev.o 00:02:05.948 CC lib/ftl/ftl_debug.o 00:02:05.948 CC lib/nbd/nbd.o 00:02:05.948 CC lib/nvmf/subsystem.o 00:02:05.948 CC lib/ftl/ftl_io.o 00:02:05.948 CC lib/scsi/dev.o 00:02:05.948 CC lib/ftl/ftl_sb.o 00:02:05.948 CC lib/nvmf/nvmf.o 00:02:05.948 CC lib/nvmf/nvmf_rpc.o 00:02:05.948 CC lib/scsi/lun.o 00:02:05.948 CC lib/nbd/nbd_rpc.o 00:02:05.948 CC lib/ftl/ftl_l2p.o 00:02:05.948 CC lib/ftl/ftl_l2p_flat.o 00:02:05.948 CC lib/ftl/ftl_nv_cache.o 00:02:05.948 CC lib/scsi/port.o 00:02:05.948 CC lib/nvmf/transport.o 00:02:05.948 CC lib/nvmf/tcp.o 00:02:05.948 CC lib/scsi/scsi.o 00:02:05.948 CC lib/scsi/scsi_bdev.o 00:02:05.948 CC lib/ftl/ftl_band.o 00:02:05.948 CC lib/nvmf/stubs.o 00:02:05.948 CC lib/scsi/scsi_pr.o 00:02:05.948 CC lib/nvmf/mdns_server.o 00:02:05.948 CC lib/ftl/ftl_band_ops.o 00:02:05.948 CC lib/nvmf/rdma.o 00:02:05.948 CC lib/scsi/scsi_rpc.o 00:02:05.948 CC lib/ftl/ftl_writer.o 00:02:05.948 CC lib/ftl/ftl_rq.o 00:02:05.949 CC lib/nvmf/auth.o 00:02:05.949 CC lib/scsi/task.o 00:02:05.949 CC lib/ftl/ftl_reloc.o 00:02:05.949 CC lib/ftl/ftl_l2p_cache.o 00:02:05.949 CC lib/ftl/ftl_p2l.o 00:02:05.949 CC lib/ftl/ftl_p2l_log.o 00:02:05.949 CC lib/ftl/mngt/ftl_mngt.o 00:02:05.949 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:05.949 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:05.949 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:05.949 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:06.528 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:06.528 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:06.528 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:06.528 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:06.528 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:06.528 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:06.528 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:06.528 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:06.528 CC lib/ftl/utils/ftl_conf.o 00:02:06.528 CC lib/ftl/utils/ftl_md.o 00:02:06.528 CC lib/ftl/utils/ftl_mempool.o 00:02:06.528 CC lib/ftl/utils/ftl_bitmap.o 00:02:06.528 CC lib/ftl/utils/ftl_property.o 00:02:06.528 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:06.528 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:06.528 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:06.788 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:06.788 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:06.788 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:06.788 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:06.788 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:06.788 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:06.788 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:06.788 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:06.788 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 
00:02:06.788 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:06.788 CC lib/ftl/base/ftl_base_dev.o 00:02:07.046 CC lib/ftl/base/ftl_base_bdev.o 00:02:07.046 CC lib/ftl/ftl_trace.o 00:02:07.046 LIB libspdk_nbd.a 00:02:07.046 SO libspdk_nbd.so.7.0 00:02:07.304 LIB libspdk_scsi.a 00:02:07.304 SYMLINK libspdk_nbd.so 00:02:07.304 SO libspdk_scsi.so.9.0 00:02:07.304 LIB libspdk_blobfs.a 00:02:07.304 SYMLINK libspdk_scsi.so 00:02:07.304 LIB libspdk_ublk.a 00:02:07.304 SO libspdk_blobfs.so.10.0 00:02:07.304 SO libspdk_ublk.so.3.0 00:02:07.562 SYMLINK libspdk_blobfs.so 00:02:07.562 SYMLINK libspdk_ublk.so 00:02:07.562 CC lib/vhost/vhost.o 00:02:07.562 CC lib/iscsi/conn.o 00:02:07.562 CC lib/vhost/vhost_rpc.o 00:02:07.562 CC lib/vhost/vhost_scsi.o 00:02:07.562 CC lib/iscsi/init_grp.o 00:02:07.562 CC lib/iscsi/iscsi.o 00:02:07.562 CC lib/vhost/vhost_blk.o 00:02:07.562 CC lib/iscsi/param.o 00:02:07.562 CC lib/vhost/rte_vhost_user.o 00:02:07.562 CC lib/iscsi/portal_grp.o 00:02:07.562 CC lib/iscsi/tgt_node.o 00:02:07.562 CC lib/iscsi/iscsi_subsystem.o 00:02:07.562 CC lib/iscsi/iscsi_rpc.o 00:02:07.562 CC lib/iscsi/task.o 00:02:07.562 LIB libspdk_lvol.a 00:02:07.562 SO libspdk_lvol.so.10.0 00:02:07.820 SYMLINK libspdk_lvol.so 00:02:08.078 LIB libspdk_ftl.a 00:02:08.078 SO libspdk_ftl.so.9.0 00:02:08.644 SYMLINK libspdk_ftl.so 00:02:08.902 LIB libspdk_vhost.a 00:02:08.902 SO libspdk_vhost.so.8.0 00:02:09.160 SYMLINK libspdk_vhost.so 00:02:09.419 LIB libspdk_iscsi.a 00:02:09.419 SO libspdk_iscsi.so.8.0 00:02:09.678 SYMLINK libspdk_iscsi.so 00:02:09.678 LIB libspdk_nvmf.a 00:02:09.678 SO libspdk_nvmf.so.20.0 00:02:09.937 SYMLINK libspdk_nvmf.so 00:02:10.196 CC module/env_dpdk/env_dpdk_rpc.o 00:02:10.196 CC module/fsdev/aio/fsdev_aio.o 00:02:10.196 CC module/keyring/file/keyring.o 00:02:10.196 CC module/accel/error/accel_error.o 00:02:10.196 CC module/keyring/linux/keyring.o 00:02:10.196 CC module/accel/ioat/accel_ioat.o 00:02:10.196 CC module/sock/posix/posix.o 00:02:10.196 CC module/accel/iaa/accel_iaa.o 00:02:10.196 CC module/accel/dsa/accel_dsa.o 00:02:10.196 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:10.196 CC module/accel/error/accel_error_rpc.o 00:02:10.196 CC module/accel/ioat/accel_ioat_rpc.o 00:02:10.196 CC module/keyring/file/keyring_rpc.o 00:02:10.196 CC module/fsdev/aio/linux_aio_mgr.o 00:02:10.196 CC module/accel/dsa/accel_dsa_rpc.o 00:02:10.196 CC module/accel/iaa/accel_iaa_rpc.o 00:02:10.196 CC module/keyring/linux/keyring_rpc.o 00:02:10.196 CC module/scheduler/gscheduler/gscheduler.o 00:02:10.196 CC module/blob/bdev/blob_bdev.o 00:02:10.196 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:10.196 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:10.454 LIB libspdk_env_dpdk_rpc.a 00:02:10.455 SO libspdk_env_dpdk_rpc.so.6.0 00:02:10.455 SYMLINK libspdk_env_dpdk_rpc.so 00:02:10.455 LIB libspdk_keyring_file.a 00:02:10.455 LIB libspdk_keyring_linux.a 00:02:10.455 LIB libspdk_scheduler_gscheduler.a 00:02:10.455 LIB libspdk_scheduler_dpdk_governor.a 00:02:10.455 SO libspdk_keyring_linux.so.1.0 00:02:10.455 SO libspdk_keyring_file.so.2.0 00:02:10.455 SO libspdk_scheduler_gscheduler.so.4.0 00:02:10.455 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:10.455 LIB libspdk_accel_ioat.a 00:02:10.455 LIB libspdk_scheduler_dynamic.a 00:02:10.728 SYMLINK libspdk_keyring_file.so 00:02:10.728 SYMLINK libspdk_keyring_linux.so 00:02:10.728 SO libspdk_accel_ioat.so.6.0 00:02:10.728 SYMLINK libspdk_scheduler_gscheduler.so 00:02:10.728 SO libspdk_scheduler_dynamic.so.4.0 00:02:10.728 SYMLINK 
libspdk_scheduler_dpdk_governor.so 00:02:10.728 LIB libspdk_accel_error.a 00:02:10.728 LIB libspdk_accel_iaa.a 00:02:10.728 SO libspdk_accel_error.so.2.0 00:02:10.728 SO libspdk_accel_iaa.so.3.0 00:02:10.728 SYMLINK libspdk_scheduler_dynamic.so 00:02:10.728 SYMLINK libspdk_accel_ioat.so 00:02:10.728 SYMLINK libspdk_accel_error.so 00:02:10.728 LIB libspdk_blob_bdev.a 00:02:10.728 SYMLINK libspdk_accel_iaa.so 00:02:10.728 LIB libspdk_accel_dsa.a 00:02:10.728 SO libspdk_blob_bdev.so.11.0 00:02:10.728 SO libspdk_accel_dsa.so.5.0 00:02:10.728 SYMLINK libspdk_blob_bdev.so 00:02:10.728 SYMLINK libspdk_accel_dsa.so 00:02:11.029 CC module/bdev/delay/vbdev_delay.o 00:02:11.029 CC module/bdev/malloc/bdev_malloc.o 00:02:11.029 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:11.029 CC module/bdev/error/vbdev_error.o 00:02:11.029 CC module/bdev/error/vbdev_error_rpc.o 00:02:11.029 CC module/bdev/nvme/bdev_nvme.o 00:02:11.029 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:11.029 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:11.029 CC module/bdev/null/bdev_null.o 00:02:11.029 CC module/bdev/nvme/nvme_rpc.o 00:02:11.029 CC module/bdev/null/bdev_null_rpc.o 00:02:11.029 CC module/bdev/passthru/vbdev_passthru.o 00:02:11.029 CC module/bdev/gpt/gpt.o 00:02:11.029 CC module/bdev/nvme/bdev_mdns_client.o 00:02:11.029 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:11.029 CC module/bdev/nvme/vbdev_opal.o 00:02:11.029 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:11.029 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:11.029 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:11.029 CC module/bdev/gpt/vbdev_gpt.o 00:02:11.029 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:11.029 CC module/blobfs/bdev/blobfs_bdev.o 00:02:11.029 CC module/bdev/split/vbdev_split.o 00:02:11.029 CC module/bdev/lvol/vbdev_lvol.o 00:02:11.029 CC module/bdev/aio/bdev_aio_rpc.o 00:02:11.029 CC module/bdev/raid/bdev_raid.o 00:02:11.029 CC module/bdev/aio/bdev_aio.o 00:02:11.029 CC module/bdev/split/vbdev_split_rpc.o 00:02:11.029 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:11.029 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:11.029 CC module/bdev/raid/bdev_raid_rpc.o 00:02:11.029 CC module/bdev/raid/bdev_raid_sb.o 00:02:11.029 CC module/bdev/raid/raid0.o 00:02:11.029 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:11.029 CC module/bdev/raid/raid1.o 00:02:11.029 CC module/bdev/ftl/bdev_ftl.o 00:02:11.029 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:11.029 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:11.029 CC module/bdev/raid/concat.o 00:02:11.029 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:11.029 CC module/bdev/iscsi/bdev_iscsi.o 00:02:11.029 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:11.343 LIB libspdk_sock_posix.a 00:02:11.343 LIB libspdk_blobfs_bdev.a 00:02:11.343 SO libspdk_sock_posix.so.6.0 00:02:11.602 SO libspdk_blobfs_bdev.so.6.0 00:02:11.602 LIB libspdk_fsdev_aio.a 00:02:11.602 SYMLINK libspdk_sock_posix.so 00:02:11.602 SO libspdk_fsdev_aio.so.1.0 00:02:11.602 LIB libspdk_bdev_split.a 00:02:11.602 SYMLINK libspdk_blobfs_bdev.so 00:02:11.602 LIB libspdk_bdev_null.a 00:02:11.602 SO libspdk_bdev_null.so.6.0 00:02:11.602 SO libspdk_bdev_split.so.6.0 00:02:11.602 SYMLINK libspdk_fsdev_aio.so 00:02:11.602 LIB libspdk_bdev_ftl.a 00:02:11.602 LIB libspdk_bdev_error.a 00:02:11.602 LIB libspdk_bdev_passthru.a 00:02:11.602 SYMLINK libspdk_bdev_null.so 00:02:11.602 SYMLINK libspdk_bdev_split.so 00:02:11.602 LIB libspdk_bdev_gpt.a 00:02:11.602 SO libspdk_bdev_error.so.6.0 00:02:11.602 SO libspdk_bdev_ftl.so.6.0 00:02:11.602 LIB libspdk_bdev_zone_block.a 
00:02:11.602 SO libspdk_bdev_passthru.so.6.0 00:02:11.602 LIB libspdk_bdev_aio.a 00:02:11.602 SO libspdk_bdev_gpt.so.6.0 00:02:11.602 SO libspdk_bdev_zone_block.so.6.0 00:02:11.602 SO libspdk_bdev_aio.so.6.0 00:02:11.602 SYMLINK libspdk_bdev_error.so 00:02:11.602 SYMLINK libspdk_bdev_ftl.so 00:02:11.602 LIB libspdk_bdev_delay.a 00:02:11.602 SYMLINK libspdk_bdev_passthru.so 00:02:11.602 SYMLINK libspdk_bdev_gpt.so 00:02:11.602 LIB libspdk_bdev_malloc.a 00:02:11.602 SYMLINK libspdk_bdev_zone_block.so 00:02:11.602 SO libspdk_bdev_delay.so.6.0 00:02:11.860 SYMLINK libspdk_bdev_aio.so 00:02:11.860 SO libspdk_bdev_malloc.so.6.0 00:02:11.860 LIB libspdk_bdev_iscsi.a 00:02:11.860 LIB libspdk_bdev_lvol.a 00:02:11.860 SYMLINK libspdk_bdev_delay.so 00:02:11.860 SO libspdk_bdev_iscsi.so.6.0 00:02:11.860 SO libspdk_bdev_lvol.so.6.0 00:02:11.860 SYMLINK libspdk_bdev_malloc.so 00:02:11.860 SYMLINK libspdk_bdev_iscsi.so 00:02:11.860 SYMLINK libspdk_bdev_lvol.so 00:02:11.860 LIB libspdk_bdev_virtio.a 00:02:11.860 SO libspdk_bdev_virtio.so.6.0 00:02:12.118 SYMLINK libspdk_bdev_virtio.so 00:02:12.684 LIB libspdk_bdev_raid.a 00:02:12.684 SO libspdk_bdev_raid.so.6.0 00:02:12.684 SYMLINK libspdk_bdev_raid.so 00:02:14.582 LIB libspdk_bdev_nvme.a 00:02:14.582 SO libspdk_bdev_nvme.so.7.1 00:02:14.840 SYMLINK libspdk_bdev_nvme.so 00:02:15.098 CC module/event/subsystems/scheduler/scheduler.o 00:02:15.098 CC module/event/subsystems/vmd/vmd.o 00:02:15.098 CC module/event/subsystems/sock/sock.o 00:02:15.098 CC module/event/subsystems/fsdev/fsdev.o 00:02:15.098 CC module/event/subsystems/iobuf/iobuf.o 00:02:15.098 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:15.098 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:15.098 CC module/event/subsystems/keyring/keyring.o 00:02:15.098 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:15.356 LIB libspdk_event_keyring.a 00:02:15.356 LIB libspdk_event_vhost_blk.a 00:02:15.356 LIB libspdk_event_fsdev.a 00:02:15.356 LIB libspdk_event_scheduler.a 00:02:15.356 LIB libspdk_event_vmd.a 00:02:15.356 LIB libspdk_event_sock.a 00:02:15.356 SO libspdk_event_keyring.so.1.0 00:02:15.356 SO libspdk_event_vhost_blk.so.3.0 00:02:15.356 SO libspdk_event_fsdev.so.1.0 00:02:15.356 SO libspdk_event_scheduler.so.4.0 00:02:15.356 SO libspdk_event_sock.so.5.0 00:02:15.356 SO libspdk_event_vmd.so.6.0 00:02:15.356 LIB libspdk_event_iobuf.a 00:02:15.356 SO libspdk_event_iobuf.so.3.0 00:02:15.356 SYMLINK libspdk_event_keyring.so 00:02:15.356 SYMLINK libspdk_event_vhost_blk.so 00:02:15.356 SYMLINK libspdk_event_fsdev.so 00:02:15.356 SYMLINK libspdk_event_scheduler.so 00:02:15.356 SYMLINK libspdk_event_sock.so 00:02:15.356 SYMLINK libspdk_event_vmd.so 00:02:15.356 SYMLINK libspdk_event_iobuf.so 00:02:15.614 CC module/event/subsystems/accel/accel.o 00:02:15.614 LIB libspdk_event_accel.a 00:02:15.614 SO libspdk_event_accel.so.6.0 00:02:15.873 SYMLINK libspdk_event_accel.so 00:02:15.873 CC module/event/subsystems/bdev/bdev.o 00:02:16.132 LIB libspdk_event_bdev.a 00:02:16.132 SO libspdk_event_bdev.so.6.0 00:02:16.132 SYMLINK libspdk_event_bdev.so 00:02:16.390 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:16.390 CC module/event/subsystems/nbd/nbd.o 00:02:16.390 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:16.390 CC module/event/subsystems/ublk/ublk.o 00:02:16.390 CC module/event/subsystems/scsi/scsi.o 00:02:16.390 LIB libspdk_event_ublk.a 00:02:16.390 LIB libspdk_event_nbd.a 00:02:16.648 LIB libspdk_event_scsi.a 00:02:16.649 SO libspdk_event_ublk.so.3.0 00:02:16.649 SO libspdk_event_nbd.so.6.0 
00:02:16.649 SO libspdk_event_scsi.so.6.0 00:02:16.649 SYMLINK libspdk_event_ublk.so 00:02:16.649 SYMLINK libspdk_event_nbd.so 00:02:16.649 SYMLINK libspdk_event_scsi.so 00:02:16.649 LIB libspdk_event_nvmf.a 00:02:16.649 SO libspdk_event_nvmf.so.6.0 00:02:16.649 SYMLINK libspdk_event_nvmf.so 00:02:16.649 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:16.907 CC module/event/subsystems/iscsi/iscsi.o 00:02:16.907 LIB libspdk_event_vhost_scsi.a 00:02:16.907 SO libspdk_event_vhost_scsi.so.3.0 00:02:16.907 LIB libspdk_event_iscsi.a 00:02:16.907 SO libspdk_event_iscsi.so.6.0 00:02:16.907 SYMLINK libspdk_event_vhost_scsi.so 00:02:16.907 SYMLINK libspdk_event_iscsi.so 00:02:17.169 SO libspdk.so.6.0 00:02:17.169 SYMLINK libspdk.so 00:02:17.433 CXX app/trace/trace.o 00:02:17.433 CC app/trace_record/trace_record.o 00:02:17.433 CC app/spdk_top/spdk_top.o 00:02:17.433 CC test/rpc_client/rpc_client_test.o 00:02:17.433 CC app/spdk_nvme_identify/identify.o 00:02:17.433 CC app/spdk_lspci/spdk_lspci.o 00:02:17.433 CC app/spdk_nvme_discover/discovery_aer.o 00:02:17.433 CC app/spdk_nvme_perf/perf.o 00:02:17.433 TEST_HEADER include/spdk/accel.h 00:02:17.433 TEST_HEADER include/spdk/accel_module.h 00:02:17.433 TEST_HEADER include/spdk/assert.h 00:02:17.433 TEST_HEADER include/spdk/barrier.h 00:02:17.433 TEST_HEADER include/spdk/base64.h 00:02:17.433 TEST_HEADER include/spdk/bdev.h 00:02:17.433 TEST_HEADER include/spdk/bdev_module.h 00:02:17.433 TEST_HEADER include/spdk/bdev_zone.h 00:02:17.433 TEST_HEADER include/spdk/bit_array.h 00:02:17.433 TEST_HEADER include/spdk/bit_pool.h 00:02:17.433 TEST_HEADER include/spdk/blob_bdev.h 00:02:17.433 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:17.433 TEST_HEADER include/spdk/blobfs.h 00:02:17.433 TEST_HEADER include/spdk/blob.h 00:02:17.433 TEST_HEADER include/spdk/conf.h 00:02:17.433 TEST_HEADER include/spdk/config.h 00:02:17.433 TEST_HEADER include/spdk/cpuset.h 00:02:17.433 TEST_HEADER include/spdk/crc16.h 00:02:17.433 TEST_HEADER include/spdk/crc32.h 00:02:17.433 TEST_HEADER include/spdk/crc64.h 00:02:17.433 TEST_HEADER include/spdk/dif.h 00:02:17.433 TEST_HEADER include/spdk/dma.h 00:02:17.433 TEST_HEADER include/spdk/endian.h 00:02:17.433 TEST_HEADER include/spdk/env_dpdk.h 00:02:17.433 TEST_HEADER include/spdk/env.h 00:02:17.433 TEST_HEADER include/spdk/event.h 00:02:17.433 TEST_HEADER include/spdk/fd_group.h 00:02:17.433 TEST_HEADER include/spdk/fd.h 00:02:17.433 TEST_HEADER include/spdk/file.h 00:02:17.433 TEST_HEADER include/spdk/fsdev.h 00:02:17.433 TEST_HEADER include/spdk/fsdev_module.h 00:02:17.433 TEST_HEADER include/spdk/ftl.h 00:02:17.433 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:17.433 TEST_HEADER include/spdk/gpt_spec.h 00:02:17.433 TEST_HEADER include/spdk/hexlify.h 00:02:17.433 TEST_HEADER include/spdk/histogram_data.h 00:02:17.433 TEST_HEADER include/spdk/idxd_spec.h 00:02:17.433 TEST_HEADER include/spdk/idxd.h 00:02:17.433 TEST_HEADER include/spdk/init.h 00:02:17.433 TEST_HEADER include/spdk/ioat.h 00:02:17.433 TEST_HEADER include/spdk/ioat_spec.h 00:02:17.433 TEST_HEADER include/spdk/iscsi_spec.h 00:02:17.434 TEST_HEADER include/spdk/json.h 00:02:17.434 TEST_HEADER include/spdk/jsonrpc.h 00:02:17.434 TEST_HEADER include/spdk/keyring.h 00:02:17.434 TEST_HEADER include/spdk/keyring_module.h 00:02:17.434 TEST_HEADER include/spdk/likely.h 00:02:17.434 TEST_HEADER include/spdk/log.h 00:02:17.434 TEST_HEADER include/spdk/lvol.h 00:02:17.434 TEST_HEADER include/spdk/md5.h 00:02:17.434 TEST_HEADER include/spdk/memory.h 00:02:17.434 
TEST_HEADER include/spdk/mmio.h 00:02:17.434 TEST_HEADER include/spdk/nbd.h 00:02:17.434 TEST_HEADER include/spdk/net.h 00:02:17.434 TEST_HEADER include/spdk/notify.h 00:02:17.434 TEST_HEADER include/spdk/nvme.h 00:02:17.434 TEST_HEADER include/spdk/nvme_intel.h 00:02:17.434 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:17.434 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:17.434 TEST_HEADER include/spdk/nvme_spec.h 00:02:17.434 TEST_HEADER include/spdk/nvme_zns.h 00:02:17.434 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:17.434 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:17.434 TEST_HEADER include/spdk/nvmf.h 00:02:17.434 TEST_HEADER include/spdk/nvmf_transport.h 00:02:17.434 TEST_HEADER include/spdk/nvmf_spec.h 00:02:17.434 TEST_HEADER include/spdk/opal.h 00:02:17.434 TEST_HEADER include/spdk/opal_spec.h 00:02:17.434 TEST_HEADER include/spdk/pci_ids.h 00:02:17.434 TEST_HEADER include/spdk/pipe.h 00:02:17.434 TEST_HEADER include/spdk/queue.h 00:02:17.434 TEST_HEADER include/spdk/reduce.h 00:02:17.434 TEST_HEADER include/spdk/scheduler.h 00:02:17.434 TEST_HEADER include/spdk/rpc.h 00:02:17.434 TEST_HEADER include/spdk/scsi.h 00:02:17.434 TEST_HEADER include/spdk/scsi_spec.h 00:02:17.434 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:17.434 TEST_HEADER include/spdk/sock.h 00:02:17.434 CC app/spdk_dd/spdk_dd.o 00:02:17.434 TEST_HEADER include/spdk/stdinc.h 00:02:17.434 TEST_HEADER include/spdk/string.h 00:02:17.434 TEST_HEADER include/spdk/thread.h 00:02:17.434 TEST_HEADER include/spdk/trace.h 00:02:17.434 TEST_HEADER include/spdk/trace_parser.h 00:02:17.434 TEST_HEADER include/spdk/tree.h 00:02:17.434 TEST_HEADER include/spdk/ublk.h 00:02:17.434 TEST_HEADER include/spdk/util.h 00:02:17.434 TEST_HEADER include/spdk/uuid.h 00:02:17.434 TEST_HEADER include/spdk/version.h 00:02:17.434 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:17.434 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:17.434 TEST_HEADER include/spdk/vhost.h 00:02:17.434 TEST_HEADER include/spdk/vmd.h 00:02:17.434 TEST_HEADER include/spdk/xor.h 00:02:17.434 TEST_HEADER include/spdk/zipf.h 00:02:17.434 CXX test/cpp_headers/accel.o 00:02:17.434 CXX test/cpp_headers/assert.o 00:02:17.434 CXX test/cpp_headers/accel_module.o 00:02:17.434 CXX test/cpp_headers/barrier.o 00:02:17.434 CXX test/cpp_headers/base64.o 00:02:17.434 CXX test/cpp_headers/bdev.o 00:02:17.434 CXX test/cpp_headers/bdev_zone.o 00:02:17.434 CXX test/cpp_headers/bdev_module.o 00:02:17.434 CXX test/cpp_headers/bit_array.o 00:02:17.434 CXX test/cpp_headers/blob_bdev.o 00:02:17.434 CXX test/cpp_headers/bit_pool.o 00:02:17.434 CC app/nvmf_tgt/nvmf_main.o 00:02:17.434 CXX test/cpp_headers/blobfs_bdev.o 00:02:17.434 CXX test/cpp_headers/blobfs.o 00:02:17.434 CXX test/cpp_headers/blob.o 00:02:17.434 CXX test/cpp_headers/conf.o 00:02:17.434 CXX test/cpp_headers/config.o 00:02:17.434 CXX test/cpp_headers/cpuset.o 00:02:17.434 CXX test/cpp_headers/crc16.o 00:02:17.434 CC app/iscsi_tgt/iscsi_tgt.o 00:02:17.434 CC app/spdk_tgt/spdk_tgt.o 00:02:17.434 CC examples/ioat/verify/verify.o 00:02:17.434 CXX test/cpp_headers/crc32.o 00:02:17.434 CC test/app/jsoncat/jsoncat.o 00:02:17.434 CC test/app/histogram_perf/histogram_perf.o 00:02:17.434 CC test/thread/poller_perf/poller_perf.o 00:02:17.434 CC examples/ioat/perf/perf.o 00:02:17.434 CC test/env/vtophys/vtophys.o 00:02:17.434 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:17.434 CC test/app/stub/stub.o 00:02:17.434 CC test/env/pci/pci_ut.o 00:02:17.434 CC examples/util/zipf/zipf.o 00:02:17.434 CC test/env/memory/memory_ut.o 
00:02:17.434 CC app/fio/nvme/fio_plugin.o 00:02:17.434 CC test/dma/test_dma/test_dma.o 00:02:17.695 CC app/fio/bdev/fio_plugin.o 00:02:17.695 CC test/app/bdev_svc/bdev_svc.o 00:02:17.695 LINK spdk_lspci 00:02:17.695 CC test/env/mem_callbacks/mem_callbacks.o 00:02:17.695 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:17.695 LINK rpc_client_test 00:02:17.695 LINK spdk_nvme_discover 00:02:17.695 LINK jsoncat 00:02:17.695 LINK interrupt_tgt 00:02:17.958 LINK histogram_perf 00:02:17.958 LINK nvmf_tgt 00:02:17.958 LINK poller_perf 00:02:17.958 LINK vtophys 00:02:17.959 CXX test/cpp_headers/crc64.o 00:02:17.959 LINK env_dpdk_post_init 00:02:17.959 CXX test/cpp_headers/dif.o 00:02:17.959 LINK zipf 00:02:17.959 CXX test/cpp_headers/dma.o 00:02:17.959 CXX test/cpp_headers/endian.o 00:02:17.959 CXX test/cpp_headers/env_dpdk.o 00:02:17.959 CXX test/cpp_headers/env.o 00:02:17.959 CXX test/cpp_headers/event.o 00:02:17.959 CXX test/cpp_headers/fd_group.o 00:02:17.959 CXX test/cpp_headers/fd.o 00:02:17.959 LINK iscsi_tgt 00:02:17.959 CXX test/cpp_headers/file.o 00:02:17.959 LINK spdk_tgt 00:02:17.959 CXX test/cpp_headers/fsdev.o 00:02:17.959 LINK spdk_trace_record 00:02:17.959 CXX test/cpp_headers/fsdev_module.o 00:02:17.959 LINK stub 00:02:17.959 CXX test/cpp_headers/ftl.o 00:02:17.959 CXX test/cpp_headers/fuse_dispatcher.o 00:02:17.959 CXX test/cpp_headers/gpt_spec.o 00:02:17.959 CXX test/cpp_headers/hexlify.o 00:02:17.959 CXX test/cpp_headers/histogram_data.o 00:02:17.959 LINK bdev_svc 00:02:17.959 LINK verify 00:02:17.959 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:17.959 LINK ioat_perf 00:02:17.959 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:17.959 CXX test/cpp_headers/idxd.o 00:02:18.219 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:18.219 CXX test/cpp_headers/idxd_spec.o 00:02:18.219 CXX test/cpp_headers/init.o 00:02:18.219 CXX test/cpp_headers/ioat.o 00:02:18.219 CXX test/cpp_headers/ioat_spec.o 00:02:18.219 CXX test/cpp_headers/iscsi_spec.o 00:02:18.219 CXX test/cpp_headers/json.o 00:02:18.219 LINK spdk_dd 00:02:18.219 CXX test/cpp_headers/jsonrpc.o 00:02:18.219 CXX test/cpp_headers/keyring.o 00:02:18.219 CXX test/cpp_headers/keyring_module.o 00:02:18.219 CXX test/cpp_headers/likely.o 00:02:18.219 CXX test/cpp_headers/log.o 00:02:18.219 CXX test/cpp_headers/lvol.o 00:02:18.219 CXX test/cpp_headers/md5.o 00:02:18.219 CXX test/cpp_headers/memory.o 00:02:18.219 CXX test/cpp_headers/mmio.o 00:02:18.219 CXX test/cpp_headers/nbd.o 00:02:18.219 CXX test/cpp_headers/net.o 00:02:18.486 CXX test/cpp_headers/notify.o 00:02:18.486 LINK spdk_trace 00:02:18.486 CXX test/cpp_headers/nvme.o 00:02:18.486 CXX test/cpp_headers/nvme_intel.o 00:02:18.486 CXX test/cpp_headers/nvme_ocssd.o 00:02:18.486 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:18.486 CXX test/cpp_headers/nvme_spec.o 00:02:18.486 CXX test/cpp_headers/nvme_zns.o 00:02:18.486 CXX test/cpp_headers/nvmf_cmd.o 00:02:18.486 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:18.486 LINK pci_ut 00:02:18.486 CC test/event/reactor/reactor.o 00:02:18.486 CC test/event/reactor_perf/reactor_perf.o 00:02:18.486 CXX test/cpp_headers/nvmf.o 00:02:18.486 CC test/event/event_perf/event_perf.o 00:02:18.486 CC test/event/app_repeat/app_repeat.o 00:02:18.486 CXX test/cpp_headers/nvmf_spec.o 00:02:18.486 CXX test/cpp_headers/nvmf_transport.o 00:02:18.486 CXX test/cpp_headers/opal.o 00:02:18.486 CC examples/sock/hello_world/hello_sock.o 00:02:18.486 CXX test/cpp_headers/opal_spec.o 00:02:18.749 CC examples/vmd/lsvmd/lsvmd.o 00:02:18.749 CC examples/idxd/perf/perf.o 
00:02:18.749 CC test/event/scheduler/scheduler.o 00:02:18.749 CXX test/cpp_headers/pci_ids.o 00:02:18.749 CC examples/vmd/led/led.o 00:02:18.749 CC examples/thread/thread/thread_ex.o 00:02:18.749 CXX test/cpp_headers/pipe.o 00:02:18.749 CXX test/cpp_headers/queue.o 00:02:18.749 LINK test_dma 00:02:18.749 CXX test/cpp_headers/reduce.o 00:02:18.749 CXX test/cpp_headers/rpc.o 00:02:18.749 LINK spdk_bdev 00:02:18.749 LINK nvme_fuzz 00:02:18.749 CXX test/cpp_headers/scheduler.o 00:02:18.749 CXX test/cpp_headers/scsi.o 00:02:18.749 CXX test/cpp_headers/scsi_spec.o 00:02:18.749 CXX test/cpp_headers/sock.o 00:02:18.749 CXX test/cpp_headers/stdinc.o 00:02:18.749 CXX test/cpp_headers/string.o 00:02:18.749 CXX test/cpp_headers/thread.o 00:02:18.749 CXX test/cpp_headers/trace.o 00:02:18.749 LINK reactor 00:02:18.749 CXX test/cpp_headers/trace_parser.o 00:02:18.749 LINK reactor_perf 00:02:18.749 CXX test/cpp_headers/tree.o 00:02:18.749 CXX test/cpp_headers/ublk.o 00:02:18.749 LINK event_perf 00:02:18.749 CXX test/cpp_headers/util.o 00:02:19.008 LINK mem_callbacks 00:02:19.008 LINK app_repeat 00:02:19.008 CXX test/cpp_headers/uuid.o 00:02:19.008 LINK spdk_nvme 00:02:19.008 CXX test/cpp_headers/version.o 00:02:19.008 CXX test/cpp_headers/vfio_user_pci.o 00:02:19.008 LINK lsvmd 00:02:19.008 CXX test/cpp_headers/vfio_user_spec.o 00:02:19.008 CC app/vhost/vhost.o 00:02:19.008 CXX test/cpp_headers/vhost.o 00:02:19.008 CXX test/cpp_headers/vmd.o 00:02:19.008 CXX test/cpp_headers/xor.o 00:02:19.008 CXX test/cpp_headers/zipf.o 00:02:19.008 LINK led 00:02:19.008 LINK vhost_fuzz 00:02:19.267 LINK scheduler 00:02:19.267 LINK thread 00:02:19.267 LINK hello_sock 00:02:19.267 LINK vhost 00:02:19.267 LINK spdk_nvme_perf 00:02:19.267 CC test/nvme/sgl/sgl.o 00:02:19.267 CC test/nvme/reserve/reserve.o 00:02:19.267 CC test/nvme/aer/aer.o 00:02:19.267 CC test/nvme/overhead/overhead.o 00:02:19.267 CC test/nvme/connect_stress/connect_stress.o 00:02:19.267 CC test/nvme/reset/reset.o 00:02:19.267 CC test/nvme/e2edp/nvme_dp.o 00:02:19.267 CC test/nvme/simple_copy/simple_copy.o 00:02:19.267 CC test/nvme/fdp/fdp.o 00:02:19.267 CC test/nvme/compliance/nvme_compliance.o 00:02:19.267 CC test/nvme/err_injection/err_injection.o 00:02:19.267 CC test/nvme/cuse/cuse.o 00:02:19.267 CC test/nvme/fused_ordering/fused_ordering.o 00:02:19.267 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:19.267 CC test/nvme/startup/startup.o 00:02:19.267 CC test/nvme/boot_partition/boot_partition.o 00:02:19.526 LINK spdk_nvme_identify 00:02:19.526 CC test/blobfs/mkfs/mkfs.o 00:02:19.526 CC test/accel/dif/dif.o 00:02:19.526 LINK spdk_top 00:02:19.526 LINK idxd_perf 00:02:19.526 CC test/lvol/esnap/esnap.o 00:02:19.526 CC examples/nvme/hello_world/hello_world.o 00:02:19.526 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:19.526 CC examples/nvme/abort/abort.o 00:02:19.526 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:19.526 CC examples/nvme/reconnect/reconnect.o 00:02:19.526 LINK boot_partition 00:02:19.526 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:19.526 CC examples/nvme/arbitration/arbitration.o 00:02:19.526 CC examples/nvme/hotplug/hotplug.o 00:02:19.785 LINK startup 00:02:19.785 LINK err_injection 00:02:19.785 CC examples/accel/perf/accel_perf.o 00:02:19.785 LINK doorbell_aers 00:02:19.785 CC examples/blob/cli/blobcli.o 00:02:19.785 LINK fused_ordering 00:02:19.785 LINK connect_stress 00:02:19.785 LINK reserve 00:02:19.785 CC examples/blob/hello_world/hello_blob.o 00:02:19.785 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:19.785 LINK sgl 
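[editor's note] The TEST_HEADER / CXX test/cpp_headers pairs that finish in the entries above appear to be SPDK's public-header self-sufficiency check: every include/spdk/*.h is compiled on its own as C++ to confirm it pulls in everything it needs. A hand-rolled sketch of the same idea is below; the compiler flags are assumptions, as the real test/cpp_headers build rules are not shown in this log.

    # Compile each public SPDK header standalone as C++ and flag any that are not self-contained.
    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    for h in include/spdk/*.h; do
        echo "#include <spdk/$(basename "$h")>" | \
            g++ -Iinclude -x c++ -c - -o /dev/null 2>/dev/null || echo "FAILED: $h"
    done

Any header reported as FAILED here would correspond to a broken TEST_HEADER entry in a run like the one logged above.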
00:02:19.785 LINK mkfs 00:02:19.785 LINK nvme_dp 00:02:19.785 LINK overhead 00:02:19.785 LINK memory_ut 00:02:19.785 LINK fdp 00:02:20.044 LINK simple_copy 00:02:20.044 LINK pmr_persistence 00:02:20.044 LINK reset 00:02:20.044 LINK hotplug 00:02:20.044 LINK aer 00:02:20.044 LINK nvme_compliance 00:02:20.044 LINK cmb_copy 00:02:20.044 LINK hello_blob 00:02:20.044 LINK hello_world 00:02:20.044 LINK hello_fsdev 00:02:20.302 LINK abort 00:02:20.302 LINK reconnect 00:02:20.302 LINK arbitration 00:02:20.302 LINK accel_perf 00:02:20.302 LINK blobcli 00:02:20.560 LINK nvme_manage 00:02:20.560 LINK dif 00:02:20.819 CC examples/bdev/hello_world/hello_bdev.o 00:02:20.819 CC examples/bdev/bdevperf/bdevperf.o 00:02:21.077 CC test/bdev/bdevio/bdevio.o 00:02:21.077 LINK hello_bdev 00:02:21.077 LINK iscsi_fuzz 00:02:21.336 LINK cuse 00:02:21.336 LINK bdevio 00:02:21.903 LINK bdevperf 00:02:22.161 CC examples/nvmf/nvmf/nvmf.o 00:02:22.728 LINK nvmf 00:02:26.919 LINK esnap 00:02:26.919 00:02:26.919 real 1m21.120s 00:02:26.919 user 13m8.883s 00:02:26.919 sys 2m36.012s 00:02:26.919 09:02:31 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:26.919 09:02:31 make -- common/autotest_common.sh@10 -- $ set +x 00:02:26.919 ************************************ 00:02:26.919 END TEST make 00:02:26.919 ************************************ 00:02:26.919 09:02:31 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:26.919 09:02:31 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:26.919 09:02:31 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:26.919 09:02:31 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:26.919 09:02:31 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:26.919 09:02:31 -- pm/common@44 -- $ pid=2741552 00:02:26.919 09:02:31 -- pm/common@50 -- $ kill -TERM 2741552 00:02:26.919 09:02:31 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:26.919 09:02:31 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:26.919 09:02:31 -- pm/common@44 -- $ pid=2741554 00:02:26.919 09:02:31 -- pm/common@50 -- $ kill -TERM 2741554 00:02:26.919 09:02:31 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:26.919 09:02:31 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:26.919 09:02:31 -- pm/common@44 -- $ pid=2741556 00:02:26.919 09:02:31 -- pm/common@50 -- $ kill -TERM 2741556 00:02:26.919 09:02:31 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:26.919 09:02:31 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:26.919 09:02:31 -- pm/common@44 -- $ pid=2741586 00:02:26.919 09:02:31 -- pm/common@50 -- $ sudo -E kill -TERM 2741586 00:02:26.919 09:02:31 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:02:26.919 09:02:31 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:26.919 09:02:31 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:02:26.919 09:02:31 -- common/autotest_common.sh@1693 -- # lcov --version 00:02:26.919 09:02:31 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:02:26.919 09:02:31 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:02:26.919 09:02:31 -- 
scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:26.920 09:02:31 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:26.920 09:02:31 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:26.920 09:02:31 -- scripts/common.sh@336 -- # IFS=.-: 00:02:26.920 09:02:31 -- scripts/common.sh@336 -- # read -ra ver1 00:02:26.920 09:02:31 -- scripts/common.sh@337 -- # IFS=.-: 00:02:26.920 09:02:31 -- scripts/common.sh@337 -- # read -ra ver2 00:02:26.920 09:02:31 -- scripts/common.sh@338 -- # local 'op=<' 00:02:26.920 09:02:31 -- scripts/common.sh@340 -- # ver1_l=2 00:02:26.920 09:02:31 -- scripts/common.sh@341 -- # ver2_l=1 00:02:26.920 09:02:31 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:26.920 09:02:31 -- scripts/common.sh@344 -- # case "$op" in 00:02:26.920 09:02:31 -- scripts/common.sh@345 -- # : 1 00:02:26.920 09:02:31 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:26.920 09:02:31 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:26.920 09:02:31 -- scripts/common.sh@365 -- # decimal 1 00:02:26.920 09:02:31 -- scripts/common.sh@353 -- # local d=1 00:02:26.920 09:02:31 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:26.920 09:02:31 -- scripts/common.sh@355 -- # echo 1 00:02:26.920 09:02:31 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:26.920 09:02:31 -- scripts/common.sh@366 -- # decimal 2 00:02:26.920 09:02:31 -- scripts/common.sh@353 -- # local d=2 00:02:26.920 09:02:31 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:26.920 09:02:31 -- scripts/common.sh@355 -- # echo 2 00:02:26.920 09:02:31 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:26.920 09:02:31 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:26.920 09:02:31 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:26.920 09:02:31 -- scripts/common.sh@368 -- # return 0 00:02:26.920 09:02:31 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:26.920 09:02:31 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:02:26.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:26.920 --rc genhtml_branch_coverage=1 00:02:26.920 --rc genhtml_function_coverage=1 00:02:26.920 --rc genhtml_legend=1 00:02:26.920 --rc geninfo_all_blocks=1 00:02:26.920 --rc geninfo_unexecuted_blocks=1 00:02:26.920 00:02:26.920 ' 00:02:26.920 09:02:31 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:02:26.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:26.920 --rc genhtml_branch_coverage=1 00:02:26.920 --rc genhtml_function_coverage=1 00:02:26.920 --rc genhtml_legend=1 00:02:26.920 --rc geninfo_all_blocks=1 00:02:26.920 --rc geninfo_unexecuted_blocks=1 00:02:26.920 00:02:26.920 ' 00:02:26.920 09:02:31 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:02:26.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:26.920 --rc genhtml_branch_coverage=1 00:02:26.920 --rc genhtml_function_coverage=1 00:02:26.920 --rc genhtml_legend=1 00:02:26.920 --rc geninfo_all_blocks=1 00:02:26.920 --rc geninfo_unexecuted_blocks=1 00:02:26.920 00:02:26.920 ' 00:02:26.920 09:02:31 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:02:26.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:26.920 --rc genhtml_branch_coverage=1 00:02:26.920 --rc genhtml_function_coverage=1 00:02:26.920 --rc genhtml_legend=1 00:02:26.920 --rc geninfo_all_blocks=1 00:02:26.920 --rc geninfo_unexecuted_blocks=1 00:02:26.920 00:02:26.920 ' 00:02:26.920 09:02:31 -- spdk/autotest.sh@25 
-- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:26.920 09:02:31 -- nvmf/common.sh@7 -- # uname -s 00:02:26.920 09:02:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:26.920 09:02:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:26.920 09:02:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:26.920 09:02:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:26.920 09:02:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:26.920 09:02:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:26.920 09:02:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:26.920 09:02:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:26.920 09:02:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:26.920 09:02:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:26.920 09:02:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:02:26.920 09:02:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:02:26.920 09:02:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:26.920 09:02:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:26.920 09:02:31 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:26.920 09:02:31 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:26.920 09:02:31 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:26.920 09:02:31 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:26.920 09:02:31 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:26.920 09:02:31 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:26.920 09:02:31 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:26.920 09:02:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:26.920 09:02:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:26.920 09:02:31 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:26.920 09:02:31 -- paths/export.sh@5 -- # export PATH 00:02:26.920 09:02:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:26.920 09:02:31 -- nvmf/common.sh@51 -- # : 0 00:02:26.920 09:02:31 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:26.920 09:02:31 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:02:26.920 09:02:31 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:26.920 09:02:31 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:26.920 
09:02:31 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:26.920 09:02:31 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:26.920 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:26.920 09:02:31 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:26.920 09:02:31 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:26.920 09:02:31 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:26.920 09:02:31 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:26.920 09:02:31 -- spdk/autotest.sh@32 -- # uname -s 00:02:26.920 09:02:31 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:26.920 09:02:31 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:26.920 09:02:31 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:26.920 09:02:31 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:26.920 09:02:31 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:26.920 09:02:31 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:26.920 09:02:31 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:26.920 09:02:31 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:26.920 09:02:31 -- spdk/autotest.sh@48 -- # udevadm_pid=2801892 00:02:26.920 09:02:31 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:26.920 09:02:31 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:26.920 09:02:31 -- pm/common@17 -- # local monitor 00:02:26.920 09:02:31 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:26.920 09:02:31 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:26.920 09:02:31 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:26.920 09:02:31 -- pm/common@21 -- # date +%s 00:02:26.920 09:02:31 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:26.920 09:02:31 -- pm/common@21 -- # date +%s 00:02:26.920 09:02:31 -- pm/common@25 -- # sleep 1 00:02:26.920 09:02:31 -- pm/common@21 -- # date +%s 00:02:26.920 09:02:31 -- pm/common@21 -- # date +%s 00:02:26.920 09:02:31 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731830551 00:02:26.920 09:02:31 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731830551 00:02:26.920 09:02:31 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731830551 00:02:26.920 09:02:31 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731830551 00:02:26.920 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731830551_collect-cpu-load.pm.log 00:02:26.920 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731830551_collect-vmstat.pm.log 00:02:26.920 Redirecting to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731830551_collect-cpu-temp.pm.log 00:02:26.920 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731830551_collect-bmc-pm.bmc.pm.log 00:02:28.300 09:02:32 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:28.300 09:02:32 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:28.301 09:02:32 -- common/autotest_common.sh@726 -- # xtrace_disable 00:02:28.301 09:02:32 -- common/autotest_common.sh@10 -- # set +x 00:02:28.301 09:02:32 -- spdk/autotest.sh@59 -- # create_test_list 00:02:28.301 09:02:32 -- common/autotest_common.sh@752 -- # xtrace_disable 00:02:28.301 09:02:32 -- common/autotest_common.sh@10 -- # set +x 00:02:28.301 09:02:32 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:28.301 09:02:32 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:28.301 09:02:32 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:28.301 09:02:32 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:28.301 09:02:32 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:28.301 09:02:32 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:28.301 09:02:32 -- common/autotest_common.sh@1457 -- # uname 00:02:28.301 09:02:32 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:02:28.301 09:02:32 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:28.301 09:02:32 -- common/autotest_common.sh@1477 -- # uname 00:02:28.301 09:02:32 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:02:28.301 09:02:32 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:02:28.301 09:02:32 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:02:28.301 lcov: LCOV version 1.15 00:02:28.301 09:02:33 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:00.386 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:00.386 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:04.577 09:03:08 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:04.577 09:03:08 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:04.577 09:03:08 -- common/autotest_common.sh@10 -- # set +x 00:03:04.577 09:03:08 -- spdk/autotest.sh@78 -- # rm -f 00:03:04.577 09:03:08 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:05.144 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:03:05.144 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:03:05.144 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:03:05.144 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:03:05.144 0000:00:04.4 
(8086 0e24): Already using the ioatdma driver 00:03:05.144 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:03:05.144 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:03:05.144 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:03:05.144 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:03:05.144 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:03:05.402 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:03:05.403 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:03:05.403 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:03:05.403 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:03:05.403 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:03:05.403 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:03:05.403 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:03:05.403 09:03:10 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:05.403 09:03:10 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:05.403 09:03:10 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:05.403 09:03:10 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:03:05.403 09:03:10 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:05.403 09:03:10 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:03:05.403 09:03:10 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:05.403 09:03:10 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:05.403 09:03:10 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:05.403 09:03:10 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:05.403 09:03:10 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:05.403 09:03:10 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:05.403 09:03:10 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:05.403 09:03:10 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:05.403 09:03:10 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:05.403 No valid GPT data, bailing 00:03:05.403 09:03:10 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:05.403 09:03:10 -- scripts/common.sh@394 -- # pt= 00:03:05.403 09:03:10 -- scripts/common.sh@395 -- # return 1 00:03:05.403 09:03:10 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:05.403 1+0 records in 00:03:05.403 1+0 records out 00:03:05.403 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00226241 s, 463 MB/s 00:03:05.403 09:03:10 -- spdk/autotest.sh@105 -- # sync 00:03:05.403 09:03:10 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:05.403 09:03:10 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:05.403 09:03:10 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:07.936 09:03:12 -- spdk/autotest.sh@111 -- # uname -s 00:03:07.936 09:03:12 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:07.936 09:03:12 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:07.936 09:03:12 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:08.872 Hugepages 00:03:08.872 node hugesize free / total 00:03:08.872 node0 1048576kB 0 / 0 00:03:08.872 node0 2048kB 0 / 0 00:03:08.872 node1 1048576kB 0 / 0 00:03:08.872 node1 2048kB 0 / 0 00:03:08.872 00:03:08.872 Type BDF Vendor Device NUMA Driver Device Block devices 
00:03:08.872 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:03:08.872 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:03:08.872 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:03:08.872 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:03:08.872 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:03:08.872 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:03:08.872 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:03:08.872 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:03:08.872 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:03:08.872 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:03:08.872 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:03:08.872 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:03:08.872 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:03:08.872 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:03:08.872 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:03:08.872 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:03:08.872 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:03:08.872 09:03:13 -- spdk/autotest.sh@117 -- # uname -s 00:03:08.872 09:03:13 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:08.872 09:03:13 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:08.872 09:03:13 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:10.250 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:10.250 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:10.250 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:10.250 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:10.250 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:10.250 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:10.250 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:10.250 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:10.250 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:10.250 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:10.250 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:10.250 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:10.250 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:10.250 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:10.250 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:10.250 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:11.187 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:11.187 09:03:16 -- common/autotest_common.sh@1517 -- # sleep 1 00:03:12.124 09:03:17 -- common/autotest_common.sh@1518 -- # bdfs=() 00:03:12.124 09:03:17 -- common/autotest_common.sh@1518 -- # local bdfs 00:03:12.124 09:03:17 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:12.124 09:03:17 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:03:12.124 09:03:17 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:12.124 09:03:17 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:12.124 09:03:17 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:12.124 09:03:17 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:12.124 09:03:17 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:12.124 09:03:17 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:12.124 09:03:17 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:03:12.124 09:03:17 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:13.505 Waiting for block devices as requested 00:03:13.505 0000:88:00.0 (8086 0a54): 
vfio-pci -> nvme 00:03:13.505 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:13.505 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:13.787 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:13.787 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:13.787 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:13.787 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:14.078 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:14.078 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:03:14.078 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:14.078 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:14.078 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:14.347 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:14.347 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:14.347 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:14.347 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:14.606 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:03:14.606 09:03:19 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:14.606 09:03:19 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:03:14.606 09:03:19 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:03:14.606 09:03:19 -- common/autotest_common.sh@1487 -- # grep 0000:88:00.0/nvme/nvme 00:03:14.606 09:03:19 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:03:14.606 09:03:19 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:03:14.606 09:03:19 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:03:14.606 09:03:19 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:03:14.606 09:03:19 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:03:14.606 09:03:19 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:03:14.606 09:03:19 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:03:14.606 09:03:19 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:14.606 09:03:19 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:14.606 09:03:19 -- common/autotest_common.sh@1531 -- # oacs=' 0xf' 00:03:14.606 09:03:19 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:14.606 09:03:19 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:14.606 09:03:19 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:03:14.606 09:03:19 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:14.606 09:03:19 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:14.606 09:03:19 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:14.606 09:03:19 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:14.606 09:03:19 -- common/autotest_common.sh@1543 -- # continue 00:03:14.606 09:03:19 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:14.606 09:03:19 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:14.606 09:03:19 -- common/autotest_common.sh@10 -- # set +x 00:03:14.606 09:03:19 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:14.606 09:03:19 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:14.606 09:03:19 -- common/autotest_common.sh@10 -- # set +x 00:03:14.607 09:03:19 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:15.983 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:15.983 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 
00:03:15.983 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:15.983 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:15.983 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:15.983 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:15.983 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:15.983 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:15.983 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:15.983 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:15.983 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:15.983 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:15.983 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:15.983 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:15.983 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:15.983 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:16.922 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:16.922 09:03:21 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:03:16.922 09:03:21 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:16.922 09:03:21 -- common/autotest_common.sh@10 -- # set +x 00:03:16.922 09:03:21 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:16.922 09:03:21 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:03:16.922 09:03:21 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:03:16.922 09:03:21 -- common/autotest_common.sh@1563 -- # bdfs=() 00:03:16.922 09:03:21 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:03:16.922 09:03:21 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:03:16.922 09:03:21 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:03:16.922 09:03:21 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:03:16.922 09:03:21 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:16.922 09:03:21 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:16.922 09:03:21 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:16.922 09:03:21 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:16.922 09:03:21 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:17.180 09:03:21 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:17.180 09:03:21 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:03:17.180 09:03:21 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:17.180 09:03:21 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:03:17.180 09:03:21 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:03:17.180 09:03:21 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:17.180 09:03:21 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:03:17.180 09:03:21 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:03:17.180 09:03:21 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:88:00.0 00:03:17.180 09:03:21 -- common/autotest_common.sh@1579 -- # [[ -z 0000:88:00.0 ]] 00:03:17.180 09:03:21 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=2812898 00:03:17.180 09:03:21 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:17.180 09:03:21 -- common/autotest_common.sh@1585 -- # waitforlisten 2812898 00:03:17.180 09:03:21 -- common/autotest_common.sh@835 -- # '[' -z 2812898 ']' 00:03:17.180 09:03:21 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:17.180 09:03:21 -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:03:17.180 09:03:21 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:17.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:17.180 09:03:21 -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:17.180 09:03:21 -- common/autotest_common.sh@10 -- # set +x 00:03:17.180 [2024-11-17 09:03:22.108564] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:03:17.180 [2024-11-17 09:03:22.108716] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2812898 ] 00:03:17.439 [2024-11-17 09:03:22.252465] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:17.439 [2024-11-17 09:03:22.391045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:18.373 09:03:23 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:18.373 09:03:23 -- common/autotest_common.sh@868 -- # return 0 00:03:18.373 09:03:23 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:03:18.373 09:03:23 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:03:18.373 09:03:23 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:03:21.657 nvme0n1 00:03:21.657 09:03:26 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:03:21.915 [2024-11-17 09:03:26.752765] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:03:21.915 [2024-11-17 09:03:26.752843] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:03:21.915 request: 00:03:21.915 { 00:03:21.915 "nvme_ctrlr_name": "nvme0", 00:03:21.915 "password": "test", 00:03:21.915 "method": "bdev_nvme_opal_revert", 00:03:21.915 "req_id": 1 00:03:21.915 } 00:03:21.915 Got JSON-RPC error response 00:03:21.915 response: 00:03:21.915 { 00:03:21.915 "code": -32603, 00:03:21.915 "message": "Internal error" 00:03:21.915 } 00:03:21.915 09:03:26 -- common/autotest_common.sh@1591 -- # true 00:03:21.915 09:03:26 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:03:21.915 09:03:26 -- common/autotest_common.sh@1595 -- # killprocess 2812898 00:03:21.915 09:03:26 -- common/autotest_common.sh@954 -- # '[' -z 2812898 ']' 00:03:21.915 09:03:26 -- common/autotest_common.sh@958 -- # kill -0 2812898 00:03:21.915 09:03:26 -- common/autotest_common.sh@959 -- # uname 00:03:21.915 09:03:26 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:21.915 09:03:26 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2812898 00:03:21.915 09:03:26 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:21.915 09:03:26 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:21.915 09:03:26 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2812898' 00:03:21.915 killing process with pid 2812898 00:03:21.915 09:03:26 -- common/autotest_common.sh@973 -- # kill 2812898 00:03:21.915 09:03:26 -- common/autotest_common.sh@978 -- # wait 2812898 00:03:26.098 09:03:30 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:03:26.098 09:03:30 -- 
spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:03:26.098 09:03:30 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:26.098 09:03:30 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:26.098 09:03:30 -- spdk/autotest.sh@149 -- # timing_enter lib 00:03:26.098 09:03:30 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:26.098 09:03:30 -- common/autotest_common.sh@10 -- # set +x 00:03:26.098 09:03:30 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:03:26.098 09:03:30 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:26.098 09:03:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:26.098 09:03:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:26.098 09:03:30 -- common/autotest_common.sh@10 -- # set +x 00:03:26.098 ************************************ 00:03:26.098 START TEST env 00:03:26.098 ************************************ 00:03:26.098 09:03:30 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:26.098 * Looking for test storage... 00:03:26.098 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:26.098 09:03:30 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:26.098 09:03:30 env -- common/autotest_common.sh@1693 -- # lcov --version 00:03:26.098 09:03:30 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:26.098 09:03:30 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:26.098 09:03:30 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:26.098 09:03:30 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:26.098 09:03:30 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:26.098 09:03:30 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:26.098 09:03:30 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:26.098 09:03:30 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:26.098 09:03:30 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:26.098 09:03:30 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:26.098 09:03:30 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:26.098 09:03:30 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:26.098 09:03:30 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:26.098 09:03:30 env -- scripts/common.sh@344 -- # case "$op" in 00:03:26.098 09:03:30 env -- scripts/common.sh@345 -- # : 1 00:03:26.098 09:03:30 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:26.098 09:03:30 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:26.098 09:03:30 env -- scripts/common.sh@365 -- # decimal 1 00:03:26.098 09:03:30 env -- scripts/common.sh@353 -- # local d=1 00:03:26.098 09:03:30 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:26.098 09:03:30 env -- scripts/common.sh@355 -- # echo 1 00:03:26.098 09:03:30 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:26.098 09:03:30 env -- scripts/common.sh@366 -- # decimal 2 00:03:26.098 09:03:30 env -- scripts/common.sh@353 -- # local d=2 00:03:26.098 09:03:30 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:26.098 09:03:30 env -- scripts/common.sh@355 -- # echo 2 00:03:26.098 09:03:30 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:26.098 09:03:30 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:26.098 09:03:30 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:26.098 09:03:30 env -- scripts/common.sh@368 -- # return 0 00:03:26.098 09:03:30 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:26.098 09:03:30 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:26.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:26.098 --rc genhtml_branch_coverage=1 00:03:26.098 --rc genhtml_function_coverage=1 00:03:26.098 --rc genhtml_legend=1 00:03:26.098 --rc geninfo_all_blocks=1 00:03:26.098 --rc geninfo_unexecuted_blocks=1 00:03:26.098 00:03:26.098 ' 00:03:26.098 09:03:30 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:26.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:26.098 --rc genhtml_branch_coverage=1 00:03:26.098 --rc genhtml_function_coverage=1 00:03:26.098 --rc genhtml_legend=1 00:03:26.098 --rc geninfo_all_blocks=1 00:03:26.098 --rc geninfo_unexecuted_blocks=1 00:03:26.098 00:03:26.098 ' 00:03:26.098 09:03:30 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:26.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:26.098 --rc genhtml_branch_coverage=1 00:03:26.098 --rc genhtml_function_coverage=1 00:03:26.099 --rc genhtml_legend=1 00:03:26.099 --rc geninfo_all_blocks=1 00:03:26.099 --rc geninfo_unexecuted_blocks=1 00:03:26.099 00:03:26.099 ' 00:03:26.099 09:03:30 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:26.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:26.099 --rc genhtml_branch_coverage=1 00:03:26.099 --rc genhtml_function_coverage=1 00:03:26.099 --rc genhtml_legend=1 00:03:26.099 --rc geninfo_all_blocks=1 00:03:26.099 --rc geninfo_unexecuted_blocks=1 00:03:26.099 00:03:26.099 ' 00:03:26.099 09:03:30 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:26.099 09:03:30 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:26.099 09:03:30 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:26.099 09:03:30 env -- common/autotest_common.sh@10 -- # set +x 00:03:26.099 ************************************ 00:03:26.099 START TEST env_memory 00:03:26.099 ************************************ 00:03:26.099 09:03:30 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:26.099 00:03:26.099 00:03:26.099 CUnit - A unit testing framework for C - Version 2.1-3 00:03:26.099 http://cunit.sourceforge.net/ 00:03:26.099 00:03:26.099 00:03:26.099 Suite: memory 00:03:26.099 Test: alloc and free memory map ...[2024-11-17 09:03:30.767044] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:26.099 passed 00:03:26.099 Test: mem map translation ...[2024-11-17 09:03:30.810546] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:26.099 [2024-11-17 09:03:30.810606] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:26.099 [2024-11-17 09:03:30.810698] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:26.099 [2024-11-17 09:03:30.810729] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:26.099 passed 00:03:26.099 Test: mem map registration ...[2024-11-17 09:03:30.875976] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:26.099 [2024-11-17 09:03:30.876019] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:26.099 passed 00:03:26.099 Test: mem map adjacent registrations ...passed 00:03:26.099 00:03:26.099 Run Summary: Type Total Ran Passed Failed Inactive 00:03:26.099 suites 1 1 n/a 0 0 00:03:26.099 tests 4 4 4 0 0 00:03:26.099 asserts 152 152 152 0 n/a 00:03:26.099 00:03:26.099 Elapsed time = 0.231 seconds 00:03:26.099 00:03:26.099 real 0m0.250s 00:03:26.099 user 0m0.233s 00:03:26.099 sys 0m0.015s 00:03:26.099 09:03:30 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:26.099 09:03:30 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:26.099 ************************************ 00:03:26.099 END TEST env_memory 00:03:26.099 ************************************ 00:03:26.099 09:03:30 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:26.099 09:03:30 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:26.099 09:03:30 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:26.099 09:03:30 env -- common/autotest_common.sh@10 -- # set +x 00:03:26.099 ************************************ 00:03:26.099 START TEST env_vtophys 00:03:26.099 ************************************ 00:03:26.099 09:03:31 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:26.099 EAL: lib.eal log level changed from notice to debug 00:03:26.099 EAL: Detected lcore 0 as core 0 on socket 0 00:03:26.099 EAL: Detected lcore 1 as core 1 on socket 0 00:03:26.099 EAL: Detected lcore 2 as core 2 on socket 0 00:03:26.099 EAL: Detected lcore 3 as core 3 on socket 0 00:03:26.099 EAL: Detected lcore 4 as core 4 on socket 0 00:03:26.099 EAL: Detected lcore 5 as core 5 on socket 0 00:03:26.099 EAL: Detected lcore 6 as core 8 on socket 0 00:03:26.099 EAL: Detected lcore 7 as core 9 on socket 0 00:03:26.099 EAL: Detected lcore 8 as core 10 on socket 0 00:03:26.099 EAL: Detected lcore 9 as core 11 on socket 0 00:03:26.099 EAL: Detected lcore 10 
as core 12 on socket 0 00:03:26.099 EAL: Detected lcore 11 as core 13 on socket 0 00:03:26.099 EAL: Detected lcore 12 as core 0 on socket 1 00:03:26.099 EAL: Detected lcore 13 as core 1 on socket 1 00:03:26.099 EAL: Detected lcore 14 as core 2 on socket 1 00:03:26.099 EAL: Detected lcore 15 as core 3 on socket 1 00:03:26.099 EAL: Detected lcore 16 as core 4 on socket 1 00:03:26.099 EAL: Detected lcore 17 as core 5 on socket 1 00:03:26.099 EAL: Detected lcore 18 as core 8 on socket 1 00:03:26.099 EAL: Detected lcore 19 as core 9 on socket 1 00:03:26.099 EAL: Detected lcore 20 as core 10 on socket 1 00:03:26.099 EAL: Detected lcore 21 as core 11 on socket 1 00:03:26.099 EAL: Detected lcore 22 as core 12 on socket 1 00:03:26.099 EAL: Detected lcore 23 as core 13 on socket 1 00:03:26.099 EAL: Detected lcore 24 as core 0 on socket 0 00:03:26.099 EAL: Detected lcore 25 as core 1 on socket 0 00:03:26.099 EAL: Detected lcore 26 as core 2 on socket 0 00:03:26.099 EAL: Detected lcore 27 as core 3 on socket 0 00:03:26.099 EAL: Detected lcore 28 as core 4 on socket 0 00:03:26.099 EAL: Detected lcore 29 as core 5 on socket 0 00:03:26.099 EAL: Detected lcore 30 as core 8 on socket 0 00:03:26.099 EAL: Detected lcore 31 as core 9 on socket 0 00:03:26.099 EAL: Detected lcore 32 as core 10 on socket 0 00:03:26.099 EAL: Detected lcore 33 as core 11 on socket 0 00:03:26.099 EAL: Detected lcore 34 as core 12 on socket 0 00:03:26.099 EAL: Detected lcore 35 as core 13 on socket 0 00:03:26.099 EAL: Detected lcore 36 as core 0 on socket 1 00:03:26.099 EAL: Detected lcore 37 as core 1 on socket 1 00:03:26.099 EAL: Detected lcore 38 as core 2 on socket 1 00:03:26.099 EAL: Detected lcore 39 as core 3 on socket 1 00:03:26.099 EAL: Detected lcore 40 as core 4 on socket 1 00:03:26.099 EAL: Detected lcore 41 as core 5 on socket 1 00:03:26.099 EAL: Detected lcore 42 as core 8 on socket 1 00:03:26.099 EAL: Detected lcore 43 as core 9 on socket 1 00:03:26.099 EAL: Detected lcore 44 as core 10 on socket 1 00:03:26.099 EAL: Detected lcore 45 as core 11 on socket 1 00:03:26.099 EAL: Detected lcore 46 as core 12 on socket 1 00:03:26.099 EAL: Detected lcore 47 as core 13 on socket 1 00:03:26.099 EAL: Maximum logical cores by configuration: 128 00:03:26.099 EAL: Detected CPU lcores: 48 00:03:26.099 EAL: Detected NUMA nodes: 2 00:03:26.099 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:26.099 EAL: Detected shared linkage of DPDK 00:03:26.100 EAL: No shared files mode enabled, IPC will be disabled 00:03:26.358 EAL: Bus pci wants IOVA as 'DC' 00:03:26.358 EAL: Buses did not request a specific IOVA mode. 00:03:26.358 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:26.358 EAL: Selected IOVA mode 'VA' 00:03:26.358 EAL: Probing VFIO support... 00:03:26.358 EAL: IOMMU type 1 (Type 1) is supported 00:03:26.358 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:26.358 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:26.358 EAL: VFIO support initialized 00:03:26.358 EAL: Ask a virtual area of 0x2e000 bytes 00:03:26.358 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:26.358 EAL: Setting up physically contiguous memory... 
00:03:26.358 EAL: Setting maximum number of open files to 524288 00:03:26.358 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:26.358 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:26.358 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:26.358 EAL: Ask a virtual area of 0x61000 bytes 00:03:26.358 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:26.358 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:26.358 EAL: Ask a virtual area of 0x400000000 bytes 00:03:26.358 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:26.358 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:26.358 EAL: Ask a virtual area of 0x61000 bytes 00:03:26.358 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:26.358 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:26.358 EAL: Ask a virtual area of 0x400000000 bytes 00:03:26.358 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:26.358 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:26.358 EAL: Ask a virtual area of 0x61000 bytes 00:03:26.358 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:26.358 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:26.359 EAL: Ask a virtual area of 0x400000000 bytes 00:03:26.359 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:26.359 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:26.359 EAL: Ask a virtual area of 0x61000 bytes 00:03:26.359 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:26.359 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:26.359 EAL: Ask a virtual area of 0x400000000 bytes 00:03:26.359 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:26.359 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:26.359 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:03:26.359 EAL: Ask a virtual area of 0x61000 bytes 00:03:26.359 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:26.359 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:26.359 EAL: Ask a virtual area of 0x400000000 bytes 00:03:26.359 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:26.359 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:26.359 EAL: Ask a virtual area of 0x61000 bytes 00:03:26.359 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:26.359 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:26.359 EAL: Ask a virtual area of 0x400000000 bytes 00:03:26.359 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:26.359 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:26.359 EAL: Ask a virtual area of 0x61000 bytes 00:03:26.359 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:26.359 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:26.359 EAL: Ask a virtual area of 0x400000000 bytes 00:03:26.359 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:26.359 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:26.359 EAL: Ask a virtual area of 0x61000 bytes 00:03:26.359 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:26.359 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:26.359 EAL: Ask a virtual area of 0x400000000 bytes 00:03:26.359 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:03:26.359 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:26.359 EAL: Hugepages will be freed exactly as allocated. 00:03:26.359 EAL: No shared files mode enabled, IPC is disabled 00:03:26.359 EAL: No shared files mode enabled, IPC is disabled 00:03:26.359 EAL: TSC frequency is ~2700000 KHz 00:03:26.359 EAL: Main lcore 0 is ready (tid=7f00e5844a40;cpuset=[0]) 00:03:26.359 EAL: Trying to obtain current memory policy. 00:03:26.359 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:26.359 EAL: Restoring previous memory policy: 0 00:03:26.359 EAL: request: mp_malloc_sync 00:03:26.359 EAL: No shared files mode enabled, IPC is disabled 00:03:26.359 EAL: Heap on socket 0 was expanded by 2MB 00:03:26.359 EAL: No shared files mode enabled, IPC is disabled 00:03:26.359 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:26.359 EAL: Mem event callback 'spdk:(nil)' registered 00:03:26.359 00:03:26.359 00:03:26.359 CUnit - A unit testing framework for C - Version 2.1-3 00:03:26.359 http://cunit.sourceforge.net/ 00:03:26.359 00:03:26.359 00:03:26.359 Suite: components_suite 00:03:26.617 Test: vtophys_malloc_test ...passed 00:03:26.617 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:26.617 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:26.617 EAL: Restoring previous memory policy: 4 00:03:26.617 EAL: Calling mem event callback 'spdk:(nil)' 00:03:26.617 EAL: request: mp_malloc_sync 00:03:26.617 EAL: No shared files mode enabled, IPC is disabled 00:03:26.617 EAL: Heap on socket 0 was expanded by 4MB 00:03:26.617 EAL: Calling mem event callback 'spdk:(nil)' 00:03:26.617 EAL: request: mp_malloc_sync 00:03:26.617 EAL: No shared files mode enabled, IPC is disabled 00:03:26.617 EAL: Heap on socket 0 was shrunk by 4MB 00:03:26.617 EAL: Trying to obtain current memory policy. 00:03:26.617 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:26.617 EAL: Restoring previous memory policy: 4 00:03:26.617 EAL: Calling mem event callback 'spdk:(nil)' 00:03:26.617 EAL: request: mp_malloc_sync 00:03:26.617 EAL: No shared files mode enabled, IPC is disabled 00:03:26.617 EAL: Heap on socket 0 was expanded by 6MB 00:03:26.617 EAL: Calling mem event callback 'spdk:(nil)' 00:03:26.617 EAL: request: mp_malloc_sync 00:03:26.617 EAL: No shared files mode enabled, IPC is disabled 00:03:26.617 EAL: Heap on socket 0 was shrunk by 6MB 00:03:26.617 EAL: Trying to obtain current memory policy. 00:03:26.617 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:26.875 EAL: Restoring previous memory policy: 4 00:03:26.875 EAL: Calling mem event callback 'spdk:(nil)' 00:03:26.875 EAL: request: mp_malloc_sync 00:03:26.875 EAL: No shared files mode enabled, IPC is disabled 00:03:26.875 EAL: Heap on socket 0 was expanded by 10MB 00:03:26.875 EAL: Calling mem event callback 'spdk:(nil)' 00:03:26.875 EAL: request: mp_malloc_sync 00:03:26.875 EAL: No shared files mode enabled, IPC is disabled 00:03:26.875 EAL: Heap on socket 0 was shrunk by 10MB 00:03:26.875 EAL: Trying to obtain current memory policy. 
00:03:26.875 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:26.875 EAL: Restoring previous memory policy: 4 00:03:26.875 EAL: Calling mem event callback 'spdk:(nil)' 00:03:26.875 EAL: request: mp_malloc_sync 00:03:26.875 EAL: No shared files mode enabled, IPC is disabled 00:03:26.875 EAL: Heap on socket 0 was expanded by 18MB 00:03:26.875 EAL: Calling mem event callback 'spdk:(nil)' 00:03:26.875 EAL: request: mp_malloc_sync 00:03:26.875 EAL: No shared files mode enabled, IPC is disabled 00:03:26.875 EAL: Heap on socket 0 was shrunk by 18MB 00:03:26.875 EAL: Trying to obtain current memory policy. 00:03:26.875 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:26.875 EAL: Restoring previous memory policy: 4 00:03:26.875 EAL: Calling mem event callback 'spdk:(nil)' 00:03:26.875 EAL: request: mp_malloc_sync 00:03:26.875 EAL: No shared files mode enabled, IPC is disabled 00:03:26.876 EAL: Heap on socket 0 was expanded by 34MB 00:03:26.876 EAL: Calling mem event callback 'spdk:(nil)' 00:03:26.876 EAL: request: mp_malloc_sync 00:03:26.876 EAL: No shared files mode enabled, IPC is disabled 00:03:26.876 EAL: Heap on socket 0 was shrunk by 34MB 00:03:26.876 EAL: Trying to obtain current memory policy. 00:03:26.876 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:26.876 EAL: Restoring previous memory policy: 4 00:03:26.876 EAL: Calling mem event callback 'spdk:(nil)' 00:03:26.876 EAL: request: mp_malloc_sync 00:03:26.876 EAL: No shared files mode enabled, IPC is disabled 00:03:26.876 EAL: Heap on socket 0 was expanded by 66MB 00:03:27.134 EAL: Calling mem event callback 'spdk:(nil)' 00:03:27.134 EAL: request: mp_malloc_sync 00:03:27.134 EAL: No shared files mode enabled, IPC is disabled 00:03:27.134 EAL: Heap on socket 0 was shrunk by 66MB 00:03:27.134 EAL: Trying to obtain current memory policy. 00:03:27.134 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:27.134 EAL: Restoring previous memory policy: 4 00:03:27.134 EAL: Calling mem event callback 'spdk:(nil)' 00:03:27.134 EAL: request: mp_malloc_sync 00:03:27.134 EAL: No shared files mode enabled, IPC is disabled 00:03:27.134 EAL: Heap on socket 0 was expanded by 130MB 00:03:27.392 EAL: Calling mem event callback 'spdk:(nil)' 00:03:27.392 EAL: request: mp_malloc_sync 00:03:27.392 EAL: No shared files mode enabled, IPC is disabled 00:03:27.392 EAL: Heap on socket 0 was shrunk by 130MB 00:03:27.650 EAL: Trying to obtain current memory policy. 00:03:27.650 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:27.650 EAL: Restoring previous memory policy: 4 00:03:27.650 EAL: Calling mem event callback 'spdk:(nil)' 00:03:27.650 EAL: request: mp_malloc_sync 00:03:27.650 EAL: No shared files mode enabled, IPC is disabled 00:03:27.650 EAL: Heap on socket 0 was expanded by 258MB 00:03:28.217 EAL: Calling mem event callback 'spdk:(nil)' 00:03:28.217 EAL: request: mp_malloc_sync 00:03:28.217 EAL: No shared files mode enabled, IPC is disabled 00:03:28.217 EAL: Heap on socket 0 was shrunk by 258MB 00:03:28.782 EAL: Trying to obtain current memory policy. 
00:03:28.782 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:28.782 EAL: Restoring previous memory policy: 4 00:03:28.782 EAL: Calling mem event callback 'spdk:(nil)' 00:03:28.782 EAL: request: mp_malloc_sync 00:03:28.783 EAL: No shared files mode enabled, IPC is disabled 00:03:28.783 EAL: Heap on socket 0 was expanded by 514MB 00:03:29.716 EAL: Calling mem event callback 'spdk:(nil)' 00:03:29.974 EAL: request: mp_malloc_sync 00:03:29.974 EAL: No shared files mode enabled, IPC is disabled 00:03:29.974 EAL: Heap on socket 0 was shrunk by 514MB 00:03:30.541 EAL: Trying to obtain current memory policy. 00:03:30.541 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:31.107 EAL: Restoring previous memory policy: 4 00:03:31.108 EAL: Calling mem event callback 'spdk:(nil)' 00:03:31.108 EAL: request: mp_malloc_sync 00:03:31.108 EAL: No shared files mode enabled, IPC is disabled 00:03:31.108 EAL: Heap on socket 0 was expanded by 1026MB 00:03:33.009 EAL: Calling mem event callback 'spdk:(nil)' 00:03:33.009 EAL: request: mp_malloc_sync 00:03:33.009 EAL: No shared files mode enabled, IPC is disabled 00:03:33.009 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:34.910 passed 00:03:34.910 00:03:34.910 Run Summary: Type Total Ran Passed Failed Inactive 00:03:34.910 suites 1 1 n/a 0 0 00:03:34.910 tests 2 2 2 0 0 00:03:34.910 asserts 497 497 497 0 n/a 00:03:34.910 00:03:34.910 Elapsed time = 8.259 seconds 00:03:34.910 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.910 EAL: request: mp_malloc_sync 00:03:34.910 EAL: No shared files mode enabled, IPC is disabled 00:03:34.910 EAL: Heap on socket 0 was shrunk by 2MB 00:03:34.910 EAL: No shared files mode enabled, IPC is disabled 00:03:34.910 EAL: No shared files mode enabled, IPC is disabled 00:03:34.910 EAL: No shared files mode enabled, IPC is disabled 00:03:34.910 00:03:34.910 real 0m8.546s 00:03:34.910 user 0m7.409s 00:03:34.910 sys 0m1.067s 00:03:34.910 09:03:39 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:34.910 09:03:39 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:34.910 ************************************ 00:03:34.910 END TEST env_vtophys 00:03:34.910 ************************************ 00:03:34.910 09:03:39 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:34.910 09:03:39 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:34.910 09:03:39 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:34.910 09:03:39 env -- common/autotest_common.sh@10 -- # set +x 00:03:34.910 ************************************ 00:03:34.910 START TEST env_pci 00:03:34.910 ************************************ 00:03:34.910 09:03:39 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:34.910 00:03:34.910 00:03:34.910 CUnit - A unit testing framework for C - Version 2.1-3 00:03:34.910 http://cunit.sourceforge.net/ 00:03:34.910 00:03:34.910 00:03:34.910 Suite: pci 00:03:34.910 Test: pci_hook ...[2024-11-17 09:03:39.642221] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2814996 has claimed it 00:03:34.910 EAL: Cannot find device (10000:00:01.0) 00:03:34.910 EAL: Failed to attach device on primary process 00:03:34.910 passed 00:03:34.910 00:03:34.910 Run Summary: Type Total Ran Passed Failed Inactive 
00:03:34.910 suites 1 1 n/a 0 0 00:03:34.910 tests 1 1 1 0 0 00:03:34.910 asserts 25 25 25 0 n/a 00:03:34.910 00:03:34.910 Elapsed time = 0.041 seconds 00:03:34.910 00:03:34.910 real 0m0.096s 00:03:34.910 user 0m0.038s 00:03:34.910 sys 0m0.057s 00:03:34.910 09:03:39 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:34.910 09:03:39 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:34.910 ************************************ 00:03:34.910 END TEST env_pci 00:03:34.910 ************************************ 00:03:34.910 09:03:39 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:34.910 09:03:39 env -- env/env.sh@15 -- # uname 00:03:34.910 09:03:39 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:34.910 09:03:39 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:34.910 09:03:39 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:34.910 09:03:39 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:03:34.910 09:03:39 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:34.910 09:03:39 env -- common/autotest_common.sh@10 -- # set +x 00:03:34.910 ************************************ 00:03:34.910 START TEST env_dpdk_post_init 00:03:34.910 ************************************ 00:03:34.910 09:03:39 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:34.910 EAL: Detected CPU lcores: 48 00:03:34.910 EAL: Detected NUMA nodes: 2 00:03:34.910 EAL: Detected shared linkage of DPDK 00:03:34.910 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:34.910 EAL: Selected IOVA mode 'VA' 00:03:34.910 EAL: VFIO support initialized 00:03:34.910 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:35.169 EAL: Using IOMMU type 1 (Type 1) 00:03:35.169 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:03:35.169 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:03:35.169 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:03:35.169 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:03:35.170 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:03:35.170 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:03:35.170 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:03:35.170 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:03:35.170 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:03:35.170 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:03:35.170 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:03:35.170 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:03:35.170 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:03:35.429 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:03:35.429 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:03:35.429 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:03:35.997 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 
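For orientation on the probe and rebind activity logged above (ioatdma <-> vfio-pci, then the spdk_ioat/spdk_nvme probes): which kernel driver currently owns a given PCI function can be read straight from sysfs. The short bash sketch below is illustrative only and is not part of the test scripts; the BDF is the NVMe device exercised in this run, and any other BDF from the table above could be substituted.

#!/usr/bin/env bash
# Illustrative sketch: report the driver bound to a PCI function via sysfs.
bdf=0000:88:00.0                                   # NVMe device used by this test run
drv_link=/sys/bus/pci/devices/${bdf}/driver        # symlink to the bound driver, if any
if [[ -e ${drv_link} ]]; then
    echo "${bdf} is bound to $(basename "$(readlink -f "${drv_link}")")"
else
    echo "${bdf} is not bound to any driver"
fi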
00:03:39.277 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:03:39.277 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:03:39.534 Starting DPDK initialization... 00:03:39.534 Starting SPDK post initialization... 00:03:39.534 SPDK NVMe probe 00:03:39.534 Attaching to 0000:88:00.0 00:03:39.534 Attached to 0000:88:00.0 00:03:39.534 Cleaning up... 00:03:39.534 00:03:39.534 real 0m4.580s 00:03:39.534 user 0m3.129s 00:03:39.534 sys 0m0.509s 00:03:39.534 09:03:44 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:39.534 09:03:44 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:39.534 ************************************ 00:03:39.534 END TEST env_dpdk_post_init 00:03:39.534 ************************************ 00:03:39.534 09:03:44 env -- env/env.sh@26 -- # uname 00:03:39.534 09:03:44 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:39.535 09:03:44 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:39.535 09:03:44 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:39.535 09:03:44 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:39.535 09:03:44 env -- common/autotest_common.sh@10 -- # set +x 00:03:39.535 ************************************ 00:03:39.535 START TEST env_mem_callbacks 00:03:39.535 ************************************ 00:03:39.535 09:03:44 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:39.535 EAL: Detected CPU lcores: 48 00:03:39.535 EAL: Detected NUMA nodes: 2 00:03:39.535 EAL: Detected shared linkage of DPDK 00:03:39.535 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:39.535 EAL: Selected IOVA mode 'VA' 00:03:39.535 EAL: VFIO support initialized 00:03:39.535 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:39.535 00:03:39.535 00:03:39.535 CUnit - A unit testing framework for C - Version 2.1-3 00:03:39.535 http://cunit.sourceforge.net/ 00:03:39.535 00:03:39.535 00:03:39.535 Suite: memory 00:03:39.535 Test: test ... 
00:03:39.535 register 0x200000200000 2097152 00:03:39.535 malloc 3145728 00:03:39.535 register 0x200000400000 4194304 00:03:39.535 buf 0x2000004fffc0 len 3145728 PASSED 00:03:39.535 malloc 64 00:03:39.535 buf 0x2000004ffec0 len 64 PASSED 00:03:39.535 malloc 4194304 00:03:39.535 register 0x200000800000 6291456 00:03:39.535 buf 0x2000009fffc0 len 4194304 PASSED 00:03:39.535 free 0x2000004fffc0 3145728 00:03:39.535 free 0x2000004ffec0 64 00:03:39.535 unregister 0x200000400000 4194304 PASSED 00:03:39.535 free 0x2000009fffc0 4194304 00:03:39.535 unregister 0x200000800000 6291456 PASSED 00:03:39.535 malloc 8388608 00:03:39.535 register 0x200000400000 10485760 00:03:39.535 buf 0x2000005fffc0 len 8388608 PASSED 00:03:39.535 free 0x2000005fffc0 8388608 00:03:39.535 unregister 0x200000400000 10485760 PASSED 00:03:39.793 passed 00:03:39.793 00:03:39.793 Run Summary: Type Total Ran Passed Failed Inactive 00:03:39.793 suites 1 1 n/a 0 0 00:03:39.793 tests 1 1 1 0 0 00:03:39.793 asserts 15 15 15 0 n/a 00:03:39.793 00:03:39.793 Elapsed time = 0.060 seconds 00:03:39.793 00:03:39.793 real 0m0.182s 00:03:39.793 user 0m0.102s 00:03:39.793 sys 0m0.079s 00:03:39.793 09:03:44 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:39.793 09:03:44 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:39.793 ************************************ 00:03:39.793 END TEST env_mem_callbacks 00:03:39.793 ************************************ 00:03:39.793 00:03:39.793 real 0m14.039s 00:03:39.793 user 0m11.115s 00:03:39.793 sys 0m1.932s 00:03:39.793 09:03:44 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:39.793 09:03:44 env -- common/autotest_common.sh@10 -- # set +x 00:03:39.793 ************************************ 00:03:39.793 END TEST env 00:03:39.793 ************************************ 00:03:39.793 09:03:44 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:39.793 09:03:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:39.793 09:03:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:39.793 09:03:44 -- common/autotest_common.sh@10 -- # set +x 00:03:39.793 ************************************ 00:03:39.793 START TEST rpc 00:03:39.793 ************************************ 00:03:39.793 09:03:44 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:39.793 * Looking for test storage... 
00:03:39.793 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:39.793 09:03:44 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:39.793 09:03:44 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:03:39.793 09:03:44 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:39.793 09:03:44 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:39.793 09:03:44 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:39.793 09:03:44 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:39.793 09:03:44 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:39.793 09:03:44 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:39.793 09:03:44 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:39.793 09:03:44 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:39.793 09:03:44 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:39.793 09:03:44 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:39.793 09:03:44 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:39.793 09:03:44 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:39.793 09:03:44 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:39.793 09:03:44 rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:39.793 09:03:44 rpc -- scripts/common.sh@345 -- # : 1 00:03:39.793 09:03:44 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:39.793 09:03:44 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:39.793 09:03:44 rpc -- scripts/common.sh@365 -- # decimal 1 00:03:39.793 09:03:44 rpc -- scripts/common.sh@353 -- # local d=1 00:03:39.793 09:03:44 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:39.793 09:03:44 rpc -- scripts/common.sh@355 -- # echo 1 00:03:39.793 09:03:44 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:39.793 09:03:44 rpc -- scripts/common.sh@366 -- # decimal 2 00:03:39.793 09:03:44 rpc -- scripts/common.sh@353 -- # local d=2 00:03:39.793 09:03:44 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:39.793 09:03:44 rpc -- scripts/common.sh@355 -- # echo 2 00:03:39.793 09:03:44 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:39.793 09:03:44 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:39.793 09:03:44 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:39.793 09:03:44 rpc -- scripts/common.sh@368 -- # return 0 00:03:39.793 09:03:44 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:39.793 09:03:44 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:39.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:39.793 --rc genhtml_branch_coverage=1 00:03:39.793 --rc genhtml_function_coverage=1 00:03:39.793 --rc genhtml_legend=1 00:03:39.793 --rc geninfo_all_blocks=1 00:03:39.793 --rc geninfo_unexecuted_blocks=1 00:03:39.793 00:03:39.793 ' 00:03:39.793 09:03:44 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:39.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:39.793 --rc genhtml_branch_coverage=1 00:03:39.793 --rc genhtml_function_coverage=1 00:03:39.793 --rc genhtml_legend=1 00:03:39.793 --rc geninfo_all_blocks=1 00:03:39.793 --rc geninfo_unexecuted_blocks=1 00:03:39.793 00:03:39.793 ' 00:03:39.793 09:03:44 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:39.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:39.793 --rc genhtml_branch_coverage=1 00:03:39.793 --rc genhtml_function_coverage=1 
00:03:39.793 --rc genhtml_legend=1 00:03:39.793 --rc geninfo_all_blocks=1 00:03:39.793 --rc geninfo_unexecuted_blocks=1 00:03:39.793 00:03:39.793 ' 00:03:39.793 09:03:44 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:39.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:39.793 --rc genhtml_branch_coverage=1 00:03:39.793 --rc genhtml_function_coverage=1 00:03:39.793 --rc genhtml_legend=1 00:03:39.793 --rc geninfo_all_blocks=1 00:03:39.793 --rc geninfo_unexecuted_blocks=1 00:03:39.793 00:03:39.793 ' 00:03:39.793 09:03:44 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2815802 00:03:39.793 09:03:44 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:03:39.793 09:03:44 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:39.793 09:03:44 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2815802 00:03:39.793 09:03:44 rpc -- common/autotest_common.sh@835 -- # '[' -z 2815802 ']' 00:03:39.793 09:03:44 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:39.793 09:03:44 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:39.793 09:03:44 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:39.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:39.793 09:03:44 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:39.793 09:03:44 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:40.051 [2024-11-17 09:03:44.881662] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:03:40.051 [2024-11-17 09:03:44.881813] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2815802 ] 00:03:40.051 [2024-11-17 09:03:45.025150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:40.309 [2024-11-17 09:03:45.162036] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:40.309 [2024-11-17 09:03:45.162118] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2815802' to capture a snapshot of events at runtime. 00:03:40.309 [2024-11-17 09:03:45.162146] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:40.309 [2024-11-17 09:03:45.162169] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:40.309 [2024-11-17 09:03:45.162200] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2815802 for offline analysis/debug. 
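For reference, the tracepoint notices above can be followed by hand. A minimal sketch, assuming the workspace layout shown in this log, assuming spdk_trace is built into build/bin alongside spdk_tgt, and taking the pid from a live run rather than the 2815802 example printed here:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # start the target with the bdev tracepoint group enabled, as rpc.sh does with -e bdev
  $SPDK/build/bin/spdk_tgt -e bdev &
  tgt_pid=$!
  sleep 2   # give the app time to initialize (the test uses waitforlisten instead)
  # capture a snapshot of runtime events, per the app_setup_trace notice
  $SPDK/build/bin/spdk_trace -s spdk_tgt -p "$tgt_pid"
  # or keep the shared-memory trace file for offline analysis
  cp "/dev/shm/spdk_tgt_trace.pid${tgt_pid}" /tmp/

The tpoint_group_mask of 0x8 reported later by trace_get_info corresponds to the bdev group requested with -e bdev.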
00:03:40.309 [2024-11-17 09:03:45.163774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:41.245 09:03:46 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:41.245 09:03:46 rpc -- common/autotest_common.sh@868 -- # return 0 00:03:41.245 09:03:46 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:41.245 09:03:46 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:41.245 09:03:46 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:41.245 09:03:46 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:41.245 09:03:46 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:41.245 09:03:46 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:41.245 09:03:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:41.245 ************************************ 00:03:41.245 START TEST rpc_integrity 00:03:41.245 ************************************ 00:03:41.245 09:03:46 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:41.245 09:03:46 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:41.245 09:03:46 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:41.245 09:03:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:41.245 09:03:46 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:41.245 09:03:46 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:41.245 09:03:46 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:41.245 09:03:46 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:41.245 09:03:46 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:41.245 09:03:46 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:41.245 09:03:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:41.245 09:03:46 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:41.245 09:03:46 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:41.245 09:03:46 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:41.245 09:03:46 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:41.245 09:03:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:41.245 09:03:46 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:41.245 09:03:46 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:41.245 { 00:03:41.245 "name": "Malloc0", 00:03:41.245 "aliases": [ 00:03:41.245 "c251abb2-3805-4f56-9e71-53806d912f2d" 00:03:41.245 ], 00:03:41.245 "product_name": "Malloc disk", 00:03:41.245 "block_size": 512, 00:03:41.245 "num_blocks": 16384, 00:03:41.245 "uuid": "c251abb2-3805-4f56-9e71-53806d912f2d", 00:03:41.245 "assigned_rate_limits": { 00:03:41.245 "rw_ios_per_sec": 0, 00:03:41.245 "rw_mbytes_per_sec": 0, 00:03:41.245 "r_mbytes_per_sec": 0, 00:03:41.245 "w_mbytes_per_sec": 0 00:03:41.245 }, 
00:03:41.245 "claimed": false, 00:03:41.245 "zoned": false, 00:03:41.245 "supported_io_types": { 00:03:41.245 "read": true, 00:03:41.245 "write": true, 00:03:41.245 "unmap": true, 00:03:41.245 "flush": true, 00:03:41.245 "reset": true, 00:03:41.245 "nvme_admin": false, 00:03:41.245 "nvme_io": false, 00:03:41.245 "nvme_io_md": false, 00:03:41.245 "write_zeroes": true, 00:03:41.245 "zcopy": true, 00:03:41.245 "get_zone_info": false, 00:03:41.245 "zone_management": false, 00:03:41.245 "zone_append": false, 00:03:41.245 "compare": false, 00:03:41.245 "compare_and_write": false, 00:03:41.245 "abort": true, 00:03:41.245 "seek_hole": false, 00:03:41.245 "seek_data": false, 00:03:41.245 "copy": true, 00:03:41.245 "nvme_iov_md": false 00:03:41.245 }, 00:03:41.245 "memory_domains": [ 00:03:41.245 { 00:03:41.245 "dma_device_id": "system", 00:03:41.245 "dma_device_type": 1 00:03:41.245 }, 00:03:41.245 { 00:03:41.245 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:41.245 "dma_device_type": 2 00:03:41.245 } 00:03:41.245 ], 00:03:41.245 "driver_specific": {} 00:03:41.245 } 00:03:41.245 ]' 00:03:41.245 09:03:46 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:41.504 09:03:46 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:41.504 09:03:46 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:41.504 09:03:46 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:41.504 09:03:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:41.504 [2024-11-17 09:03:46.269160] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:41.504 [2024-11-17 09:03:46.269233] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:41.504 [2024-11-17 09:03:46.269281] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000022880 00:03:41.504 [2024-11-17 09:03:46.269307] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:41.504 [2024-11-17 09:03:46.272192] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:41.504 [2024-11-17 09:03:46.272232] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:41.504 Passthru0 00:03:41.504 09:03:46 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:41.504 09:03:46 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:41.504 09:03:46 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:41.504 09:03:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:41.504 09:03:46 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:41.504 09:03:46 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:41.504 { 00:03:41.504 "name": "Malloc0", 00:03:41.504 "aliases": [ 00:03:41.504 "c251abb2-3805-4f56-9e71-53806d912f2d" 00:03:41.504 ], 00:03:41.504 "product_name": "Malloc disk", 00:03:41.504 "block_size": 512, 00:03:41.504 "num_blocks": 16384, 00:03:41.504 "uuid": "c251abb2-3805-4f56-9e71-53806d912f2d", 00:03:41.504 "assigned_rate_limits": { 00:03:41.504 "rw_ios_per_sec": 0, 00:03:41.504 "rw_mbytes_per_sec": 0, 00:03:41.504 "r_mbytes_per_sec": 0, 00:03:41.504 "w_mbytes_per_sec": 0 00:03:41.504 }, 00:03:41.504 "claimed": true, 00:03:41.504 "claim_type": "exclusive_write", 00:03:41.504 "zoned": false, 00:03:41.504 "supported_io_types": { 00:03:41.504 "read": true, 00:03:41.504 "write": true, 00:03:41.504 "unmap": true, 00:03:41.504 
"flush": true, 00:03:41.504 "reset": true, 00:03:41.504 "nvme_admin": false, 00:03:41.504 "nvme_io": false, 00:03:41.504 "nvme_io_md": false, 00:03:41.504 "write_zeroes": true, 00:03:41.504 "zcopy": true, 00:03:41.504 "get_zone_info": false, 00:03:41.504 "zone_management": false, 00:03:41.504 "zone_append": false, 00:03:41.504 "compare": false, 00:03:41.504 "compare_and_write": false, 00:03:41.504 "abort": true, 00:03:41.504 "seek_hole": false, 00:03:41.504 "seek_data": false, 00:03:41.504 "copy": true, 00:03:41.504 "nvme_iov_md": false 00:03:41.504 }, 00:03:41.504 "memory_domains": [ 00:03:41.504 { 00:03:41.504 "dma_device_id": "system", 00:03:41.504 "dma_device_type": 1 00:03:41.504 }, 00:03:41.504 { 00:03:41.504 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:41.504 "dma_device_type": 2 00:03:41.504 } 00:03:41.504 ], 00:03:41.504 "driver_specific": {} 00:03:41.504 }, 00:03:41.504 { 00:03:41.504 "name": "Passthru0", 00:03:41.504 "aliases": [ 00:03:41.504 "579a215c-9bbf-53f1-abf6-6d41c55d9b5c" 00:03:41.504 ], 00:03:41.504 "product_name": "passthru", 00:03:41.504 "block_size": 512, 00:03:41.504 "num_blocks": 16384, 00:03:41.504 "uuid": "579a215c-9bbf-53f1-abf6-6d41c55d9b5c", 00:03:41.504 "assigned_rate_limits": { 00:03:41.504 "rw_ios_per_sec": 0, 00:03:41.504 "rw_mbytes_per_sec": 0, 00:03:41.504 "r_mbytes_per_sec": 0, 00:03:41.504 "w_mbytes_per_sec": 0 00:03:41.504 }, 00:03:41.504 "claimed": false, 00:03:41.504 "zoned": false, 00:03:41.504 "supported_io_types": { 00:03:41.504 "read": true, 00:03:41.504 "write": true, 00:03:41.504 "unmap": true, 00:03:41.504 "flush": true, 00:03:41.504 "reset": true, 00:03:41.504 "nvme_admin": false, 00:03:41.504 "nvme_io": false, 00:03:41.504 "nvme_io_md": false, 00:03:41.504 "write_zeroes": true, 00:03:41.504 "zcopy": true, 00:03:41.504 "get_zone_info": false, 00:03:41.504 "zone_management": false, 00:03:41.504 "zone_append": false, 00:03:41.504 "compare": false, 00:03:41.504 "compare_and_write": false, 00:03:41.504 "abort": true, 00:03:41.504 "seek_hole": false, 00:03:41.504 "seek_data": false, 00:03:41.504 "copy": true, 00:03:41.504 "nvme_iov_md": false 00:03:41.504 }, 00:03:41.504 "memory_domains": [ 00:03:41.504 { 00:03:41.504 "dma_device_id": "system", 00:03:41.504 "dma_device_type": 1 00:03:41.504 }, 00:03:41.504 { 00:03:41.504 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:41.504 "dma_device_type": 2 00:03:41.504 } 00:03:41.504 ], 00:03:41.504 "driver_specific": { 00:03:41.504 "passthru": { 00:03:41.504 "name": "Passthru0", 00:03:41.504 "base_bdev_name": "Malloc0" 00:03:41.504 } 00:03:41.504 } 00:03:41.504 } 00:03:41.504 ]' 00:03:41.504 09:03:46 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:41.504 09:03:46 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:41.504 09:03:46 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:41.504 09:03:46 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:41.504 09:03:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:41.504 09:03:46 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:41.504 09:03:46 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:41.504 09:03:46 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:41.504 09:03:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:41.504 09:03:46 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:41.504 09:03:46 rpc.rpc_integrity -- rpc/rpc.sh@25 
-- # rpc_cmd bdev_get_bdevs 00:03:41.504 09:03:46 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:41.504 09:03:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:41.504 09:03:46 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:41.504 09:03:46 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:41.504 09:03:46 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:41.504 09:03:46 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:41.504 00:03:41.504 real 0m0.263s 00:03:41.504 user 0m0.150s 00:03:41.504 sys 0m0.024s 00:03:41.504 09:03:46 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:41.505 09:03:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:41.505 ************************************ 00:03:41.505 END TEST rpc_integrity 00:03:41.505 ************************************ 00:03:41.505 09:03:46 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:41.505 09:03:46 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:41.505 09:03:46 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:41.505 09:03:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:41.505 ************************************ 00:03:41.505 START TEST rpc_plugins 00:03:41.505 ************************************ 00:03:41.505 09:03:46 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:03:41.505 09:03:46 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:41.505 09:03:46 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:41.505 09:03:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:41.505 09:03:46 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:41.505 09:03:46 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:41.505 09:03:46 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:41.505 09:03:46 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:41.505 09:03:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:41.505 09:03:46 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:41.505 09:03:46 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:41.505 { 00:03:41.505 "name": "Malloc1", 00:03:41.505 "aliases": [ 00:03:41.505 "3f65c38f-d255-4c8c-badd-cd12dccfd503" 00:03:41.505 ], 00:03:41.505 "product_name": "Malloc disk", 00:03:41.505 "block_size": 4096, 00:03:41.505 "num_blocks": 256, 00:03:41.505 "uuid": "3f65c38f-d255-4c8c-badd-cd12dccfd503", 00:03:41.505 "assigned_rate_limits": { 00:03:41.505 "rw_ios_per_sec": 0, 00:03:41.505 "rw_mbytes_per_sec": 0, 00:03:41.505 "r_mbytes_per_sec": 0, 00:03:41.505 "w_mbytes_per_sec": 0 00:03:41.505 }, 00:03:41.505 "claimed": false, 00:03:41.505 "zoned": false, 00:03:41.505 "supported_io_types": { 00:03:41.505 "read": true, 00:03:41.505 "write": true, 00:03:41.505 "unmap": true, 00:03:41.505 "flush": true, 00:03:41.505 "reset": true, 00:03:41.505 "nvme_admin": false, 00:03:41.505 "nvme_io": false, 00:03:41.505 "nvme_io_md": false, 00:03:41.505 "write_zeroes": true, 00:03:41.505 "zcopy": true, 00:03:41.505 "get_zone_info": false, 00:03:41.505 "zone_management": false, 00:03:41.505 "zone_append": false, 00:03:41.505 "compare": false, 00:03:41.505 "compare_and_write": false, 00:03:41.505 "abort": true, 00:03:41.505 "seek_hole": false, 00:03:41.505 "seek_data": false, 00:03:41.505 "copy": true, 00:03:41.505 "nvme_iov_md": 
false 00:03:41.505 }, 00:03:41.505 "memory_domains": [ 00:03:41.505 { 00:03:41.505 "dma_device_id": "system", 00:03:41.505 "dma_device_type": 1 00:03:41.505 }, 00:03:41.505 { 00:03:41.505 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:41.505 "dma_device_type": 2 00:03:41.505 } 00:03:41.505 ], 00:03:41.505 "driver_specific": {} 00:03:41.505 } 00:03:41.505 ]' 00:03:41.505 09:03:46 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:41.763 09:03:46 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:41.763 09:03:46 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:41.763 09:03:46 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:41.763 09:03:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:41.763 09:03:46 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:41.763 09:03:46 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:41.763 09:03:46 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:41.763 09:03:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:41.763 09:03:46 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:41.763 09:03:46 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:41.763 09:03:46 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:41.763 09:03:46 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:41.763 00:03:41.763 real 0m0.122s 00:03:41.763 user 0m0.078s 00:03:41.763 sys 0m0.008s 00:03:41.763 09:03:46 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:41.763 09:03:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:41.763 ************************************ 00:03:41.763 END TEST rpc_plugins 00:03:41.763 ************************************ 00:03:41.763 09:03:46 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:41.763 09:03:46 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:41.763 09:03:46 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:41.763 09:03:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:41.763 ************************************ 00:03:41.763 START TEST rpc_trace_cmd_test 00:03:41.763 ************************************ 00:03:41.763 09:03:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:03:41.763 09:03:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:41.763 09:03:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:41.763 09:03:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:41.763 09:03:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:41.763 09:03:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:41.763 09:03:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:41.763 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2815802", 00:03:41.763 "tpoint_group_mask": "0x8", 00:03:41.763 "iscsi_conn": { 00:03:41.763 "mask": "0x2", 00:03:41.763 "tpoint_mask": "0x0" 00:03:41.763 }, 00:03:41.763 "scsi": { 00:03:41.763 "mask": "0x4", 00:03:41.763 "tpoint_mask": "0x0" 00:03:41.763 }, 00:03:41.763 "bdev": { 00:03:41.763 "mask": "0x8", 00:03:41.763 "tpoint_mask": "0xffffffffffffffff" 00:03:41.763 }, 00:03:41.763 "nvmf_rdma": { 00:03:41.763 "mask": "0x10", 00:03:41.764 "tpoint_mask": "0x0" 00:03:41.764 }, 00:03:41.764 "nvmf_tcp": { 00:03:41.764 "mask": "0x20", 00:03:41.764 
"tpoint_mask": "0x0" 00:03:41.764 }, 00:03:41.764 "ftl": { 00:03:41.764 "mask": "0x40", 00:03:41.764 "tpoint_mask": "0x0" 00:03:41.764 }, 00:03:41.764 "blobfs": { 00:03:41.764 "mask": "0x80", 00:03:41.764 "tpoint_mask": "0x0" 00:03:41.764 }, 00:03:41.764 "dsa": { 00:03:41.764 "mask": "0x200", 00:03:41.764 "tpoint_mask": "0x0" 00:03:41.764 }, 00:03:41.764 "thread": { 00:03:41.764 "mask": "0x400", 00:03:41.764 "tpoint_mask": "0x0" 00:03:41.764 }, 00:03:41.764 "nvme_pcie": { 00:03:41.764 "mask": "0x800", 00:03:41.764 "tpoint_mask": "0x0" 00:03:41.764 }, 00:03:41.764 "iaa": { 00:03:41.764 "mask": "0x1000", 00:03:41.764 "tpoint_mask": "0x0" 00:03:41.764 }, 00:03:41.764 "nvme_tcp": { 00:03:41.764 "mask": "0x2000", 00:03:41.764 "tpoint_mask": "0x0" 00:03:41.764 }, 00:03:41.764 "bdev_nvme": { 00:03:41.764 "mask": "0x4000", 00:03:41.764 "tpoint_mask": "0x0" 00:03:41.764 }, 00:03:41.764 "sock": { 00:03:41.764 "mask": "0x8000", 00:03:41.764 "tpoint_mask": "0x0" 00:03:41.764 }, 00:03:41.764 "blob": { 00:03:41.764 "mask": "0x10000", 00:03:41.764 "tpoint_mask": "0x0" 00:03:41.764 }, 00:03:41.764 "bdev_raid": { 00:03:41.764 "mask": "0x20000", 00:03:41.764 "tpoint_mask": "0x0" 00:03:41.764 }, 00:03:41.764 "scheduler": { 00:03:41.764 "mask": "0x40000", 00:03:41.764 "tpoint_mask": "0x0" 00:03:41.764 } 00:03:41.764 }' 00:03:41.764 09:03:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:41.764 09:03:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:03:41.764 09:03:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:41.764 09:03:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:41.764 09:03:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:41.764 09:03:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:41.764 09:03:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:42.023 09:03:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:42.023 09:03:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:42.023 09:03:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:03:42.023 00:03:42.023 real 0m0.195s 00:03:42.023 user 0m0.174s 00:03:42.023 sys 0m0.015s 00:03:42.023 09:03:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:42.023 09:03:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:42.023 ************************************ 00:03:42.023 END TEST rpc_trace_cmd_test 00:03:42.023 ************************************ 00:03:42.023 09:03:46 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:42.023 09:03:46 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:42.023 09:03:46 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:42.023 09:03:46 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:42.023 09:03:46 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:42.023 09:03:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:42.023 ************************************ 00:03:42.023 START TEST rpc_daemon_integrity 00:03:42.023 ************************************ 00:03:42.023 09:03:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:42.023 09:03:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:42.023 09:03:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:42.023 09:03:46 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:42.023 09:03:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:42.023 09:03:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:42.023 09:03:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:42.023 09:03:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:42.023 09:03:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:42.023 09:03:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:42.023 09:03:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:42.023 09:03:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:42.023 09:03:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:42.023 09:03:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:42.023 09:03:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:42.023 09:03:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:42.023 09:03:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:42.023 09:03:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:42.023 { 00:03:42.023 "name": "Malloc2", 00:03:42.023 "aliases": [ 00:03:42.023 "ef896429-a92d-4c7c-9ac3-01b8b483b0f9" 00:03:42.023 ], 00:03:42.023 "product_name": "Malloc disk", 00:03:42.023 "block_size": 512, 00:03:42.023 "num_blocks": 16384, 00:03:42.023 "uuid": "ef896429-a92d-4c7c-9ac3-01b8b483b0f9", 00:03:42.023 "assigned_rate_limits": { 00:03:42.023 "rw_ios_per_sec": 0, 00:03:42.023 "rw_mbytes_per_sec": 0, 00:03:42.023 "r_mbytes_per_sec": 0, 00:03:42.023 "w_mbytes_per_sec": 0 00:03:42.023 }, 00:03:42.023 "claimed": false, 00:03:42.023 "zoned": false, 00:03:42.023 "supported_io_types": { 00:03:42.023 "read": true, 00:03:42.023 "write": true, 00:03:42.023 "unmap": true, 00:03:42.023 "flush": true, 00:03:42.023 "reset": true, 00:03:42.023 "nvme_admin": false, 00:03:42.023 "nvme_io": false, 00:03:42.023 "nvme_io_md": false, 00:03:42.023 "write_zeroes": true, 00:03:42.023 "zcopy": true, 00:03:42.023 "get_zone_info": false, 00:03:42.023 "zone_management": false, 00:03:42.023 "zone_append": false, 00:03:42.023 "compare": false, 00:03:42.023 "compare_and_write": false, 00:03:42.023 "abort": true, 00:03:42.023 "seek_hole": false, 00:03:42.023 "seek_data": false, 00:03:42.023 "copy": true, 00:03:42.023 "nvme_iov_md": false 00:03:42.023 }, 00:03:42.023 "memory_domains": [ 00:03:42.023 { 00:03:42.023 "dma_device_id": "system", 00:03:42.023 "dma_device_type": 1 00:03:42.023 }, 00:03:42.023 { 00:03:42.023 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:42.023 "dma_device_type": 2 00:03:42.023 } 00:03:42.023 ], 00:03:42.023 "driver_specific": {} 00:03:42.023 } 00:03:42.023 ]' 00:03:42.023 09:03:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:42.023 09:03:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:42.023 09:03:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:42.023 09:03:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:42.023 09:03:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:42.023 [2024-11-17 09:03:46.990339] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:42.023 
[2024-11-17 09:03:46.990423] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:42.023 [2024-11-17 09:03:46.990467] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000023a80 00:03:42.023 [2024-11-17 09:03:46.990490] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:42.023 [2024-11-17 09:03:46.993207] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:42.023 [2024-11-17 09:03:46.993246] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:42.023 Passthru0 00:03:42.023 09:03:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:42.023 09:03:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:42.023 09:03:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:42.023 09:03:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:42.023 09:03:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:42.023 09:03:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:42.023 { 00:03:42.023 "name": "Malloc2", 00:03:42.023 "aliases": [ 00:03:42.023 "ef896429-a92d-4c7c-9ac3-01b8b483b0f9" 00:03:42.023 ], 00:03:42.023 "product_name": "Malloc disk", 00:03:42.023 "block_size": 512, 00:03:42.023 "num_blocks": 16384, 00:03:42.023 "uuid": "ef896429-a92d-4c7c-9ac3-01b8b483b0f9", 00:03:42.023 "assigned_rate_limits": { 00:03:42.023 "rw_ios_per_sec": 0, 00:03:42.023 "rw_mbytes_per_sec": 0, 00:03:42.023 "r_mbytes_per_sec": 0, 00:03:42.023 "w_mbytes_per_sec": 0 00:03:42.023 }, 00:03:42.023 "claimed": true, 00:03:42.023 "claim_type": "exclusive_write", 00:03:42.023 "zoned": false, 00:03:42.023 "supported_io_types": { 00:03:42.023 "read": true, 00:03:42.023 "write": true, 00:03:42.023 "unmap": true, 00:03:42.023 "flush": true, 00:03:42.023 "reset": true, 00:03:42.023 "nvme_admin": false, 00:03:42.023 "nvme_io": false, 00:03:42.023 "nvme_io_md": false, 00:03:42.023 "write_zeroes": true, 00:03:42.023 "zcopy": true, 00:03:42.023 "get_zone_info": false, 00:03:42.023 "zone_management": false, 00:03:42.023 "zone_append": false, 00:03:42.023 "compare": false, 00:03:42.023 "compare_and_write": false, 00:03:42.023 "abort": true, 00:03:42.023 "seek_hole": false, 00:03:42.023 "seek_data": false, 00:03:42.023 "copy": true, 00:03:42.023 "nvme_iov_md": false 00:03:42.023 }, 00:03:42.023 "memory_domains": [ 00:03:42.023 { 00:03:42.023 "dma_device_id": "system", 00:03:42.023 "dma_device_type": 1 00:03:42.023 }, 00:03:42.023 { 00:03:42.023 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:42.023 "dma_device_type": 2 00:03:42.023 } 00:03:42.023 ], 00:03:42.023 "driver_specific": {} 00:03:42.023 }, 00:03:42.023 { 00:03:42.023 "name": "Passthru0", 00:03:42.023 "aliases": [ 00:03:42.023 "86450631-1bc2-5d4d-b285-eafce1f91475" 00:03:42.023 ], 00:03:42.023 "product_name": "passthru", 00:03:42.023 "block_size": 512, 00:03:42.023 "num_blocks": 16384, 00:03:42.023 "uuid": "86450631-1bc2-5d4d-b285-eafce1f91475", 00:03:42.023 "assigned_rate_limits": { 00:03:42.023 "rw_ios_per_sec": 0, 00:03:42.023 "rw_mbytes_per_sec": 0, 00:03:42.023 "r_mbytes_per_sec": 0, 00:03:42.023 "w_mbytes_per_sec": 0 00:03:42.023 }, 00:03:42.023 "claimed": false, 00:03:42.023 "zoned": false, 00:03:42.023 "supported_io_types": { 00:03:42.023 "read": true, 00:03:42.023 "write": true, 00:03:42.023 "unmap": true, 00:03:42.023 "flush": true, 00:03:42.023 "reset": true, 
00:03:42.023 "nvme_admin": false, 00:03:42.023 "nvme_io": false, 00:03:42.023 "nvme_io_md": false, 00:03:42.023 "write_zeroes": true, 00:03:42.023 "zcopy": true, 00:03:42.023 "get_zone_info": false, 00:03:42.023 "zone_management": false, 00:03:42.023 "zone_append": false, 00:03:42.023 "compare": false, 00:03:42.023 "compare_and_write": false, 00:03:42.023 "abort": true, 00:03:42.023 "seek_hole": false, 00:03:42.023 "seek_data": false, 00:03:42.023 "copy": true, 00:03:42.023 "nvme_iov_md": false 00:03:42.023 }, 00:03:42.023 "memory_domains": [ 00:03:42.023 { 00:03:42.023 "dma_device_id": "system", 00:03:42.023 "dma_device_type": 1 00:03:42.023 }, 00:03:42.023 { 00:03:42.023 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:42.024 "dma_device_type": 2 00:03:42.024 } 00:03:42.024 ], 00:03:42.024 "driver_specific": { 00:03:42.024 "passthru": { 00:03:42.024 "name": "Passthru0", 00:03:42.024 "base_bdev_name": "Malloc2" 00:03:42.024 } 00:03:42.024 } 00:03:42.024 } 00:03:42.024 ]' 00:03:42.024 09:03:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:42.282 09:03:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:42.282 09:03:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:42.282 09:03:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:42.282 09:03:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:42.282 09:03:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:42.282 09:03:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:42.282 09:03:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:42.282 09:03:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:42.282 09:03:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:42.282 09:03:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:42.282 09:03:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:42.282 09:03:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:42.282 09:03:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:42.282 09:03:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:42.282 09:03:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:42.282 09:03:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:42.282 00:03:42.282 real 0m0.258s 00:03:42.282 user 0m0.154s 00:03:42.282 sys 0m0.022s 00:03:42.282 09:03:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:42.282 09:03:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:42.282 ************************************ 00:03:42.282 END TEST rpc_daemon_integrity 00:03:42.282 ************************************ 00:03:42.282 09:03:47 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:42.282 09:03:47 rpc -- rpc/rpc.sh@84 -- # killprocess 2815802 00:03:42.282 09:03:47 rpc -- common/autotest_common.sh@954 -- # '[' -z 2815802 ']' 00:03:42.282 09:03:47 rpc -- common/autotest_common.sh@958 -- # kill -0 2815802 00:03:42.282 09:03:47 rpc -- common/autotest_common.sh@959 -- # uname 00:03:42.282 09:03:47 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:42.282 09:03:47 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2815802 
00:03:42.282 09:03:47 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:42.282 09:03:47 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:42.282 09:03:47 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2815802' 00:03:42.282 killing process with pid 2815802 00:03:42.282 09:03:47 rpc -- common/autotest_common.sh@973 -- # kill 2815802 00:03:42.282 09:03:47 rpc -- common/autotest_common.sh@978 -- # wait 2815802 00:03:44.810 00:03:44.810 real 0m4.958s 00:03:44.810 user 0m5.534s 00:03:44.810 sys 0m0.811s 00:03:44.810 09:03:49 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:44.810 09:03:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:44.810 ************************************ 00:03:44.810 END TEST rpc 00:03:44.810 ************************************ 00:03:44.810 09:03:49 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:44.810 09:03:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:44.810 09:03:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:44.810 09:03:49 -- common/autotest_common.sh@10 -- # set +x 00:03:44.810 ************************************ 00:03:44.810 START TEST skip_rpc 00:03:44.810 ************************************ 00:03:44.810 09:03:49 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:44.810 * Looking for test storage... 00:03:44.810 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:44.810 09:03:49 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:44.810 09:03:49 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:03:44.810 09:03:49 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:44.810 09:03:49 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:44.810 09:03:49 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:44.810 09:03:49 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:44.810 09:03:49 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:44.810 09:03:49 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:44.810 09:03:49 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:44.810 09:03:49 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:44.810 09:03:49 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:44.810 09:03:49 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:44.810 09:03:49 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:44.810 09:03:49 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:44.810 09:03:49 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:44.810 09:03:49 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:44.810 09:03:49 skip_rpc -- scripts/common.sh@345 -- # : 1 00:03:44.810 09:03:49 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:44.810 09:03:49 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:44.810 09:03:49 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:03:44.810 09:03:49 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:03:44.810 09:03:49 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:44.810 09:03:49 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:03:44.810 09:03:49 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:44.810 09:03:49 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:03:44.810 09:03:49 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:03:44.810 09:03:49 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:44.810 09:03:49 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:03:44.810 09:03:49 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:44.810 09:03:49 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:44.810 09:03:49 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:44.810 09:03:49 skip_rpc -- scripts/common.sh@368 -- # return 0 00:03:44.810 09:03:49 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:44.810 09:03:49 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:44.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:44.811 --rc genhtml_branch_coverage=1 00:03:44.811 --rc genhtml_function_coverage=1 00:03:44.811 --rc genhtml_legend=1 00:03:44.811 --rc geninfo_all_blocks=1 00:03:44.811 --rc geninfo_unexecuted_blocks=1 00:03:44.811 00:03:44.811 ' 00:03:44.811 09:03:49 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:44.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:44.811 --rc genhtml_branch_coverage=1 00:03:44.811 --rc genhtml_function_coverage=1 00:03:44.811 --rc genhtml_legend=1 00:03:44.811 --rc geninfo_all_blocks=1 00:03:44.811 --rc geninfo_unexecuted_blocks=1 00:03:44.811 00:03:44.811 ' 00:03:44.811 09:03:49 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:44.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:44.811 --rc genhtml_branch_coverage=1 00:03:44.811 --rc genhtml_function_coverage=1 00:03:44.811 --rc genhtml_legend=1 00:03:44.811 --rc geninfo_all_blocks=1 00:03:44.811 --rc geninfo_unexecuted_blocks=1 00:03:44.811 00:03:44.811 ' 00:03:44.811 09:03:49 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:44.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:44.811 --rc genhtml_branch_coverage=1 00:03:44.811 --rc genhtml_function_coverage=1 00:03:44.811 --rc genhtml_legend=1 00:03:44.811 --rc geninfo_all_blocks=1 00:03:44.811 --rc geninfo_unexecuted_blocks=1 00:03:44.811 00:03:44.811 ' 00:03:44.811 09:03:49 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:44.811 09:03:49 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:44.811 09:03:49 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:44.811 09:03:49 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:44.811 09:03:49 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:44.811 09:03:49 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:44.811 ************************************ 00:03:44.811 START TEST skip_rpc 00:03:44.811 ************************************ 00:03:44.811 09:03:49 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:03:44.811 
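What follows is the first skip_rpc case: the target is started with --no-rpc-server, so any RPC call is expected to fail. A minimal stand-alone sketch of the same check, assuming the workspace paths from this log and using scripts/rpc.py in place of the test's rpc_cmd wrapper (both talk to /var/tmp/spdk.sock by default):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # start the target without an RPC listener, pinned to core 0 (-m 0x1) as the test does
  $SPDK/build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  tgt_pid=$!
  sleep 5   # the test sleeps before probing, since there is no RPC socket to wait on
  # with no RPC server the client cannot connect, so a non-zero exit is the expected outcome
  if $SPDK/scripts/rpc.py spdk_get_version; then
      echo "unexpected: spdk_get_version succeeded" >&2
  fi
  kill "$tgt_pid"
  wait "$tgt_pid"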
09:03:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2816525 00:03:44.811 09:03:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:44.811 09:03:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:44.811 09:03:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:03:45.070 [2024-11-17 09:03:49.914482] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:03:45.070 [2024-11-17 09:03:49.914622] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2816525 ] 00:03:45.070 [2024-11-17 09:03:50.057291] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:45.329 [2024-11-17 09:03:50.198308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:50.593 09:03:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:03:50.593 09:03:54 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:03:50.593 09:03:54 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:03:50.593 09:03:54 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:03:50.593 09:03:54 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:50.594 09:03:54 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:03:50.594 09:03:54 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:50.594 09:03:54 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:03:50.594 09:03:54 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:50.594 09:03:54 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:50.594 09:03:54 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:03:50.594 09:03:54 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:03:50.594 09:03:54 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:50.594 09:03:54 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:03:50.594 09:03:54 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:50.594 09:03:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:03:50.594 09:03:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2816525 00:03:50.594 09:03:54 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 2816525 ']' 00:03:50.594 09:03:54 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 2816525 00:03:50.594 09:03:54 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:03:50.594 09:03:54 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:50.594 09:03:54 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2816525 00:03:50.594 09:03:54 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:50.594 09:03:54 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:50.594 09:03:54 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2816525' 00:03:50.594 killing process with pid 2816525 00:03:50.594 09:03:54 
skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 2816525 00:03:50.594 09:03:54 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 2816525 00:03:52.495 00:03:52.495 real 0m7.449s 00:03:52.495 user 0m6.940s 00:03:52.495 sys 0m0.505s 00:03:52.495 09:03:57 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:52.495 09:03:57 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:52.495 ************************************ 00:03:52.495 END TEST skip_rpc 00:03:52.495 ************************************ 00:03:52.495 09:03:57 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:03:52.495 09:03:57 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:52.495 09:03:57 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:52.495 09:03:57 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:52.495 ************************************ 00:03:52.495 START TEST skip_rpc_with_json 00:03:52.495 ************************************ 00:03:52.495 09:03:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:03:52.495 09:03:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:03:52.495 09:03:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2817479 00:03:52.495 09:03:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:52.495 09:03:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:52.495 09:03:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2817479 00:03:52.495 09:03:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 2817479 ']' 00:03:52.495 09:03:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:52.495 09:03:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:52.495 09:03:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:52.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:52.495 09:03:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:52.495 09:03:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:52.495 [2024-11-17 09:03:57.419729] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:03:52.495 [2024-11-17 09:03:57.419890] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2817479 ] 00:03:52.752 [2024-11-17 09:03:57.562858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:52.752 [2024-11-17 09:03:57.701129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:53.768 09:03:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:53.768 09:03:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:03:53.768 09:03:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:03:53.768 09:03:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:53.768 09:03:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:53.768 [2024-11-17 09:03:58.651786] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:03:53.768 request: 00:03:53.768 { 00:03:53.768 "trtype": "tcp", 00:03:53.768 "method": "nvmf_get_transports", 00:03:53.768 "req_id": 1 00:03:53.768 } 00:03:53.768 Got JSON-RPC error response 00:03:53.768 response: 00:03:53.768 { 00:03:53.768 "code": -19, 00:03:53.768 "message": "No such device" 00:03:53.768 } 00:03:53.768 09:03:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:03:53.768 09:03:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:03:53.768 09:03:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:53.768 09:03:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:53.768 [2024-11-17 09:03:58.659936] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:53.768 09:03:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:53.768 09:03:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:03:53.768 09:03:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:53.768 09:03:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:54.026 09:03:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:54.026 09:03:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:54.026 { 00:03:54.026 "subsystems": [ 00:03:54.026 { 00:03:54.026 "subsystem": "fsdev", 00:03:54.026 "config": [ 00:03:54.026 { 00:03:54.026 "method": "fsdev_set_opts", 00:03:54.026 "params": { 00:03:54.026 "fsdev_io_pool_size": 65535, 00:03:54.026 "fsdev_io_cache_size": 256 00:03:54.026 } 00:03:54.026 } 00:03:54.026 ] 00:03:54.026 }, 00:03:54.026 { 00:03:54.026 "subsystem": "keyring", 00:03:54.026 "config": [] 00:03:54.026 }, 00:03:54.026 { 00:03:54.026 "subsystem": "iobuf", 00:03:54.026 "config": [ 00:03:54.026 { 00:03:54.026 "method": "iobuf_set_options", 00:03:54.026 "params": { 00:03:54.026 "small_pool_count": 8192, 00:03:54.026 "large_pool_count": 1024, 00:03:54.026 "small_bufsize": 8192, 00:03:54.026 "large_bufsize": 135168, 00:03:54.026 "enable_numa": false 00:03:54.026 } 00:03:54.026 } 00:03:54.026 ] 00:03:54.026 }, 00:03:54.026 { 00:03:54.026 "subsystem": "sock", 00:03:54.026 "config": [ 
00:03:54.026 { 00:03:54.026 "method": "sock_set_default_impl", 00:03:54.026 "params": { 00:03:54.026 "impl_name": "posix" 00:03:54.027 } 00:03:54.027 }, 00:03:54.027 { 00:03:54.027 "method": "sock_impl_set_options", 00:03:54.027 "params": { 00:03:54.027 "impl_name": "ssl", 00:03:54.027 "recv_buf_size": 4096, 00:03:54.027 "send_buf_size": 4096, 00:03:54.027 "enable_recv_pipe": true, 00:03:54.027 "enable_quickack": false, 00:03:54.027 "enable_placement_id": 0, 00:03:54.027 "enable_zerocopy_send_server": true, 00:03:54.027 "enable_zerocopy_send_client": false, 00:03:54.027 "zerocopy_threshold": 0, 00:03:54.027 "tls_version": 0, 00:03:54.027 "enable_ktls": false 00:03:54.027 } 00:03:54.027 }, 00:03:54.027 { 00:03:54.027 "method": "sock_impl_set_options", 00:03:54.027 "params": { 00:03:54.027 "impl_name": "posix", 00:03:54.027 "recv_buf_size": 2097152, 00:03:54.027 "send_buf_size": 2097152, 00:03:54.027 "enable_recv_pipe": true, 00:03:54.027 "enable_quickack": false, 00:03:54.027 "enable_placement_id": 0, 00:03:54.027 "enable_zerocopy_send_server": true, 00:03:54.027 "enable_zerocopy_send_client": false, 00:03:54.027 "zerocopy_threshold": 0, 00:03:54.027 "tls_version": 0, 00:03:54.027 "enable_ktls": false 00:03:54.027 } 00:03:54.027 } 00:03:54.027 ] 00:03:54.027 }, 00:03:54.027 { 00:03:54.027 "subsystem": "vmd", 00:03:54.027 "config": [] 00:03:54.027 }, 00:03:54.027 { 00:03:54.027 "subsystem": "accel", 00:03:54.027 "config": [ 00:03:54.027 { 00:03:54.027 "method": "accel_set_options", 00:03:54.027 "params": { 00:03:54.027 "small_cache_size": 128, 00:03:54.027 "large_cache_size": 16, 00:03:54.027 "task_count": 2048, 00:03:54.027 "sequence_count": 2048, 00:03:54.027 "buf_count": 2048 00:03:54.027 } 00:03:54.027 } 00:03:54.027 ] 00:03:54.027 }, 00:03:54.027 { 00:03:54.027 "subsystem": "bdev", 00:03:54.027 "config": [ 00:03:54.027 { 00:03:54.027 "method": "bdev_set_options", 00:03:54.027 "params": { 00:03:54.027 "bdev_io_pool_size": 65535, 00:03:54.027 "bdev_io_cache_size": 256, 00:03:54.027 "bdev_auto_examine": true, 00:03:54.027 "iobuf_small_cache_size": 128, 00:03:54.027 "iobuf_large_cache_size": 16 00:03:54.027 } 00:03:54.027 }, 00:03:54.027 { 00:03:54.027 "method": "bdev_raid_set_options", 00:03:54.027 "params": { 00:03:54.027 "process_window_size_kb": 1024, 00:03:54.027 "process_max_bandwidth_mb_sec": 0 00:03:54.027 } 00:03:54.027 }, 00:03:54.027 { 00:03:54.027 "method": "bdev_iscsi_set_options", 00:03:54.027 "params": { 00:03:54.027 "timeout_sec": 30 00:03:54.027 } 00:03:54.027 }, 00:03:54.027 { 00:03:54.027 "method": "bdev_nvme_set_options", 00:03:54.027 "params": { 00:03:54.027 "action_on_timeout": "none", 00:03:54.027 "timeout_us": 0, 00:03:54.027 "timeout_admin_us": 0, 00:03:54.027 "keep_alive_timeout_ms": 10000, 00:03:54.027 "arbitration_burst": 0, 00:03:54.027 "low_priority_weight": 0, 00:03:54.027 "medium_priority_weight": 0, 00:03:54.027 "high_priority_weight": 0, 00:03:54.027 "nvme_adminq_poll_period_us": 10000, 00:03:54.027 "nvme_ioq_poll_period_us": 0, 00:03:54.027 "io_queue_requests": 0, 00:03:54.027 "delay_cmd_submit": true, 00:03:54.027 "transport_retry_count": 4, 00:03:54.027 "bdev_retry_count": 3, 00:03:54.027 "transport_ack_timeout": 0, 00:03:54.027 "ctrlr_loss_timeout_sec": 0, 00:03:54.027 "reconnect_delay_sec": 0, 00:03:54.027 "fast_io_fail_timeout_sec": 0, 00:03:54.027 "disable_auto_failback": false, 00:03:54.027 "generate_uuids": false, 00:03:54.027 "transport_tos": 0, 00:03:54.027 "nvme_error_stat": false, 00:03:54.027 "rdma_srq_size": 0, 00:03:54.027 "io_path_stat": 
false, 00:03:54.027 "allow_accel_sequence": false, 00:03:54.027 "rdma_max_cq_size": 0, 00:03:54.027 "rdma_cm_event_timeout_ms": 0, 00:03:54.027 "dhchap_digests": [ 00:03:54.027 "sha256", 00:03:54.027 "sha384", 00:03:54.027 "sha512" 00:03:54.027 ], 00:03:54.027 "dhchap_dhgroups": [ 00:03:54.027 "null", 00:03:54.027 "ffdhe2048", 00:03:54.027 "ffdhe3072", 00:03:54.027 "ffdhe4096", 00:03:54.027 "ffdhe6144", 00:03:54.027 "ffdhe8192" 00:03:54.027 ] 00:03:54.027 } 00:03:54.027 }, 00:03:54.027 { 00:03:54.027 "method": "bdev_nvme_set_hotplug", 00:03:54.027 "params": { 00:03:54.027 "period_us": 100000, 00:03:54.027 "enable": false 00:03:54.027 } 00:03:54.027 }, 00:03:54.027 { 00:03:54.027 "method": "bdev_wait_for_examine" 00:03:54.027 } 00:03:54.027 ] 00:03:54.027 }, 00:03:54.027 { 00:03:54.027 "subsystem": "scsi", 00:03:54.027 "config": null 00:03:54.027 }, 00:03:54.027 { 00:03:54.027 "subsystem": "scheduler", 00:03:54.027 "config": [ 00:03:54.027 { 00:03:54.027 "method": "framework_set_scheduler", 00:03:54.027 "params": { 00:03:54.027 "name": "static" 00:03:54.027 } 00:03:54.027 } 00:03:54.027 ] 00:03:54.027 }, 00:03:54.027 { 00:03:54.027 "subsystem": "vhost_scsi", 00:03:54.027 "config": [] 00:03:54.027 }, 00:03:54.027 { 00:03:54.027 "subsystem": "vhost_blk", 00:03:54.027 "config": [] 00:03:54.027 }, 00:03:54.027 { 00:03:54.027 "subsystem": "ublk", 00:03:54.027 "config": [] 00:03:54.027 }, 00:03:54.027 { 00:03:54.028 "subsystem": "nbd", 00:03:54.028 "config": [] 00:03:54.028 }, 00:03:54.028 { 00:03:54.028 "subsystem": "nvmf", 00:03:54.028 "config": [ 00:03:54.028 { 00:03:54.028 "method": "nvmf_set_config", 00:03:54.028 "params": { 00:03:54.028 "discovery_filter": "match_any", 00:03:54.028 "admin_cmd_passthru": { 00:03:54.028 "identify_ctrlr": false 00:03:54.028 }, 00:03:54.028 "dhchap_digests": [ 00:03:54.028 "sha256", 00:03:54.028 "sha384", 00:03:54.028 "sha512" 00:03:54.028 ], 00:03:54.028 "dhchap_dhgroups": [ 00:03:54.028 "null", 00:03:54.028 "ffdhe2048", 00:03:54.028 "ffdhe3072", 00:03:54.028 "ffdhe4096", 00:03:54.028 "ffdhe6144", 00:03:54.028 "ffdhe8192" 00:03:54.028 ] 00:03:54.028 } 00:03:54.028 }, 00:03:54.028 { 00:03:54.028 "method": "nvmf_set_max_subsystems", 00:03:54.028 "params": { 00:03:54.028 "max_subsystems": 1024 00:03:54.028 } 00:03:54.028 }, 00:03:54.028 { 00:03:54.028 "method": "nvmf_set_crdt", 00:03:54.028 "params": { 00:03:54.028 "crdt1": 0, 00:03:54.028 "crdt2": 0, 00:03:54.028 "crdt3": 0 00:03:54.028 } 00:03:54.028 }, 00:03:54.028 { 00:03:54.028 "method": "nvmf_create_transport", 00:03:54.028 "params": { 00:03:54.028 "trtype": "TCP", 00:03:54.028 "max_queue_depth": 128, 00:03:54.028 "max_io_qpairs_per_ctrlr": 127, 00:03:54.028 "in_capsule_data_size": 4096, 00:03:54.028 "max_io_size": 131072, 00:03:54.028 "io_unit_size": 131072, 00:03:54.028 "max_aq_depth": 128, 00:03:54.028 "num_shared_buffers": 511, 00:03:54.028 "buf_cache_size": 4294967295, 00:03:54.028 "dif_insert_or_strip": false, 00:03:54.028 "zcopy": false, 00:03:54.028 "c2h_success": true, 00:03:54.028 "sock_priority": 0, 00:03:54.028 "abort_timeout_sec": 1, 00:03:54.028 "ack_timeout": 0, 00:03:54.028 "data_wr_pool_size": 0 00:03:54.028 } 00:03:54.028 } 00:03:54.028 ] 00:03:54.028 }, 00:03:54.028 { 00:03:54.028 "subsystem": "iscsi", 00:03:54.028 "config": [ 00:03:54.028 { 00:03:54.028 "method": "iscsi_set_options", 00:03:54.028 "params": { 00:03:54.028 "node_base": "iqn.2016-06.io.spdk", 00:03:54.028 "max_sessions": 128, 00:03:54.028 "max_connections_per_session": 2, 00:03:54.028 "max_queue_depth": 64, 00:03:54.028 
"default_time2wait": 2, 00:03:54.028 "default_time2retain": 20, 00:03:54.028 "first_burst_length": 8192, 00:03:54.028 "immediate_data": true, 00:03:54.028 "allow_duplicated_isid": false, 00:03:54.028 "error_recovery_level": 0, 00:03:54.028 "nop_timeout": 60, 00:03:54.028 "nop_in_interval": 30, 00:03:54.028 "disable_chap": false, 00:03:54.028 "require_chap": false, 00:03:54.028 "mutual_chap": false, 00:03:54.028 "chap_group": 0, 00:03:54.028 "max_large_datain_per_connection": 64, 00:03:54.028 "max_r2t_per_connection": 4, 00:03:54.028 "pdu_pool_size": 36864, 00:03:54.028 "immediate_data_pool_size": 16384, 00:03:54.028 "data_out_pool_size": 2048 00:03:54.028 } 00:03:54.028 } 00:03:54.028 ] 00:03:54.028 } 00:03:54.028 ] 00:03:54.028 } 00:03:54.028 09:03:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:03:54.028 09:03:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2817479 00:03:54.028 09:03:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 2817479 ']' 00:03:54.028 09:03:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 2817479 00:03:54.028 09:03:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:03:54.028 09:03:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:54.028 09:03:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2817479 00:03:54.028 09:03:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:54.028 09:03:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:54.028 09:03:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2817479' 00:03:54.028 killing process with pid 2817479 00:03:54.028 09:03:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 2817479 00:03:54.028 09:03:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 2817479 00:03:56.556 09:04:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2817906 00:03:56.556 09:04:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:56.556 09:04:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:01.817 09:04:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2817906 00:04:01.817 09:04:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 2817906 ']' 00:04:01.817 09:04:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 2817906 00:04:01.817 09:04:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:01.817 09:04:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:01.817 09:04:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2817906 00:04:01.817 09:04:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:01.817 09:04:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:01.817 09:04:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2817906' 00:04:01.817 killing process with pid 2817906 00:04:01.817 
09:04:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 2817906 00:04:01.817 09:04:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 2817906 00:04:04.348 09:04:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:04.348 09:04:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:04.348 00:04:04.348 real 0m11.427s 00:04:04.348 user 0m10.855s 00:04:04.348 sys 0m1.118s 00:04:04.348 09:04:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:04.348 09:04:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:04.348 ************************************ 00:04:04.348 END TEST skip_rpc_with_json 00:04:04.348 ************************************ 00:04:04.348 09:04:08 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:04.348 09:04:08 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:04.348 09:04:08 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:04.348 09:04:08 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:04.348 ************************************ 00:04:04.348 START TEST skip_rpc_with_delay 00:04:04.348 ************************************ 00:04:04.348 09:04:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:04.348 09:04:08 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:04.348 09:04:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:04.348 09:04:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:04.348 09:04:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:04.348 09:04:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:04.349 09:04:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:04.349 09:04:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:04.349 09:04:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:04.349 09:04:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:04.349 09:04:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:04.349 09:04:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:04.349 09:04:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:04.349 [2024-11-17 09:04:08.889102] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC 
server is going to be started. 00:04:04.349 09:04:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:04.349 09:04:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:04.349 09:04:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:04.349 09:04:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:04.349 00:04:04.349 real 0m0.148s 00:04:04.349 user 0m0.078s 00:04:04.349 sys 0m0.069s 00:04:04.349 09:04:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:04.349 09:04:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:04.349 ************************************ 00:04:04.349 END TEST skip_rpc_with_delay 00:04:04.349 ************************************ 00:04:04.349 09:04:08 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:04.349 09:04:08 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:04.349 09:04:08 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:04.349 09:04:08 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:04.349 09:04:08 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:04.349 09:04:08 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:04.349 ************************************ 00:04:04.349 START TEST exit_on_failed_rpc_init 00:04:04.349 ************************************ 00:04:04.349 09:04:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:04.349 09:04:08 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2818879 00:04:04.349 09:04:08 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:04.349 09:04:08 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2818879 00:04:04.349 09:04:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 2818879 ']' 00:04:04.349 09:04:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:04.349 09:04:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:04.349 09:04:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:04.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:04.349 09:04:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:04.349 09:04:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:04.349 [2024-11-17 09:04:09.085346] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:04:04.349 [2024-11-17 09:04:09.085529] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2818879 ] 00:04:04.349 [2024-11-17 09:04:09.221759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:04.349 [2024-11-17 09:04:09.354686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:05.723 09:04:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:05.723 09:04:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:05.723 09:04:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:05.723 09:04:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:05.723 09:04:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:05.723 09:04:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:05.723 09:04:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:05.723 09:04:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:05.723 09:04:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:05.723 09:04:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:05.723 09:04:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:05.723 09:04:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:05.723 09:04:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:05.723 09:04:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:05.723 09:04:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:05.723 [2024-11-17 09:04:10.412098] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:04:05.723 [2024-11-17 09:04:10.412244] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2819022 ] 00:04:05.723 [2024-11-17 09:04:10.555801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:05.723 [2024-11-17 09:04:10.692788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:05.723 [2024-11-17 09:04:10.692973] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
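The error above is the intended outcome of the exit_on_failed_rpc_init case, and the lines that follow record spdk_rpc_initialize() giving up and the app stopping with a non-zero code: the second spdk_tgt (core mask 0x2) cannot bind the default RPC socket /var/tmp/spdk.sock, which the first instance (core mask 0x1) still holds. Running two targets side by side needs distinct RPC socket paths. A minimal sketch, reusing only flags that appear elsewhere in this log (-m, -r, rpc.py -s); the two socket names are illustrative, not taken from the test:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # give each instance its own RPC socket instead of the default /var/tmp/spdk.sock
  $SPDK/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk_a.sock &
  $SPDK/build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk_b.sock &
  # address a specific instance by passing the matching socket to rpc.py
  $SPDK/scripts/rpc.py -s /var/tmp/spdk_a.sock rpc_get_methods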
00:04:05.723 [2024-11-17 09:04:10.693025] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:05.723 [2024-11-17 09:04:10.693049] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:06.288 09:04:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:06.288 09:04:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:06.288 09:04:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:06.288 09:04:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:06.288 09:04:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:06.288 09:04:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:06.288 09:04:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:06.288 09:04:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2818879 00:04:06.288 09:04:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 2818879 ']' 00:04:06.288 09:04:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 2818879 00:04:06.288 09:04:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:06.288 09:04:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:06.288 09:04:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2818879 00:04:06.288 09:04:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:06.288 09:04:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:06.288 09:04:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2818879' 00:04:06.288 killing process with pid 2818879 00:04:06.288 09:04:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 2818879 00:04:06.288 09:04:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 2818879 00:04:08.815 00:04:08.815 real 0m4.457s 00:04:08.815 user 0m4.915s 00:04:08.815 sys 0m0.754s 00:04:08.815 09:04:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:08.815 09:04:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:08.815 ************************************ 00:04:08.815 END TEST exit_on_failed_rpc_init 00:04:08.815 ************************************ 00:04:08.815 09:04:13 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:08.815 00:04:08.815 real 0m23.818s 00:04:08.815 user 0m22.965s 00:04:08.815 sys 0m2.625s 00:04:08.815 09:04:13 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:08.815 09:04:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:08.815 ************************************ 00:04:08.815 END TEST skip_rpc 00:04:08.815 ************************************ 00:04:08.815 09:04:13 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:08.815 09:04:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:08.815 09:04:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:08.815 09:04:13 -- 
common/autotest_common.sh@10 -- # set +x 00:04:08.815 ************************************ 00:04:08.815 START TEST rpc_client 00:04:08.815 ************************************ 00:04:08.815 09:04:13 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:08.815 * Looking for test storage... 00:04:08.815 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:08.815 09:04:13 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:08.815 09:04:13 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:04:08.815 09:04:13 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:08.815 09:04:13 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:08.815 09:04:13 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:08.815 09:04:13 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:08.815 09:04:13 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:08.815 09:04:13 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:08.815 09:04:13 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:08.815 09:04:13 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:08.815 09:04:13 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:08.815 09:04:13 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:08.815 09:04:13 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:08.815 09:04:13 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:08.815 09:04:13 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:08.815 09:04:13 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:08.815 09:04:13 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:08.815 09:04:13 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:08.815 09:04:13 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:08.815 09:04:13 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:08.815 09:04:13 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:08.815 09:04:13 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:08.815 09:04:13 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:08.815 09:04:13 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:08.815 09:04:13 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:08.815 09:04:13 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:08.815 09:04:13 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:08.815 09:04:13 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:08.815 09:04:13 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:08.815 09:04:13 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:08.815 09:04:13 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:08.815 09:04:13 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:08.815 09:04:13 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:08.815 09:04:13 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:08.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.815 --rc genhtml_branch_coverage=1 00:04:08.815 --rc genhtml_function_coverage=1 00:04:08.815 --rc genhtml_legend=1 00:04:08.815 --rc geninfo_all_blocks=1 00:04:08.815 --rc geninfo_unexecuted_blocks=1 00:04:08.815 00:04:08.815 ' 00:04:08.815 09:04:13 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:08.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.815 --rc genhtml_branch_coverage=1 00:04:08.815 --rc genhtml_function_coverage=1 00:04:08.815 --rc genhtml_legend=1 00:04:08.815 --rc geninfo_all_blocks=1 00:04:08.815 --rc geninfo_unexecuted_blocks=1 00:04:08.815 00:04:08.815 ' 00:04:08.815 09:04:13 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:08.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.815 --rc genhtml_branch_coverage=1 00:04:08.815 --rc genhtml_function_coverage=1 00:04:08.815 --rc genhtml_legend=1 00:04:08.815 --rc geninfo_all_blocks=1 00:04:08.815 --rc geninfo_unexecuted_blocks=1 00:04:08.815 00:04:08.815 ' 00:04:08.815 09:04:13 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:08.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.815 --rc genhtml_branch_coverage=1 00:04:08.815 --rc genhtml_function_coverage=1 00:04:08.815 --rc genhtml_legend=1 00:04:08.815 --rc geninfo_all_blocks=1 00:04:08.815 --rc geninfo_unexecuted_blocks=1 00:04:08.815 00:04:08.815 ' 00:04:08.815 09:04:13 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:08.815 OK 00:04:08.815 09:04:13 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:08.815 00:04:08.815 real 0m0.193s 00:04:08.815 user 0m0.120s 00:04:08.815 sys 0m0.082s 00:04:08.815 09:04:13 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:08.815 09:04:13 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:08.815 ************************************ 00:04:08.815 END TEST rpc_client 00:04:08.815 ************************************ 00:04:08.815 09:04:13 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
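Most of the rpc_client lines above are scripts/common.sh deciding whether the installed lcov supports the branch/function coverage flags: `lt 1.15 2` splits both version strings on '.', '-' and ':' and compares the resulting fields numerically, left to right. A rough standalone rendering of that comparison (function and variable names here are illustrative, not the ones used in scripts/common.sh, and non-numeric fields are not handled):

  version_lt() {
      # return 0 (true) if $1 sorts before $2, comparing separated fields numerically
      local -a a b
      IFS='.-:' read -ra a <<< "$1"
      IFS='.-:' read -ra b <<< "$2"
      local i
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # missing fields count as 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1    # equal is not "less than"
  }
  version_lt 1.15 2 && echo "lcov 1.15 predates 2.x"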
00:04:08.815 09:04:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:08.815 09:04:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:08.815 09:04:13 -- common/autotest_common.sh@10 -- # set +x 00:04:08.815 ************************************ 00:04:08.816 START TEST json_config 00:04:08.816 ************************************ 00:04:08.816 09:04:13 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:08.816 09:04:13 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:08.816 09:04:13 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:04:08.816 09:04:13 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:09.074 09:04:13 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:09.074 09:04:13 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:09.074 09:04:13 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:09.074 09:04:13 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:09.074 09:04:13 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:09.074 09:04:13 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:09.074 09:04:13 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:09.074 09:04:13 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:09.074 09:04:13 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:09.074 09:04:13 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:09.074 09:04:13 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:09.074 09:04:13 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:09.074 09:04:13 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:09.074 09:04:13 json_config -- scripts/common.sh@345 -- # : 1 00:04:09.074 09:04:13 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:09.074 09:04:13 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:09.074 09:04:13 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:09.074 09:04:13 json_config -- scripts/common.sh@353 -- # local d=1 00:04:09.074 09:04:13 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:09.074 09:04:13 json_config -- scripts/common.sh@355 -- # echo 1 00:04:09.074 09:04:13 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:09.074 09:04:13 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:09.075 09:04:13 json_config -- scripts/common.sh@353 -- # local d=2 00:04:09.075 09:04:13 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:09.075 09:04:13 json_config -- scripts/common.sh@355 -- # echo 2 00:04:09.075 09:04:13 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:09.075 09:04:13 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:09.075 09:04:13 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:09.075 09:04:13 json_config -- scripts/common.sh@368 -- # return 0 00:04:09.075 09:04:13 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:09.075 09:04:13 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:09.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.075 --rc genhtml_branch_coverage=1 00:04:09.075 --rc genhtml_function_coverage=1 00:04:09.075 --rc genhtml_legend=1 00:04:09.075 --rc geninfo_all_blocks=1 00:04:09.075 --rc geninfo_unexecuted_blocks=1 00:04:09.075 00:04:09.075 ' 00:04:09.075 09:04:13 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:09.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.075 --rc genhtml_branch_coverage=1 00:04:09.075 --rc genhtml_function_coverage=1 00:04:09.075 --rc genhtml_legend=1 00:04:09.075 --rc geninfo_all_blocks=1 00:04:09.075 --rc geninfo_unexecuted_blocks=1 00:04:09.075 00:04:09.075 ' 00:04:09.075 09:04:13 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:09.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.075 --rc genhtml_branch_coverage=1 00:04:09.075 --rc genhtml_function_coverage=1 00:04:09.075 --rc genhtml_legend=1 00:04:09.075 --rc geninfo_all_blocks=1 00:04:09.075 --rc geninfo_unexecuted_blocks=1 00:04:09.075 00:04:09.075 ' 00:04:09.075 09:04:13 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:09.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.075 --rc genhtml_branch_coverage=1 00:04:09.075 --rc genhtml_function_coverage=1 00:04:09.075 --rc genhtml_legend=1 00:04:09.075 --rc geninfo_all_blocks=1 00:04:09.075 --rc geninfo_unexecuted_blocks=1 00:04:09.075 00:04:09.075 ' 00:04:09.075 09:04:13 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:09.075 09:04:13 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:09.075 09:04:13 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:09.075 09:04:13 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:09.075 09:04:13 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:09.075 09:04:13 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:09.075 09:04:13 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:09.075 09:04:13 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:09.075 09:04:13 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:04:09.075 09:04:13 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:09.075 09:04:13 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:09.075 09:04:13 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:09.075 09:04:13 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:09.075 09:04:13 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:09.075 09:04:13 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:09.075 09:04:13 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:09.075 09:04:13 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:09.075 09:04:13 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:09.075 09:04:13 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:09.075 09:04:13 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:09.075 09:04:13 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:09.075 09:04:13 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:09.075 09:04:13 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:09.075 09:04:13 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:09.075 09:04:13 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:09.075 09:04:13 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:09.075 09:04:13 json_config -- paths/export.sh@5 -- # export PATH 00:04:09.075 09:04:13 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:09.075 09:04:13 json_config -- nvmf/common.sh@51 -- # : 0 00:04:09.075 09:04:13 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:09.075 09:04:13 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
00:04:09.075 09:04:13 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:09.075 09:04:13 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:09.075 09:04:13 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:09.075 09:04:13 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:09.075 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:09.075 09:04:13 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:09.075 09:04:13 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:09.075 09:04:13 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:09.075 09:04:13 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:09.075 09:04:13 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:09.075 09:04:13 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:09.075 09:04:13 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:09.075 09:04:13 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:09.075 09:04:13 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:09.075 09:04:13 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:09.075 09:04:13 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:09.075 09:04:13 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:09.075 09:04:13 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:09.075 09:04:13 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:09.075 09:04:13 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:09.075 09:04:13 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:09.075 09:04:13 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:09.075 09:04:13 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:09.075 09:04:13 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:09.075 INFO: JSON configuration test init 00:04:09.075 09:04:13 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:09.075 09:04:13 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:09.075 09:04:13 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:09.075 09:04:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:09.075 09:04:13 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:09.075 09:04:13 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:09.075 09:04:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:09.075 09:04:13 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:09.075 09:04:13 json_config -- 
json_config/common.sh@9 -- # local app=target 00:04:09.075 09:04:13 json_config -- json_config/common.sh@10 -- # shift 00:04:09.075 09:04:13 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:09.075 09:04:13 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:09.075 09:04:13 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:09.075 09:04:13 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:09.075 09:04:13 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:09.076 09:04:13 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2819619 00:04:09.076 09:04:13 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:09.076 09:04:13 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:09.076 Waiting for target to run... 00:04:09.076 09:04:13 json_config -- json_config/common.sh@25 -- # waitforlisten 2819619 /var/tmp/spdk_tgt.sock 00:04:09.076 09:04:13 json_config -- common/autotest_common.sh@835 -- # '[' -z 2819619 ']' 00:04:09.076 09:04:13 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:09.076 09:04:13 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:09.076 09:04:13 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:09.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:09.076 09:04:13 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:09.076 09:04:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:09.076 [2024-11-17 09:04:14.018811] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:04:09.076 [2024-11-17 09:04:14.018958] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2819619 ] 00:04:09.642 [2024-11-17 09:04:14.452453] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:09.642 [2024-11-17 09:04:14.574583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:10.206 09:04:14 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:10.206 09:04:14 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:10.206 09:04:14 json_config -- json_config/common.sh@26 -- # echo '' 00:04:10.206 00:04:10.206 09:04:14 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:10.206 09:04:14 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:10.206 09:04:14 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:10.206 09:04:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:10.206 09:04:14 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:10.206 09:04:14 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:10.206 09:04:14 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:10.206 09:04:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:10.206 09:04:15 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:10.206 09:04:15 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:10.206 09:04:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:14.388 09:04:18 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:04:14.388 09:04:18 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:14.388 09:04:18 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:14.388 09:04:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:14.388 09:04:18 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:14.388 09:04:18 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:14.388 09:04:18 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:14.388 09:04:18 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:14.388 09:04:18 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:14.388 09:04:18 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:14.388 09:04:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:14.388 09:04:18 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:14.388 09:04:19 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:14.388 09:04:19 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:14.388 09:04:19 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:14.388 09:04:19 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:14.388 09:04:19 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:14.388 09:04:19 json_config -- json_config/json_config.sh@54 -- # sort 00:04:14.388 09:04:19 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:14.388 09:04:19 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:04:14.388 09:04:19 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:14.388 09:04:19 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:14.388 09:04:19 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:14.388 09:04:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:14.388 09:04:19 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:14.388 09:04:19 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:14.388 09:04:19 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:14.388 09:04:19 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:14.388 09:04:19 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:14.388 09:04:19 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:14.388 09:04:19 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:14.388 09:04:19 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:14.388 09:04:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:14.388 09:04:19 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:14.388 09:04:19 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:14.388 09:04:19 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:14.388 09:04:19 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:14.388 09:04:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:14.645 MallocForNvmf0 00:04:14.646 09:04:19 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:14.646 09:04:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:14.903 MallocForNvmf1 00:04:14.903 09:04:19 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:14.903 09:04:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:15.163 [2024-11-17 09:04:19.990836] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:15.163 09:04:20 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:15.163 09:04:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:15.420 09:04:20 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:15.421 09:04:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:15.678 09:04:20 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:15.678 09:04:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:15.936 09:04:20 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:15.936 09:04:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:16.194 [2024-11-17 09:04:21.074616] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:16.194 09:04:21 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:16.194 09:04:21 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:16.194 09:04:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:16.194 09:04:21 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:16.194 09:04:21 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:16.194 09:04:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:16.194 09:04:21 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:04:16.194 09:04:21 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:16.194 09:04:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:16.451 MallocBdevForConfigChangeCheck 00:04:16.451 09:04:21 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:16.451 09:04:21 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:16.451 09:04:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:16.451 09:04:21 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:16.451 09:04:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:17.016 09:04:21 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:04:17.016 INFO: shutting down applications... 
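Before the shutdown announced above, the json_config target was assembled with a handful of RPCs, all visible verbatim in the trace: two malloc bdevs, a TCP transport, one subsystem, two namespaces, and a listener on 127.0.0.1:4420. A condensed replay of those calls, with paths, socket and arguments copied from the log:

  RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
  $RPC bdev_malloc_create 8 512 --name MallocForNvmf0      # backing bdevs for the namespaces
  $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
  $RPC nvmf_create_transport -t tcp -u 8192 -c 0           # TCP transport, as in the trace
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420

The extra MallocBdevForConfigChangeCheck bdev created afterwards only exists so that deleting it later produces a detectable configuration change.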
00:04:17.016 09:04:21 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:17.016 09:04:21 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:17.016 09:04:21 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:17.016 09:04:21 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:18.913 Calling clear_iscsi_subsystem 00:04:18.913 Calling clear_nvmf_subsystem 00:04:18.913 Calling clear_nbd_subsystem 00:04:18.913 Calling clear_ublk_subsystem 00:04:18.913 Calling clear_vhost_blk_subsystem 00:04:18.913 Calling clear_vhost_scsi_subsystem 00:04:18.913 Calling clear_bdev_subsystem 00:04:18.913 09:04:23 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:18.913 09:04:23 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:18.913 09:04:23 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:18.913 09:04:23 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:18.913 09:04:23 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:18.913 09:04:23 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:18.913 09:04:23 json_config -- json_config/json_config.sh@352 -- # break 00:04:18.913 09:04:23 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:18.913 09:04:23 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:04:18.913 09:04:23 json_config -- json_config/common.sh@31 -- # local app=target 00:04:18.913 09:04:23 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:18.913 09:04:23 json_config -- json_config/common.sh@35 -- # [[ -n 2819619 ]] 00:04:18.913 09:04:23 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2819619 00:04:18.913 09:04:23 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:18.913 09:04:23 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:18.913 09:04:23 json_config -- json_config/common.sh@41 -- # kill -0 2819619 00:04:18.913 09:04:23 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:19.479 09:04:24 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:19.479 09:04:24 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:19.479 09:04:24 json_config -- json_config/common.sh@41 -- # kill -0 2819619 00:04:19.479 09:04:24 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:20.045 09:04:24 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:20.045 09:04:24 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:20.045 09:04:24 json_config -- json_config/common.sh@41 -- # kill -0 2819619 00:04:20.045 09:04:24 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:20.045 09:04:24 json_config -- json_config/common.sh@43 -- # break 00:04:20.045 09:04:24 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:20.045 09:04:24 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:20.045 SPDK target shutdown done 00:04:20.045 09:04:24 
json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:20.045 INFO: relaunching applications... 00:04:20.045 09:04:24 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:20.045 09:04:24 json_config -- json_config/common.sh@9 -- # local app=target 00:04:20.045 09:04:24 json_config -- json_config/common.sh@10 -- # shift 00:04:20.045 09:04:24 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:20.045 09:04:24 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:20.045 09:04:24 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:20.045 09:04:24 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:20.045 09:04:24 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:20.045 09:04:24 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2821007 00:04:20.045 09:04:24 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:20.045 09:04:24 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:20.045 Waiting for target to run... 00:04:20.045 09:04:24 json_config -- json_config/common.sh@25 -- # waitforlisten 2821007 /var/tmp/spdk_tgt.sock 00:04:20.045 09:04:24 json_config -- common/autotest_common.sh@835 -- # '[' -z 2821007 ']' 00:04:20.045 09:04:24 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:20.045 09:04:24 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:20.045 09:04:24 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:20.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:20.045 09:04:24 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:20.045 09:04:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:20.045 [2024-11-17 09:04:24.986222] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:04:20.045 [2024-11-17 09:04:24.986406] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2821007 ] 00:04:20.612 [2024-11-17 09:04:25.591340] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:20.869 [2024-11-17 09:04:25.721202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.057 [2024-11-17 09:04:29.509173] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:25.057 [2024-11-17 09:04:29.541748] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:25.057 09:04:29 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:25.057 09:04:29 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:25.057 09:04:29 json_config -- json_config/common.sh@26 -- # echo '' 00:04:25.057 00:04:25.057 09:04:29 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:25.057 09:04:29 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:25.057 INFO: Checking if target configuration is the same... 00:04:25.057 09:04:29 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:25.057 09:04:29 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:25.057 09:04:29 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:25.057 + '[' 2 -ne 2 ']' 00:04:25.057 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:25.057 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:25.057 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:25.057 +++ basename /dev/fd/62 00:04:25.057 ++ mktemp /tmp/62.XXX 00:04:25.057 + tmp_file_1=/tmp/62.qeN 00:04:25.057 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:25.057 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:25.057 + tmp_file_2=/tmp/spdk_tgt_config.json.EAO 00:04:25.057 + ret=0 00:04:25.057 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:25.057 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:25.057 + diff -u /tmp/62.qeN /tmp/spdk_tgt_config.json.EAO 00:04:25.057 + echo 'INFO: JSON config files are the same' 00:04:25.057 INFO: JSON config files are the same 00:04:25.057 + rm /tmp/62.qeN /tmp/spdk_tgt_config.json.EAO 00:04:25.057 + exit 0 00:04:25.057 09:04:30 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:25.057 09:04:30 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:25.057 INFO: changing configuration and checking if this can be detected... 
00:04:25.057 09:04:30 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:25.057 09:04:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:25.315 09:04:30 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:25.315 09:04:30 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:25.315 09:04:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:25.315 + '[' 2 -ne 2 ']' 00:04:25.315 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:25.315 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:25.572 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:25.572 +++ basename /dev/fd/62 00:04:25.572 ++ mktemp /tmp/62.XXX 00:04:25.572 + tmp_file_1=/tmp/62.Xv0 00:04:25.572 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:25.572 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:25.572 + tmp_file_2=/tmp/spdk_tgt_config.json.v1C 00:04:25.572 + ret=0 00:04:25.572 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:25.830 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:25.830 + diff -u /tmp/62.Xv0 /tmp/spdk_tgt_config.json.v1C 00:04:25.830 + ret=1 00:04:25.830 + echo '=== Start of file: /tmp/62.Xv0 ===' 00:04:25.830 + cat /tmp/62.Xv0 00:04:25.830 + echo '=== End of file: /tmp/62.Xv0 ===' 00:04:25.830 + echo '' 00:04:25.830 + echo '=== Start of file: /tmp/spdk_tgt_config.json.v1C ===' 00:04:25.830 + cat /tmp/spdk_tgt_config.json.v1C 00:04:25.830 + echo '=== End of file: /tmp/spdk_tgt_config.json.v1C ===' 00:04:25.830 + echo '' 00:04:25.830 + rm /tmp/62.Xv0 /tmp/spdk_tgt_config.json.v1C 00:04:25.830 + exit 1 00:04:25.830 09:04:30 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:04:25.830 INFO: configuration change detected. 
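The change-detection pass repeats the same diff after mutating the running target. A sketch of that step, reusing the canonicalized dump from the previous snippet (/tmp/saved_sorted.json is assumed to still be in place):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# remove the throwaway bdev that exists only so its absence is detectable
"$SPDK/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
"$SPDK/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock save_config \
    | "$SPDK/test/json_config/config_filter.py" -method sort > /tmp/live_sorted.json
if ! diff -u /tmp/saved_sorted.json /tmp/live_sorted.json > /dev/null; then
    echo 'INFO: configuration change detected.'
fi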
00:04:25.830 09:04:30 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:25.830 09:04:30 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:25.830 09:04:30 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:25.830 09:04:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:25.830 09:04:30 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:25.830 09:04:30 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:25.830 09:04:30 json_config -- json_config/json_config.sh@324 -- # [[ -n 2821007 ]] 00:04:25.830 09:04:30 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:25.830 09:04:30 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:25.830 09:04:30 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:25.830 09:04:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:25.830 09:04:30 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:25.830 09:04:30 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:25.830 09:04:30 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:25.830 09:04:30 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:25.830 09:04:30 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:25.830 09:04:30 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:25.830 09:04:30 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:25.830 09:04:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:25.830 09:04:30 json_config -- json_config/json_config.sh@330 -- # killprocess 2821007 00:04:25.830 09:04:30 json_config -- common/autotest_common.sh@954 -- # '[' -z 2821007 ']' 00:04:25.830 09:04:30 json_config -- common/autotest_common.sh@958 -- # kill -0 2821007 00:04:25.830 09:04:30 json_config -- common/autotest_common.sh@959 -- # uname 00:04:25.830 09:04:30 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:25.830 09:04:30 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2821007 00:04:26.088 09:04:30 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:26.088 09:04:30 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:26.088 09:04:30 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2821007' 00:04:26.088 killing process with pid 2821007 00:04:26.088 09:04:30 json_config -- common/autotest_common.sh@973 -- # kill 2821007 00:04:26.088 09:04:30 json_config -- common/autotest_common.sh@978 -- # wait 2821007 00:04:28.616 09:04:33 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:28.616 09:04:33 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:28.616 09:04:33 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:28.616 09:04:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:28.616 09:04:33 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:28.616 09:04:33 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:28.616 INFO: Success 00:04:28.616 00:04:28.616 real 0m19.529s 
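The teardown above relies on a killprocess-style helper: confirm the PID is still alive, signal it, then reap it so the next test starts from a clean slate. Roughly (a sketch, not the autotest_common.sh implementation):

killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 0   # nothing to do if it already exited
    echo "killing process with pid $pid"
    kill "$pid"                              # default SIGTERM; the reactor shuts down cleanly
    wait "$pid" 2>/dev/null || true          # reap the child and ignore its exit status
}
killprocess 2821007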
00:04:28.616 user 0m21.219s 00:04:28.616 sys 0m3.050s 00:04:28.616 09:04:33 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:28.616 09:04:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:28.616 ************************************ 00:04:28.616 END TEST json_config 00:04:28.616 ************************************ 00:04:28.616 09:04:33 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:28.616 09:04:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:28.616 09:04:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:28.616 09:04:33 -- common/autotest_common.sh@10 -- # set +x 00:04:28.616 ************************************ 00:04:28.616 START TEST json_config_extra_key 00:04:28.616 ************************************ 00:04:28.616 09:04:33 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:28.616 09:04:33 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:28.616 09:04:33 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:04:28.616 09:04:33 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:28.616 09:04:33 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:28.616 09:04:33 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:28.616 09:04:33 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:28.616 09:04:33 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:28.616 09:04:33 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:28.616 09:04:33 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:28.616 09:04:33 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:28.616 09:04:33 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:28.616 09:04:33 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:28.616 09:04:33 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:28.616 09:04:33 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:28.616 09:04:33 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:28.616 09:04:33 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:28.616 09:04:33 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:28.616 09:04:33 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:28.616 09:04:33 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:28.616 09:04:33 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:28.616 09:04:33 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:28.616 09:04:33 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:28.616 09:04:33 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:28.616 09:04:33 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:28.616 09:04:33 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:28.616 09:04:33 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:28.616 09:04:33 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:28.616 09:04:33 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:28.616 09:04:33 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:28.616 09:04:33 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:28.616 09:04:33 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:28.616 09:04:33 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:28.616 09:04:33 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:28.616 09:04:33 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:28.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.616 --rc genhtml_branch_coverage=1 00:04:28.616 --rc genhtml_function_coverage=1 00:04:28.616 --rc genhtml_legend=1 00:04:28.616 --rc geninfo_all_blocks=1 00:04:28.616 --rc geninfo_unexecuted_blocks=1 00:04:28.616 00:04:28.616 ' 00:04:28.616 09:04:33 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:28.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.616 --rc genhtml_branch_coverage=1 00:04:28.616 --rc genhtml_function_coverage=1 00:04:28.616 --rc genhtml_legend=1 00:04:28.616 --rc geninfo_all_blocks=1 00:04:28.616 --rc geninfo_unexecuted_blocks=1 00:04:28.616 00:04:28.616 ' 00:04:28.616 09:04:33 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:28.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.616 --rc genhtml_branch_coverage=1 00:04:28.616 --rc genhtml_function_coverage=1 00:04:28.616 --rc genhtml_legend=1 00:04:28.616 --rc geninfo_all_blocks=1 00:04:28.617 --rc geninfo_unexecuted_blocks=1 00:04:28.617 00:04:28.617 ' 00:04:28.617 09:04:33 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:28.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.617 --rc genhtml_branch_coverage=1 00:04:28.617 --rc genhtml_function_coverage=1 00:04:28.617 --rc genhtml_legend=1 00:04:28.617 --rc geninfo_all_blocks=1 00:04:28.617 --rc geninfo_unexecuted_blocks=1 00:04:28.617 00:04:28.617 ' 00:04:28.617 09:04:33 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:28.617 09:04:33 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:28.617 09:04:33 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:28.617 09:04:33 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:28.617 09:04:33 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:28.617 09:04:33 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:28.617 
09:04:33 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:28.617 09:04:33 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:28.617 09:04:33 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:28.617 09:04:33 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:28.617 09:04:33 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:28.617 09:04:33 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:28.617 09:04:33 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:28.617 09:04:33 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:28.617 09:04:33 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:28.617 09:04:33 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:28.617 09:04:33 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:28.617 09:04:33 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:28.617 09:04:33 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:28.617 09:04:33 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:28.617 09:04:33 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:28.617 09:04:33 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:28.617 09:04:33 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:28.617 09:04:33 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:28.617 09:04:33 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:28.617 09:04:33 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:28.617 09:04:33 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:28.617 09:04:33 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:28.617 09:04:33 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:28.617 09:04:33 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:28.617 09:04:33 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:28.617 09:04:33 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:28.617 09:04:33 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:28.617 09:04:33 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:28.617 09:04:33 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:28.617 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:28.617 09:04:33 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:28.617 09:04:33 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:28.617 09:04:33 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:28.617 09:04:33 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:28.617 09:04:33 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:28.617 09:04:33 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:28.617 09:04:33 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:28.617 09:04:33 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:28.617 09:04:33 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:28.617 09:04:33 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:28.617 09:04:33 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:28.617 09:04:33 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:28.617 09:04:33 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:28.617 09:04:33 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:28.617 INFO: launching applications... 
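The "integer expression expected" complaint above comes from a numeric test on an empty string ('[' '' -eq 1 ']' at nvmf/common.sh line 33). A sketch of the usual guards, with TEST_FLAG standing in for whichever variable was empty there (the name is hypothetical):

# guard the numeric comparison explicitly...
if [ -n "${TEST_FLAG:-}" ] && [ "$TEST_FLAG" -eq 1 ]; then
    echo "flag-specific setup would run here"
fi
# ...or give the variable a numeric default before comparing
if [ "${TEST_FLAG:-0}" -eq 1 ]; then
    echo "flag-specific setup would run here"
fi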
00:04:28.617 09:04:33 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:28.617 09:04:33 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:28.617 09:04:33 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:28.617 09:04:33 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:28.617 09:04:33 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:28.617 09:04:33 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:28.617 09:04:33 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:28.617 09:04:33 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:28.617 09:04:33 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2822189 00:04:28.617 09:04:33 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:28.617 09:04:33 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:28.617 Waiting for target to run... 00:04:28.617 09:04:33 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2822189 /var/tmp/spdk_tgt.sock 00:04:28.617 09:04:33 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 2822189 ']' 00:04:28.617 09:04:33 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:28.617 09:04:33 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:28.617 09:04:33 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:28.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:28.617 09:04:33 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:28.617 09:04:33 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:28.617 [2024-11-17 09:04:33.564184] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:04:28.618 [2024-11-17 09:04:33.564327] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2822189 ] 00:04:29.184 [2024-11-17 09:04:33.993205] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:29.184 [2024-11-17 09:04:34.115363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.116 09:04:34 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:30.116 09:04:34 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:30.116 09:04:34 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:30.116 00:04:30.117 09:04:34 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:30.117 INFO: shutting down applications... 
00:04:30.117 09:04:34 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:30.117 09:04:34 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:30.117 09:04:34 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:30.117 09:04:34 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2822189 ]] 00:04:30.117 09:04:34 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2822189 00:04:30.117 09:04:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:30.117 09:04:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:30.117 09:04:34 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2822189 00:04:30.117 09:04:34 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:30.374 09:04:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:30.374 09:04:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:30.374 09:04:35 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2822189 00:04:30.374 09:04:35 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:30.940 09:04:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:30.940 09:04:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:30.940 09:04:35 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2822189 00:04:30.940 09:04:35 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:31.506 09:04:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:31.506 09:04:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:31.506 09:04:36 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2822189 00:04:31.506 09:04:36 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:32.072 09:04:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:32.072 09:04:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:32.072 09:04:36 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2822189 00:04:32.072 09:04:36 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:32.639 09:04:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:32.639 09:04:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:32.639 09:04:37 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2822189 00:04:32.639 09:04:37 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:32.897 09:04:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:32.897 09:04:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:32.897 09:04:37 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2822189 00:04:32.897 09:04:37 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:32.897 09:04:37 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:32.897 09:04:37 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:32.897 09:04:37 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:32.897 SPDK target shutdown done 00:04:32.897 09:04:37 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:32.897 Success 00:04:32.897 00:04:32.897 real 0m4.561s 00:04:32.897 user 0m4.188s 00:04:32.897 sys 0m0.662s 
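The repeated "kill -0 ... sleep 0.5" visible above is the graceful-shutdown loop: send SIGINT, then poll until the target exits, giving up after 30 attempts. A sketch of that pattern with the same budget:

shutdown_app() {
    local pid=$1
    kill -SIGINT "$pid"                       # ask spdk_tgt to exit gracefully
    for ((i = 0; i < 30; i++)); do
        if ! kill -0 "$pid" 2>/dev/null; then
            echo 'SPDK target shutdown done'
            return 0
        fi
        sleep 0.5
    done
    echo "target did not stop, forcing it" >&2
    kill -9 "$pid"
}
shutdown_app 2822189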
00:04:32.897 09:04:37 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:32.897 09:04:37 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:32.897 ************************************ 00:04:32.897 END TEST json_config_extra_key 00:04:32.897 ************************************ 00:04:33.157 09:04:37 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:33.157 09:04:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:33.157 09:04:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:33.157 09:04:37 -- common/autotest_common.sh@10 -- # set +x 00:04:33.157 ************************************ 00:04:33.157 START TEST alias_rpc 00:04:33.157 ************************************ 00:04:33.157 09:04:37 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:33.157 * Looking for test storage... 00:04:33.157 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:33.157 09:04:37 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:33.157 09:04:37 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:33.157 09:04:37 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:33.157 09:04:38 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:33.157 09:04:38 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:33.157 09:04:38 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:33.157 09:04:38 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:33.157 09:04:38 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:33.157 09:04:38 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:33.157 09:04:38 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:33.157 09:04:38 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:33.157 09:04:38 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:33.157 09:04:38 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:33.157 09:04:38 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:33.157 09:04:38 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:33.157 09:04:38 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:33.157 09:04:38 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:33.157 09:04:38 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:33.157 09:04:38 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:33.157 09:04:38 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:33.157 09:04:38 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:33.157 09:04:38 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:33.157 09:04:38 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:33.157 09:04:38 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:33.157 09:04:38 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:33.157 09:04:38 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:33.157 09:04:38 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:33.157 09:04:38 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:33.157 09:04:38 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:33.157 09:04:38 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:33.157 09:04:38 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:33.157 09:04:38 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:33.157 09:04:38 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:33.157 09:04:38 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:33.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.157 --rc genhtml_branch_coverage=1 00:04:33.157 --rc genhtml_function_coverage=1 00:04:33.157 --rc genhtml_legend=1 00:04:33.157 --rc geninfo_all_blocks=1 00:04:33.157 --rc geninfo_unexecuted_blocks=1 00:04:33.157 00:04:33.157 ' 00:04:33.157 09:04:38 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:33.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.157 --rc genhtml_branch_coverage=1 00:04:33.157 --rc genhtml_function_coverage=1 00:04:33.157 --rc genhtml_legend=1 00:04:33.157 --rc geninfo_all_blocks=1 00:04:33.157 --rc geninfo_unexecuted_blocks=1 00:04:33.157 00:04:33.157 ' 00:04:33.157 09:04:38 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:33.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.157 --rc genhtml_branch_coverage=1 00:04:33.157 --rc genhtml_function_coverage=1 00:04:33.157 --rc genhtml_legend=1 00:04:33.157 --rc geninfo_all_blocks=1 00:04:33.157 --rc geninfo_unexecuted_blocks=1 00:04:33.157 00:04:33.157 ' 00:04:33.157 09:04:38 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:33.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.157 --rc genhtml_branch_coverage=1 00:04:33.157 --rc genhtml_function_coverage=1 00:04:33.157 --rc genhtml_legend=1 00:04:33.157 --rc geninfo_all_blocks=1 00:04:33.157 --rc geninfo_unexecuted_blocks=1 00:04:33.157 00:04:33.157 ' 00:04:33.157 09:04:38 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:33.157 09:04:38 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2822786 00:04:33.157 09:04:38 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:33.157 09:04:38 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2822786 00:04:33.157 09:04:38 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 2822786 ']' 00:04:33.157 09:04:38 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:33.157 09:04:38 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:33.157 09:04:38 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:04:33.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:33.157 09:04:38 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:33.157 09:04:38 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:33.415 [2024-11-17 09:04:38.182130] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:04:33.415 [2024-11-17 09:04:38.182275] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2822786 ] 00:04:33.415 [2024-11-17 09:04:38.317120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:33.709 [2024-11-17 09:04:38.449094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.668 09:04:39 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:34.668 09:04:39 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:34.668 09:04:39 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:34.926 09:04:39 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2822786 00:04:34.926 09:04:39 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 2822786 ']' 00:04:34.926 09:04:39 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 2822786 00:04:34.926 09:04:39 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:34.926 09:04:39 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:34.926 09:04:39 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2822786 00:04:34.926 09:04:39 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:34.926 09:04:39 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:34.926 09:04:39 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2822786' 00:04:34.926 killing process with pid 2822786 00:04:34.926 09:04:39 alias_rpc -- common/autotest_common.sh@973 -- # kill 2822786 00:04:34.926 09:04:39 alias_rpc -- common/autotest_common.sh@978 -- # wait 2822786 00:04:37.456 00:04:37.456 real 0m4.231s 00:04:37.456 user 0m4.410s 00:04:37.456 sys 0m0.690s 00:04:37.456 09:04:42 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:37.456 09:04:42 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.456 ************************************ 00:04:37.456 END TEST alias_rpc 00:04:37.456 ************************************ 00:04:37.456 09:04:42 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:37.457 09:04:42 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:37.457 09:04:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:37.457 09:04:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:37.457 09:04:42 -- common/autotest_common.sh@10 -- # set +x 00:04:37.457 ************************************ 00:04:37.457 START TEST spdkcli_tcp 00:04:37.457 ************************************ 00:04:37.457 09:04:42 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:37.457 * Looking for test storage... 
00:04:37.457 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:37.457 09:04:42 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:37.457 09:04:42 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:04:37.457 09:04:42 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:37.457 09:04:42 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:37.457 09:04:42 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:37.457 09:04:42 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:37.457 09:04:42 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:37.457 09:04:42 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:37.457 09:04:42 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:37.457 09:04:42 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:37.457 09:04:42 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:37.457 09:04:42 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:37.457 09:04:42 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:37.457 09:04:42 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:37.457 09:04:42 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:37.457 09:04:42 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:37.457 09:04:42 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:37.457 09:04:42 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:37.457 09:04:42 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:37.457 09:04:42 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:37.457 09:04:42 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:37.457 09:04:42 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:37.457 09:04:42 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:37.457 09:04:42 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:37.457 09:04:42 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:37.457 09:04:42 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:37.457 09:04:42 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:37.457 09:04:42 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:37.457 09:04:42 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:37.457 09:04:42 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:37.457 09:04:42 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:37.457 09:04:42 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:37.457 09:04:42 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:37.457 09:04:42 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:37.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.457 --rc genhtml_branch_coverage=1 00:04:37.457 --rc genhtml_function_coverage=1 00:04:37.457 --rc genhtml_legend=1 00:04:37.457 --rc geninfo_all_blocks=1 00:04:37.457 --rc geninfo_unexecuted_blocks=1 00:04:37.457 00:04:37.457 ' 00:04:37.457 09:04:42 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:37.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.457 --rc genhtml_branch_coverage=1 00:04:37.457 --rc genhtml_function_coverage=1 00:04:37.457 --rc genhtml_legend=1 00:04:37.457 --rc geninfo_all_blocks=1 00:04:37.457 --rc 
geninfo_unexecuted_blocks=1 00:04:37.457 00:04:37.457 ' 00:04:37.457 09:04:42 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:37.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.457 --rc genhtml_branch_coverage=1 00:04:37.457 --rc genhtml_function_coverage=1 00:04:37.457 --rc genhtml_legend=1 00:04:37.457 --rc geninfo_all_blocks=1 00:04:37.457 --rc geninfo_unexecuted_blocks=1 00:04:37.457 00:04:37.457 ' 00:04:37.457 09:04:42 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:37.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.457 --rc genhtml_branch_coverage=1 00:04:37.457 --rc genhtml_function_coverage=1 00:04:37.457 --rc genhtml_legend=1 00:04:37.457 --rc geninfo_all_blocks=1 00:04:37.457 --rc geninfo_unexecuted_blocks=1 00:04:37.457 00:04:37.457 ' 00:04:37.457 09:04:42 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:37.457 09:04:42 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:37.457 09:04:42 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:37.457 09:04:42 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:37.457 09:04:42 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:37.457 09:04:42 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:37.457 09:04:42 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:37.457 09:04:42 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:37.457 09:04:42 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:37.457 09:04:42 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2823383 00:04:37.457 09:04:42 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:37.457 09:04:42 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2823383 00:04:37.457 09:04:42 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 2823383 ']' 00:04:37.457 09:04:42 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:37.457 09:04:42 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:37.457 09:04:42 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:37.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:37.457 09:04:42 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:37.457 09:04:42 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:37.457 [2024-11-17 09:04:42.460014] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:04:37.457 [2024-11-17 09:04:42.460179] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2823383 ] 00:04:37.716 [2024-11-17 09:04:42.601898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:37.974 [2024-11-17 09:04:42.742781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.974 [2024-11-17 09:04:42.742784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:38.909 09:04:43 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:38.909 09:04:43 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:38.909 09:04:43 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2823520 00:04:38.909 09:04:43 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:38.909 09:04:43 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:39.168 [ 00:04:39.168 "bdev_malloc_delete", 00:04:39.168 "bdev_malloc_create", 00:04:39.168 "bdev_null_resize", 00:04:39.168 "bdev_null_delete", 00:04:39.168 "bdev_null_create", 00:04:39.168 "bdev_nvme_cuse_unregister", 00:04:39.168 "bdev_nvme_cuse_register", 00:04:39.168 "bdev_opal_new_user", 00:04:39.168 "bdev_opal_set_lock_state", 00:04:39.168 "bdev_opal_delete", 00:04:39.168 "bdev_opal_get_info", 00:04:39.168 "bdev_opal_create", 00:04:39.168 "bdev_nvme_opal_revert", 00:04:39.168 "bdev_nvme_opal_init", 00:04:39.168 "bdev_nvme_send_cmd", 00:04:39.168 "bdev_nvme_set_keys", 00:04:39.168 "bdev_nvme_get_path_iostat", 00:04:39.168 "bdev_nvme_get_mdns_discovery_info", 00:04:39.168 "bdev_nvme_stop_mdns_discovery", 00:04:39.168 "bdev_nvme_start_mdns_discovery", 00:04:39.168 "bdev_nvme_set_multipath_policy", 00:04:39.168 "bdev_nvme_set_preferred_path", 00:04:39.168 "bdev_nvme_get_io_paths", 00:04:39.168 "bdev_nvme_remove_error_injection", 00:04:39.168 "bdev_nvme_add_error_injection", 00:04:39.168 "bdev_nvme_get_discovery_info", 00:04:39.168 "bdev_nvme_stop_discovery", 00:04:39.168 "bdev_nvme_start_discovery", 00:04:39.168 "bdev_nvme_get_controller_health_info", 00:04:39.168 "bdev_nvme_disable_controller", 00:04:39.168 "bdev_nvme_enable_controller", 00:04:39.168 "bdev_nvme_reset_controller", 00:04:39.168 "bdev_nvme_get_transport_statistics", 00:04:39.168 "bdev_nvme_apply_firmware", 00:04:39.168 "bdev_nvme_detach_controller", 00:04:39.168 "bdev_nvme_get_controllers", 00:04:39.168 "bdev_nvme_attach_controller", 00:04:39.168 "bdev_nvme_set_hotplug", 00:04:39.168 "bdev_nvme_set_options", 00:04:39.168 "bdev_passthru_delete", 00:04:39.168 "bdev_passthru_create", 00:04:39.168 "bdev_lvol_set_parent_bdev", 00:04:39.168 "bdev_lvol_set_parent", 00:04:39.168 "bdev_lvol_check_shallow_copy", 00:04:39.168 "bdev_lvol_start_shallow_copy", 00:04:39.168 "bdev_lvol_grow_lvstore", 00:04:39.168 "bdev_lvol_get_lvols", 00:04:39.168 "bdev_lvol_get_lvstores", 00:04:39.168 "bdev_lvol_delete", 00:04:39.168 "bdev_lvol_set_read_only", 00:04:39.168 "bdev_lvol_resize", 00:04:39.168 "bdev_lvol_decouple_parent", 00:04:39.168 "bdev_lvol_inflate", 00:04:39.168 "bdev_lvol_rename", 00:04:39.168 "bdev_lvol_clone_bdev", 00:04:39.168 "bdev_lvol_clone", 00:04:39.168 "bdev_lvol_snapshot", 00:04:39.168 "bdev_lvol_create", 00:04:39.168 "bdev_lvol_delete_lvstore", 00:04:39.168 "bdev_lvol_rename_lvstore", 
00:04:39.168 "bdev_lvol_create_lvstore", 00:04:39.168 "bdev_raid_set_options", 00:04:39.168 "bdev_raid_remove_base_bdev", 00:04:39.168 "bdev_raid_add_base_bdev", 00:04:39.168 "bdev_raid_delete", 00:04:39.168 "bdev_raid_create", 00:04:39.168 "bdev_raid_get_bdevs", 00:04:39.168 "bdev_error_inject_error", 00:04:39.168 "bdev_error_delete", 00:04:39.168 "bdev_error_create", 00:04:39.168 "bdev_split_delete", 00:04:39.168 "bdev_split_create", 00:04:39.168 "bdev_delay_delete", 00:04:39.168 "bdev_delay_create", 00:04:39.168 "bdev_delay_update_latency", 00:04:39.168 "bdev_zone_block_delete", 00:04:39.168 "bdev_zone_block_create", 00:04:39.168 "blobfs_create", 00:04:39.168 "blobfs_detect", 00:04:39.168 "blobfs_set_cache_size", 00:04:39.168 "bdev_aio_delete", 00:04:39.168 "bdev_aio_rescan", 00:04:39.168 "bdev_aio_create", 00:04:39.168 "bdev_ftl_set_property", 00:04:39.168 "bdev_ftl_get_properties", 00:04:39.168 "bdev_ftl_get_stats", 00:04:39.168 "bdev_ftl_unmap", 00:04:39.168 "bdev_ftl_unload", 00:04:39.168 "bdev_ftl_delete", 00:04:39.168 "bdev_ftl_load", 00:04:39.168 "bdev_ftl_create", 00:04:39.168 "bdev_virtio_attach_controller", 00:04:39.168 "bdev_virtio_scsi_get_devices", 00:04:39.168 "bdev_virtio_detach_controller", 00:04:39.168 "bdev_virtio_blk_set_hotplug", 00:04:39.168 "bdev_iscsi_delete", 00:04:39.168 "bdev_iscsi_create", 00:04:39.168 "bdev_iscsi_set_options", 00:04:39.168 "accel_error_inject_error", 00:04:39.168 "ioat_scan_accel_module", 00:04:39.168 "dsa_scan_accel_module", 00:04:39.168 "iaa_scan_accel_module", 00:04:39.168 "keyring_file_remove_key", 00:04:39.168 "keyring_file_add_key", 00:04:39.168 "keyring_linux_set_options", 00:04:39.168 "fsdev_aio_delete", 00:04:39.168 "fsdev_aio_create", 00:04:39.168 "iscsi_get_histogram", 00:04:39.168 "iscsi_enable_histogram", 00:04:39.168 "iscsi_set_options", 00:04:39.168 "iscsi_get_auth_groups", 00:04:39.168 "iscsi_auth_group_remove_secret", 00:04:39.168 "iscsi_auth_group_add_secret", 00:04:39.168 "iscsi_delete_auth_group", 00:04:39.168 "iscsi_create_auth_group", 00:04:39.168 "iscsi_set_discovery_auth", 00:04:39.168 "iscsi_get_options", 00:04:39.168 "iscsi_target_node_request_logout", 00:04:39.168 "iscsi_target_node_set_redirect", 00:04:39.168 "iscsi_target_node_set_auth", 00:04:39.168 "iscsi_target_node_add_lun", 00:04:39.168 "iscsi_get_stats", 00:04:39.168 "iscsi_get_connections", 00:04:39.168 "iscsi_portal_group_set_auth", 00:04:39.168 "iscsi_start_portal_group", 00:04:39.168 "iscsi_delete_portal_group", 00:04:39.168 "iscsi_create_portal_group", 00:04:39.168 "iscsi_get_portal_groups", 00:04:39.168 "iscsi_delete_target_node", 00:04:39.168 "iscsi_target_node_remove_pg_ig_maps", 00:04:39.168 "iscsi_target_node_add_pg_ig_maps", 00:04:39.168 "iscsi_create_target_node", 00:04:39.168 "iscsi_get_target_nodes", 00:04:39.168 "iscsi_delete_initiator_group", 00:04:39.168 "iscsi_initiator_group_remove_initiators", 00:04:39.168 "iscsi_initiator_group_add_initiators", 00:04:39.168 "iscsi_create_initiator_group", 00:04:39.168 "iscsi_get_initiator_groups", 00:04:39.168 "nvmf_set_crdt", 00:04:39.168 "nvmf_set_config", 00:04:39.168 "nvmf_set_max_subsystems", 00:04:39.168 "nvmf_stop_mdns_prr", 00:04:39.168 "nvmf_publish_mdns_prr", 00:04:39.168 "nvmf_subsystem_get_listeners", 00:04:39.168 "nvmf_subsystem_get_qpairs", 00:04:39.168 "nvmf_subsystem_get_controllers", 00:04:39.168 "nvmf_get_stats", 00:04:39.168 "nvmf_get_transports", 00:04:39.168 "nvmf_create_transport", 00:04:39.168 "nvmf_get_targets", 00:04:39.168 "nvmf_delete_target", 00:04:39.168 "nvmf_create_target", 
00:04:39.168 "nvmf_subsystem_allow_any_host", 00:04:39.168 "nvmf_subsystem_set_keys", 00:04:39.168 "nvmf_subsystem_remove_host", 00:04:39.168 "nvmf_subsystem_add_host", 00:04:39.168 "nvmf_ns_remove_host", 00:04:39.168 "nvmf_ns_add_host", 00:04:39.168 "nvmf_subsystem_remove_ns", 00:04:39.168 "nvmf_subsystem_set_ns_ana_group", 00:04:39.168 "nvmf_subsystem_add_ns", 00:04:39.168 "nvmf_subsystem_listener_set_ana_state", 00:04:39.168 "nvmf_discovery_get_referrals", 00:04:39.169 "nvmf_discovery_remove_referral", 00:04:39.169 "nvmf_discovery_add_referral", 00:04:39.169 "nvmf_subsystem_remove_listener", 00:04:39.169 "nvmf_subsystem_add_listener", 00:04:39.169 "nvmf_delete_subsystem", 00:04:39.169 "nvmf_create_subsystem", 00:04:39.169 "nvmf_get_subsystems", 00:04:39.169 "env_dpdk_get_mem_stats", 00:04:39.169 "nbd_get_disks", 00:04:39.169 "nbd_stop_disk", 00:04:39.169 "nbd_start_disk", 00:04:39.169 "ublk_recover_disk", 00:04:39.169 "ublk_get_disks", 00:04:39.169 "ublk_stop_disk", 00:04:39.169 "ublk_start_disk", 00:04:39.169 "ublk_destroy_target", 00:04:39.169 "ublk_create_target", 00:04:39.169 "virtio_blk_create_transport", 00:04:39.169 "virtio_blk_get_transports", 00:04:39.169 "vhost_controller_set_coalescing", 00:04:39.169 "vhost_get_controllers", 00:04:39.169 "vhost_delete_controller", 00:04:39.169 "vhost_create_blk_controller", 00:04:39.169 "vhost_scsi_controller_remove_target", 00:04:39.169 "vhost_scsi_controller_add_target", 00:04:39.169 "vhost_start_scsi_controller", 00:04:39.169 "vhost_create_scsi_controller", 00:04:39.169 "thread_set_cpumask", 00:04:39.169 "scheduler_set_options", 00:04:39.169 "framework_get_governor", 00:04:39.169 "framework_get_scheduler", 00:04:39.169 "framework_set_scheduler", 00:04:39.169 "framework_get_reactors", 00:04:39.169 "thread_get_io_channels", 00:04:39.169 "thread_get_pollers", 00:04:39.169 "thread_get_stats", 00:04:39.169 "framework_monitor_context_switch", 00:04:39.169 "spdk_kill_instance", 00:04:39.169 "log_enable_timestamps", 00:04:39.169 "log_get_flags", 00:04:39.169 "log_clear_flag", 00:04:39.169 "log_set_flag", 00:04:39.169 "log_get_level", 00:04:39.169 "log_set_level", 00:04:39.169 "log_get_print_level", 00:04:39.169 "log_set_print_level", 00:04:39.169 "framework_enable_cpumask_locks", 00:04:39.169 "framework_disable_cpumask_locks", 00:04:39.169 "framework_wait_init", 00:04:39.169 "framework_start_init", 00:04:39.169 "scsi_get_devices", 00:04:39.169 "bdev_get_histogram", 00:04:39.169 "bdev_enable_histogram", 00:04:39.169 "bdev_set_qos_limit", 00:04:39.169 "bdev_set_qd_sampling_period", 00:04:39.169 "bdev_get_bdevs", 00:04:39.169 "bdev_reset_iostat", 00:04:39.169 "bdev_get_iostat", 00:04:39.169 "bdev_examine", 00:04:39.169 "bdev_wait_for_examine", 00:04:39.169 "bdev_set_options", 00:04:39.169 "accel_get_stats", 00:04:39.169 "accel_set_options", 00:04:39.169 "accel_set_driver", 00:04:39.169 "accel_crypto_key_destroy", 00:04:39.169 "accel_crypto_keys_get", 00:04:39.169 "accel_crypto_key_create", 00:04:39.169 "accel_assign_opc", 00:04:39.169 "accel_get_module_info", 00:04:39.169 "accel_get_opc_assignments", 00:04:39.169 "vmd_rescan", 00:04:39.169 "vmd_remove_device", 00:04:39.169 "vmd_enable", 00:04:39.169 "sock_get_default_impl", 00:04:39.169 "sock_set_default_impl", 00:04:39.169 "sock_impl_set_options", 00:04:39.169 "sock_impl_get_options", 00:04:39.169 "iobuf_get_stats", 00:04:39.169 "iobuf_set_options", 00:04:39.169 "keyring_get_keys", 00:04:39.169 "framework_get_pci_devices", 00:04:39.169 "framework_get_config", 00:04:39.169 "framework_get_subsystems", 
00:04:39.169 "fsdev_set_opts", 00:04:39.169 "fsdev_get_opts", 00:04:39.169 "trace_get_info", 00:04:39.169 "trace_get_tpoint_group_mask", 00:04:39.169 "trace_disable_tpoint_group", 00:04:39.169 "trace_enable_tpoint_group", 00:04:39.169 "trace_clear_tpoint_mask", 00:04:39.169 "trace_set_tpoint_mask", 00:04:39.169 "notify_get_notifications", 00:04:39.169 "notify_get_types", 00:04:39.169 "spdk_get_version", 00:04:39.169 "rpc_get_methods" 00:04:39.169 ] 00:04:39.169 09:04:43 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:39.169 09:04:43 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:39.169 09:04:43 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:39.169 09:04:43 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:39.169 09:04:43 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2823383 00:04:39.169 09:04:43 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 2823383 ']' 00:04:39.169 09:04:43 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 2823383 00:04:39.169 09:04:43 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:39.169 09:04:43 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:39.169 09:04:43 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2823383 00:04:39.169 09:04:44 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:39.169 09:04:44 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:39.169 09:04:44 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2823383' 00:04:39.169 killing process with pid 2823383 00:04:39.169 09:04:44 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 2823383 00:04:39.169 09:04:44 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 2823383 00:04:41.697 00:04:41.697 real 0m4.200s 00:04:41.697 user 0m7.691s 00:04:41.697 sys 0m0.685s 00:04:41.697 09:04:46 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:41.697 09:04:46 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:41.697 ************************************ 00:04:41.697 END TEST spdkcli_tcp 00:04:41.697 ************************************ 00:04:41.697 09:04:46 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:41.697 09:04:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:41.697 09:04:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:41.697 09:04:46 -- common/autotest_common.sh@10 -- # set +x 00:04:41.697 ************************************ 00:04:41.697 START TEST dpdk_mem_utility 00:04:41.698 ************************************ 00:04:41.698 09:04:46 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:41.698 * Looking for test storage... 
00:04:41.698 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:41.698 09:04:46 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:41.698 09:04:46 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:04:41.698 09:04:46 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:41.698 09:04:46 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:41.698 09:04:46 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:41.698 09:04:46 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:41.698 09:04:46 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:41.698 09:04:46 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:41.698 09:04:46 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:41.698 09:04:46 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:41.698 09:04:46 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:41.698 09:04:46 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:41.698 09:04:46 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:41.698 09:04:46 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:41.698 09:04:46 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:41.698 09:04:46 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:41.698 09:04:46 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:41.698 09:04:46 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:41.698 09:04:46 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:41.698 09:04:46 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:41.698 09:04:46 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:41.698 09:04:46 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:41.698 09:04:46 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:41.698 09:04:46 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:41.698 09:04:46 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:41.698 09:04:46 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:41.698 09:04:46 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:41.698 09:04:46 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:41.698 09:04:46 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:41.698 09:04:46 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:41.698 09:04:46 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:41.698 09:04:46 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:41.698 09:04:46 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:41.698 09:04:46 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:41.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.698 --rc genhtml_branch_coverage=1 00:04:41.698 --rc genhtml_function_coverage=1 00:04:41.698 --rc genhtml_legend=1 00:04:41.698 --rc geninfo_all_blocks=1 00:04:41.698 --rc geninfo_unexecuted_blocks=1 00:04:41.698 00:04:41.698 ' 00:04:41.698 09:04:46 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:41.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.698 --rc 
genhtml_branch_coverage=1 00:04:41.698 --rc genhtml_function_coverage=1 00:04:41.698 --rc genhtml_legend=1 00:04:41.698 --rc geninfo_all_blocks=1 00:04:41.698 --rc geninfo_unexecuted_blocks=1 00:04:41.698 00:04:41.698 ' 00:04:41.698 09:04:46 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:41.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.698 --rc genhtml_branch_coverage=1 00:04:41.698 --rc genhtml_function_coverage=1 00:04:41.698 --rc genhtml_legend=1 00:04:41.698 --rc geninfo_all_blocks=1 00:04:41.698 --rc geninfo_unexecuted_blocks=1 00:04:41.698 00:04:41.698 ' 00:04:41.698 09:04:46 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:41.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.698 --rc genhtml_branch_coverage=1 00:04:41.698 --rc genhtml_function_coverage=1 00:04:41.698 --rc genhtml_legend=1 00:04:41.698 --rc geninfo_all_blocks=1 00:04:41.698 --rc geninfo_unexecuted_blocks=1 00:04:41.698 00:04:41.698 ' 00:04:41.698 09:04:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:41.698 09:04:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2823984 00:04:41.698 09:04:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:41.698 09:04:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2823984 00:04:41.698 09:04:46 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 2823984 ']' 00:04:41.698 09:04:46 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:41.698 09:04:46 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:41.698 09:04:46 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:41.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:41.698 09:04:46 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:41.698 09:04:46 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:41.956 [2024-11-17 09:04:46.717287] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:04:41.956 [2024-11-17 09:04:46.717468] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2823984 ] 00:04:41.956 [2024-11-17 09:04:46.862201] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.214 [2024-11-17 09:04:47.001701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.147 09:04:47 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:43.147 09:04:47 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:43.147 09:04:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:43.147 09:04:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:43.147 09:04:47 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.147 09:04:47 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:43.147 { 00:04:43.147 "filename": "/tmp/spdk_mem_dump.txt" 00:04:43.147 } 00:04:43.147 09:04:47 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.147 09:04:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:43.147 DPDK memory size 816.000000 MiB in 1 heap(s) 00:04:43.147 1 heaps totaling size 816.000000 MiB 00:04:43.147 size: 816.000000 MiB heap id: 0 00:04:43.147 end heaps---------- 00:04:43.147 9 mempools totaling size 595.772034 MiB 00:04:43.147 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:43.147 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:43.147 size: 92.545471 MiB name: bdev_io_2823984 00:04:43.147 size: 50.003479 MiB name: msgpool_2823984 00:04:43.147 size: 36.509338 MiB name: fsdev_io_2823984 00:04:43.147 size: 21.763794 MiB name: PDU_Pool 00:04:43.147 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:43.147 size: 4.133484 MiB name: evtpool_2823984 00:04:43.147 size: 0.026123 MiB name: Session_Pool 00:04:43.147 end mempools------- 00:04:43.147 6 memzones totaling size 4.142822 MiB 00:04:43.147 size: 1.000366 MiB name: RG_ring_0_2823984 00:04:43.147 size: 1.000366 MiB name: RG_ring_1_2823984 00:04:43.147 size: 1.000366 MiB name: RG_ring_4_2823984 00:04:43.147 size: 1.000366 MiB name: RG_ring_5_2823984 00:04:43.147 size: 0.125366 MiB name: RG_ring_2_2823984 00:04:43.147 size: 0.015991 MiB name: RG_ring_3_2823984 00:04:43.147 end memzones------- 00:04:43.147 09:04:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:43.147 heap id: 0 total size: 816.000000 MiB number of busy elements: 44 number of free elements: 19 00:04:43.147 list of free elements. 
size: 16.857605 MiB 00:04:43.147 element at address: 0x200006400000 with size: 1.995972 MiB 00:04:43.147 element at address: 0x20000a600000 with size: 1.995972 MiB 00:04:43.147 element at address: 0x200003e00000 with size: 1.991028 MiB 00:04:43.147 element at address: 0x200018d00040 with size: 0.999939 MiB 00:04:43.147 element at address: 0x200019100040 with size: 0.999939 MiB 00:04:43.147 element at address: 0x200019200000 with size: 0.999329 MiB 00:04:43.147 element at address: 0x200000400000 with size: 0.998108 MiB 00:04:43.147 element at address: 0x200031e00000 with size: 0.994324 MiB 00:04:43.147 element at address: 0x200018a00000 with size: 0.959900 MiB 00:04:43.147 element at address: 0x200019500040 with size: 0.937256 MiB 00:04:43.147 element at address: 0x200000200000 with size: 0.716980 MiB 00:04:43.147 element at address: 0x20001ac00000 with size: 0.583191 MiB 00:04:43.147 element at address: 0x200000c00000 with size: 0.495300 MiB 00:04:43.147 element at address: 0x200018e00000 with size: 0.491150 MiB 00:04:43.147 element at address: 0x200019600000 with size: 0.485657 MiB 00:04:43.147 element at address: 0x200012c00000 with size: 0.446167 MiB 00:04:43.147 element at address: 0x200028000000 with size: 0.411072 MiB 00:04:43.147 element at address: 0x200000800000 with size: 0.355286 MiB 00:04:43.147 element at address: 0x20000a5ff040 with size: 0.001038 MiB 00:04:43.147 list of standard malloc elements. size: 199.221497 MiB 00:04:43.147 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:04:43.147 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:04:43.147 element at address: 0x200018bfff80 with size: 1.000183 MiB 00:04:43.147 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:04:43.147 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:04:43.147 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:04:43.147 element at address: 0x2000195eff40 with size: 0.062683 MiB 00:04:43.147 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:04:43.147 element at address: 0x200012bff040 with size: 0.000427 MiB 00:04:43.147 element at address: 0x200012bffa00 with size: 0.000366 MiB 00:04:43.147 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:04:43.147 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:04:43.147 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:04:43.147 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:04:43.147 element at address: 0x2000004ffa40 with size: 0.000244 MiB 00:04:43.147 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:04:43.147 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:04:43.147 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:04:43.147 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:04:43.147 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:04:43.147 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:04:43.147 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:04:43.147 element at address: 0x200000cff000 with size: 0.000244 MiB 00:04:43.147 element at address: 0x20000a5ff480 with size: 0.000244 MiB 00:04:43.147 element at address: 0x20000a5ff580 with size: 0.000244 MiB 00:04:43.147 element at address: 0x20000a5ff680 with size: 0.000244 MiB 00:04:43.147 element at address: 0x20000a5ff780 with size: 0.000244 MiB 00:04:43.147 element at address: 0x20000a5ff880 with size: 0.000244 MiB 00:04:43.147 element at address: 0x20000a5ff980 with size: 0.000244 MiB 
00:04:43.147 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:04:43.147 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:04:43.147 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:04:43.147 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:04:43.147 element at address: 0x200012bff200 with size: 0.000244 MiB 00:04:43.147 element at address: 0x200012bff300 with size: 0.000244 MiB 00:04:43.147 element at address: 0x200012bff400 with size: 0.000244 MiB 00:04:43.147 element at address: 0x200012bff500 with size: 0.000244 MiB 00:04:43.147 element at address: 0x200012bff600 with size: 0.000244 MiB 00:04:43.148 element at address: 0x200012bff700 with size: 0.000244 MiB 00:04:43.148 element at address: 0x200012bff800 with size: 0.000244 MiB 00:04:43.148 element at address: 0x200012bff900 with size: 0.000244 MiB 00:04:43.148 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:04:43.148 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:04:43.148 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:04:43.148 list of memzone associated elements. size: 599.920898 MiB 00:04:43.148 element at address: 0x20001ac954c0 with size: 211.416809 MiB 00:04:43.148 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:43.148 element at address: 0x20002806ff80 with size: 157.562622 MiB 00:04:43.148 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:43.148 element at address: 0x200012df4740 with size: 92.045105 MiB 00:04:43.148 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_2823984_0 00:04:43.148 element at address: 0x200000dff340 with size: 48.003113 MiB 00:04:43.148 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2823984_0 00:04:43.148 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:04:43.148 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_2823984_0 00:04:43.148 element at address: 0x2000197be900 with size: 20.255615 MiB 00:04:43.148 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:43.148 element at address: 0x200031ffeb00 with size: 18.005127 MiB 00:04:43.148 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:43.148 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:04:43.148 associated memzone info: size: 3.000122 MiB name: MP_evtpool_2823984_0 00:04:43.148 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:04:43.148 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2823984 00:04:43.148 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:04:43.148 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2823984 00:04:43.148 element at address: 0x200018efde00 with size: 1.008179 MiB 00:04:43.148 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:43.148 element at address: 0x2000196bc780 with size: 1.008179 MiB 00:04:43.148 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:43.148 element at address: 0x200018afde00 with size: 1.008179 MiB 00:04:43.148 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:43.148 element at address: 0x200012cf25c0 with size: 1.008179 MiB 00:04:43.148 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:43.148 element at address: 0x200000cff100 with size: 1.000549 MiB 00:04:43.148 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2823984 00:04:43.148 element at address: 0x2000008ffb80 with 
size: 1.000549 MiB 00:04:43.148 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2823984 00:04:43.148 element at address: 0x2000192ffd40 with size: 1.000549 MiB 00:04:43.148 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2823984 00:04:43.148 element at address: 0x200031efe8c0 with size: 1.000549 MiB 00:04:43.148 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2823984 00:04:43.148 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:04:43.148 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_2823984 00:04:43.148 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:04:43.148 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2823984 00:04:43.148 element at address: 0x200018e7dbc0 with size: 0.500549 MiB 00:04:43.148 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:43.148 element at address: 0x200012c72380 with size: 0.500549 MiB 00:04:43.148 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:43.148 element at address: 0x20001967c540 with size: 0.250549 MiB 00:04:43.148 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:43.148 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:04:43.148 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_2823984 00:04:43.148 element at address: 0x20000085f180 with size: 0.125549 MiB 00:04:43.148 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2823984 00:04:43.148 element at address: 0x200018af5bc0 with size: 0.031799 MiB 00:04:43.148 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:43.148 element at address: 0x2000280693c0 with size: 0.023804 MiB 00:04:43.148 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:43.148 element at address: 0x20000085af40 with size: 0.016174 MiB 00:04:43.148 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2823984 00:04:43.148 element at address: 0x20002806f540 with size: 0.002502 MiB 00:04:43.148 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:43.148 element at address: 0x2000004ffb40 with size: 0.000366 MiB 00:04:43.148 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2823984 00:04:43.148 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:04:43.148 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_2823984 00:04:43.148 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:04:43.148 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2823984 00:04:43.148 element at address: 0x20000a5ffa80 with size: 0.000366 MiB 00:04:43.148 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:43.148 09:04:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:43.148 09:04:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2823984 00:04:43.148 09:04:48 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 2823984 ']' 00:04:43.148 09:04:48 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 2823984 00:04:43.148 09:04:48 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:04:43.148 09:04:48 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:43.148 09:04:48 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2823984 00:04:43.148 09:04:48 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 
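Before the target is torn down in the trace that follows, the dpdk_mem_utility steps above reduce to three commands. A minimal stand-alone sketch, assuming the same tree layout and the default /var/tmp/spdk.sock RPC socket this run uses (the summary tool is run with no arguments, reading the dump the RPC just wrote):

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # 1) ask the running spdk_tgt to write its DPDK memory stats (the RPC reports /tmp/spdk_mem_dump.txt)
  "$SPDK_DIR/scripts/rpc.py" env_dpdk_get_mem_stats
  # 2) summarize heaps, mempools and memzones from the dump
  "$SPDK_DIR/scripts/dpdk_mem_info.py"
  # 3) list the individual free / malloc / memzone elements for heap 0
  "$SPDK_DIR/scripts/dpdk_mem_info.py" -m 0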
00:04:43.148 09:04:48 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:43.148 09:04:48 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2823984' 00:04:43.148 killing process with pid 2823984 00:04:43.148 09:04:48 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 2823984 00:04:43.148 09:04:48 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 2823984 00:04:45.677 00:04:45.677 real 0m4.068s 00:04:45.677 user 0m4.062s 00:04:45.677 sys 0m0.654s 00:04:45.677 09:04:50 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:45.677 09:04:50 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:45.677 ************************************ 00:04:45.677 END TEST dpdk_mem_utility 00:04:45.677 ************************************ 00:04:45.677 09:04:50 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:45.677 09:04:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:45.677 09:04:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:45.677 09:04:50 -- common/autotest_common.sh@10 -- # set +x 00:04:45.677 ************************************ 00:04:45.677 START TEST event 00:04:45.677 ************************************ 00:04:45.677 09:04:50 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:45.677 * Looking for test storage... 00:04:45.677 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:45.677 09:04:50 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:45.677 09:04:50 event -- common/autotest_common.sh@1693 -- # lcov --version 00:04:45.677 09:04:50 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:45.935 09:04:50 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:45.935 09:04:50 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:45.935 09:04:50 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:45.935 09:04:50 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:45.935 09:04:50 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:45.935 09:04:50 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:45.935 09:04:50 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:45.935 09:04:50 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:45.935 09:04:50 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:45.935 09:04:50 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:45.935 09:04:50 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:45.935 09:04:50 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:45.935 09:04:50 event -- scripts/common.sh@344 -- # case "$op" in 00:04:45.935 09:04:50 event -- scripts/common.sh@345 -- # : 1 00:04:45.935 09:04:50 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:45.935 09:04:50 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:45.935 09:04:50 event -- scripts/common.sh@365 -- # decimal 1 00:04:45.935 09:04:50 event -- scripts/common.sh@353 -- # local d=1 00:04:45.935 09:04:50 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:45.935 09:04:50 event -- scripts/common.sh@355 -- # echo 1 00:04:45.935 09:04:50 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:45.935 09:04:50 event -- scripts/common.sh@366 -- # decimal 2 00:04:45.935 09:04:50 event -- scripts/common.sh@353 -- # local d=2 00:04:45.935 09:04:50 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:45.935 09:04:50 event -- scripts/common.sh@355 -- # echo 2 00:04:45.935 09:04:50 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:45.935 09:04:50 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:45.935 09:04:50 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:45.935 09:04:50 event -- scripts/common.sh@368 -- # return 0 00:04:45.935 09:04:50 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:45.935 09:04:50 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:45.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.935 --rc genhtml_branch_coverage=1 00:04:45.935 --rc genhtml_function_coverage=1 00:04:45.935 --rc genhtml_legend=1 00:04:45.935 --rc geninfo_all_blocks=1 00:04:45.935 --rc geninfo_unexecuted_blocks=1 00:04:45.936 00:04:45.936 ' 00:04:45.936 09:04:50 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:45.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.936 --rc genhtml_branch_coverage=1 00:04:45.936 --rc genhtml_function_coverage=1 00:04:45.936 --rc genhtml_legend=1 00:04:45.936 --rc geninfo_all_blocks=1 00:04:45.936 --rc geninfo_unexecuted_blocks=1 00:04:45.936 00:04:45.936 ' 00:04:45.936 09:04:50 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:45.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.936 --rc genhtml_branch_coverage=1 00:04:45.936 --rc genhtml_function_coverage=1 00:04:45.936 --rc genhtml_legend=1 00:04:45.936 --rc geninfo_all_blocks=1 00:04:45.936 --rc geninfo_unexecuted_blocks=1 00:04:45.936 00:04:45.936 ' 00:04:45.936 09:04:50 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:45.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.936 --rc genhtml_branch_coverage=1 00:04:45.936 --rc genhtml_function_coverage=1 00:04:45.936 --rc genhtml_legend=1 00:04:45.936 --rc geninfo_all_blocks=1 00:04:45.936 --rc geninfo_unexecuted_blocks=1 00:04:45.936 00:04:45.936 ' 00:04:45.936 09:04:50 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:45.936 09:04:50 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:45.936 09:04:50 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:45.936 09:04:50 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:04:45.936 09:04:50 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:45.936 09:04:50 event -- common/autotest_common.sh@10 -- # set +x 00:04:45.936 ************************************ 00:04:45.936 START TEST event_perf 00:04:45.936 ************************************ 00:04:45.936 09:04:50 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:04:45.936 Running I/O for 1 seconds...[2024-11-17 09:04:50.791489] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:04:45.936 [2024-11-17 09:04:50.791604] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2824465 ] 00:04:45.936 [2024-11-17 09:04:50.931784] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:46.194 [2024-11-17 09:04:51.078603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:46.194 [2024-11-17 09:04:51.078672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:46.194 [2024-11-17 09:04:51.078759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.194 [2024-11-17 09:04:51.078774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:47.567 Running I/O for 1 seconds... 00:04:47.567 lcore 0: 229737 00:04:47.567 lcore 1: 229737 00:04:47.567 lcore 2: 229736 00:04:47.567 lcore 3: 229737 00:04:47.567 done. 00:04:47.567 00:04:47.567 real 0m1.591s 00:04:47.567 user 0m4.430s 00:04:47.567 sys 0m0.146s 00:04:47.567 09:04:52 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:47.567 09:04:52 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:47.567 ************************************ 00:04:47.567 END TEST event_perf 00:04:47.567 ************************************ 00:04:47.567 09:04:52 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:47.567 09:04:52 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:47.567 09:04:52 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:47.567 09:04:52 event -- common/autotest_common.sh@10 -- # set +x 00:04:47.567 ************************************ 00:04:47.567 START TEST event_reactor 00:04:47.567 ************************************ 00:04:47.567 09:04:52 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:47.567 [2024-11-17 09:04:52.434888] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:04:47.567 [2024-11-17 09:04:52.434997] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2824742 ] 00:04:47.567 [2024-11-17 09:04:52.575422] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.825 [2024-11-17 09:04:52.712275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.199 test_start 00:04:49.199 oneshot 00:04:49.199 tick 100 00:04:49.199 tick 100 00:04:49.199 tick 250 00:04:49.199 tick 100 00:04:49.199 tick 100 00:04:49.199 tick 250 00:04:49.199 tick 100 00:04:49.199 tick 500 00:04:49.199 tick 100 00:04:49.199 tick 100 00:04:49.199 tick 250 00:04:49.199 tick 100 00:04:49.199 tick 100 00:04:49.199 test_end 00:04:49.199 00:04:49.199 real 0m1.569s 00:04:49.199 user 0m1.410s 00:04:49.199 sys 0m0.150s 00:04:49.199 09:04:53 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:49.199 09:04:53 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:49.199 ************************************ 00:04:49.199 END TEST event_reactor 00:04:49.199 ************************************ 00:04:49.199 09:04:53 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:49.199 09:04:53 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:49.199 09:04:53 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:49.199 09:04:53 event -- common/autotest_common.sh@10 -- # set +x 00:04:49.199 ************************************ 00:04:49.199 START TEST event_reactor_perf 00:04:49.199 ************************************ 00:04:49.199 09:04:54 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:49.199 [2024-11-17 09:04:54.050785] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:04:49.199 [2024-11-17 09:04:54.050893] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2824913 ] 00:04:49.199 [2024-11-17 09:04:54.192515] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.457 [2024-11-17 09:04:54.331627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.829 test_start 00:04:50.829 test_end 00:04:50.829 Performance: 266581 events per second 00:04:50.829 00:04:50.829 real 0m1.572s 00:04:50.829 user 0m1.418s 00:04:50.829 sys 0m0.145s 00:04:50.829 09:04:55 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:50.829 09:04:55 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:50.829 ************************************ 00:04:50.829 END TEST event_reactor_perf 00:04:50.829 ************************************ 00:04:50.829 09:04:55 event -- event/event.sh@49 -- # uname -s 00:04:50.829 09:04:55 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:50.829 09:04:55 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:50.829 09:04:55 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:50.829 09:04:55 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:50.829 09:04:55 event -- common/autotest_common.sh@10 -- # set +x 00:04:50.829 ************************************ 00:04:50.829 START TEST event_scheduler 00:04:50.829 ************************************ 00:04:50.829 09:04:55 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:50.829 * Looking for test storage... 
00:04:50.829 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:50.829 09:04:55 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:50.829 09:04:55 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:04:50.829 09:04:55 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:50.829 09:04:55 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:50.829 09:04:55 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:50.829 09:04:55 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:50.829 09:04:55 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:50.829 09:04:55 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:50.829 09:04:55 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:50.829 09:04:55 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:50.829 09:04:55 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:50.829 09:04:55 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:50.829 09:04:55 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:50.829 09:04:55 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:50.829 09:04:55 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:50.829 09:04:55 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:50.829 09:04:55 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:50.829 09:04:55 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:50.829 09:04:55 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:50.829 09:04:55 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:50.829 09:04:55 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:50.829 09:04:55 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:50.829 09:04:55 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:50.829 09:04:55 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:50.829 09:04:55 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:50.829 09:04:55 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:50.829 09:04:55 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:50.829 09:04:55 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:50.829 09:04:55 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:50.829 09:04:55 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:50.829 09:04:55 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:50.829 09:04:55 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:50.829 09:04:55 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:50.829 09:04:55 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:50.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.829 --rc genhtml_branch_coverage=1 00:04:50.829 --rc genhtml_function_coverage=1 00:04:50.829 --rc genhtml_legend=1 00:04:50.829 --rc geninfo_all_blocks=1 00:04:50.829 --rc geninfo_unexecuted_blocks=1 00:04:50.829 00:04:50.829 ' 00:04:50.829 09:04:55 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:50.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.829 --rc genhtml_branch_coverage=1 00:04:50.829 --rc genhtml_function_coverage=1 00:04:50.829 --rc genhtml_legend=1 00:04:50.829 --rc geninfo_all_blocks=1 00:04:50.829 --rc geninfo_unexecuted_blocks=1 00:04:50.829 00:04:50.829 ' 00:04:50.829 09:04:55 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:50.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.829 --rc genhtml_branch_coverage=1 00:04:50.829 --rc genhtml_function_coverage=1 00:04:50.829 --rc genhtml_legend=1 00:04:50.829 --rc geninfo_all_blocks=1 00:04:50.829 --rc geninfo_unexecuted_blocks=1 00:04:50.829 00:04:50.829 ' 00:04:50.829 09:04:55 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:50.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.830 --rc genhtml_branch_coverage=1 00:04:50.830 --rc genhtml_function_coverage=1 00:04:50.830 --rc genhtml_legend=1 00:04:50.830 --rc geninfo_all_blocks=1 00:04:50.830 --rc geninfo_unexecuted_blocks=1 00:04:50.830 00:04:50.830 ' 00:04:50.830 09:04:55 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:50.830 09:04:55 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2825215 00:04:50.830 09:04:55 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:50.830 09:04:55 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:50.830 09:04:55 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 
2825215 00:04:50.830 09:04:55 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 2825215 ']' 00:04:50.830 09:04:55 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:50.830 09:04:55 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:50.830 09:04:55 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:50.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:50.830 09:04:55 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:50.830 09:04:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:51.088 [2024-11-17 09:04:55.857144] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:04:51.088 [2024-11-17 09:04:55.857289] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2825215 ] 00:04:51.088 [2024-11-17 09:04:55.992292] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:51.346 [2024-11-17 09:04:56.128645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.346 [2024-11-17 09:04:56.128690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:51.346 [2024-11-17 09:04:56.128723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:51.346 [2024-11-17 09:04:56.128731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:51.911 09:04:56 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:51.911 09:04:56 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:04:51.911 09:04:56 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:51.911 09:04:56 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.911 09:04:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:51.911 [2024-11-17 09:04:56.847828] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:51.911 [2024-11-17 09:04:56.847886] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:51.911 [2024-11-17 09:04:56.847919] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:51.911 [2024-11-17 09:04:56.847938] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:51.911 [2024-11-17 09:04:56.847965] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:51.911 09:04:56 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:51.911 09:04:56 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:51.911 09:04:56 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.911 09:04:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:52.169 [2024-11-17 09:04:57.158695] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
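The dynamic-scheduler bring-up traced above can be reproduced by hand. A minimal sketch, assuming the same source tree and that rpc.py reaches the app on the default /var/tmp/spdk.sock socket it listens on:

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # start the scheduler test app on 4 cores (main core 2), held at --wait-for-rpc
  "$SPDK_DIR/test/event/scheduler/scheduler" -m 0xF -p 0x2 --wait-for-rpc -f &
  scheduler_pid=$!
  sleep 1   # crude stand-in for the harness's waitforlisten on /var/tmp/spdk.sock
  # select the dynamic scheduler, then let framework initialization continue
  "$SPDK_DIR/scripts/rpc.py" framework_set_scheduler dynamic
  "$SPDK_DIR/scripts/rpc.py" framework_start_init

Holding the app at --wait-for-rpc is what allows the scheduler to be selected before framework_start_init finishes initialization, which is exactly the ordering visible in the trace above.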
00:04:52.169 09:04:57 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:52.169 09:04:57 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:52.169 09:04:57 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:52.169 09:04:57 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:52.169 09:04:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:52.428 ************************************ 00:04:52.428 START TEST scheduler_create_thread 00:04:52.428 ************************************ 00:04:52.428 09:04:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:04:52.428 09:04:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:52.428 09:04:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:52.428 09:04:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:52.428 2 00:04:52.428 09:04:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:52.428 09:04:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:52.428 09:04:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:52.428 09:04:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:52.428 3 00:04:52.428 09:04:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:52.428 09:04:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:52.428 09:04:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:52.428 09:04:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:52.428 4 00:04:52.428 09:04:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:52.428 09:04:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:52.428 09:04:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:52.428 09:04:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:52.428 5 00:04:52.428 09:04:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:52.428 09:04:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:52.428 09:04:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:52.428 09:04:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:52.428 6 00:04:52.428 09:04:57 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:52.428 09:04:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:52.428 09:04:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:52.428 09:04:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:52.428 7 00:04:52.428 09:04:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:52.428 09:04:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:52.428 09:04:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:52.428 09:04:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:52.428 8 00:04:52.428 09:04:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:52.428 09:04:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:52.428 09:04:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:52.428 09:04:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:52.428 9 00:04:52.428 09:04:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:52.428 09:04:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:52.428 09:04:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:52.428 09:04:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:52.428 10 00:04:52.428 09:04:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:52.428 09:04:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:52.428 09:04:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:52.428 09:04:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:52.428 09:04:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:52.428 09:04:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:52.428 09:04:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:52.428 09:04:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:52.428 09:04:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:52.428 09:04:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:52.428 09:04:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:52.428 09:04:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:52.428 09:04:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:52.428 09:04:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:52.428 09:04:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:52.428 09:04:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:52.428 09:04:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:52.428 09:04:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:52.428 09:04:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:52.428 00:04:52.428 real 0m0.110s 00:04:52.428 user 0m0.011s 00:04:52.428 sys 0m0.003s 00:04:52.428 09:04:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:52.428 09:04:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:52.428 ************************************ 00:04:52.428 END TEST scheduler_create_thread 00:04:52.428 ************************************ 00:04:52.428 09:04:57 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:52.428 09:04:57 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2825215 00:04:52.428 09:04:57 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 2825215 ']' 00:04:52.428 09:04:57 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 2825215 00:04:52.428 09:04:57 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:04:52.428 09:04:57 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:52.428 09:04:57 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2825215 00:04:52.428 09:04:57 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:52.428 09:04:57 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:52.428 09:04:57 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2825215' 00:04:52.428 killing process with pid 2825215 00:04:52.428 09:04:57 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 2825215 00:04:52.429 09:04:57 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 2825215 00:04:52.994 [2024-11-17 09:04:57.781769] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
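The scheduler_create_thread subtest above drives a test-only RPC plugin. The same calls in isolation look like the sketch below, assuming scheduler_plugin is importable by rpc.py (for example via PYTHONPATH pointing at test/event/scheduler) and with the thread name demo_active being purely illustrative:

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC() { "$SPDK_DIR/scripts/rpc.py" --plugin scheduler_plugin "$@"; }
  # spawn a thread pinned to core 0 that reports itself 100% active; the call prints the new thread id
  tid=$(RPC scheduler_thread_create -n demo_active -m 0x1 -a 100)
  # drop the reported activity to 50% so the dynamic scheduler can rebalance it
  RPC scheduler_thread_set_active "$tid" 50
  # remove the thread again
  RPC scheduler_thread_delete "$tid"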
00:04:53.928 00:04:53.928 real 0m3.109s 00:04:53.928 user 0m5.483s 00:04:53.928 sys 0m0.492s 00:04:53.928 09:04:58 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:53.928 09:04:58 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:53.928 ************************************ 00:04:53.928 END TEST event_scheduler 00:04:53.928 ************************************ 00:04:53.928 09:04:58 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:53.928 09:04:58 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:53.928 09:04:58 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:53.928 09:04:58 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:53.928 09:04:58 event -- common/autotest_common.sh@10 -- # set +x 00:04:53.928 ************************************ 00:04:53.928 START TEST app_repeat 00:04:53.928 ************************************ 00:04:53.928 09:04:58 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:04:53.928 09:04:58 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:53.928 09:04:58 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:53.928 09:04:58 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:53.928 09:04:58 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:53.928 09:04:58 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:53.928 09:04:58 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:53.928 09:04:58 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:53.928 09:04:58 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2825550 00:04:53.928 09:04:58 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:53.928 09:04:58 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:53.928 09:04:58 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2825550' 00:04:53.928 Process app_repeat pid: 2825550 00:04:53.928 09:04:58 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:53.928 09:04:58 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:53.928 spdk_app_start Round 0 00:04:53.928 09:04:58 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2825550 /var/tmp/spdk-nbd.sock 00:04:53.928 09:04:58 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2825550 ']' 00:04:53.928 09:04:58 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:53.928 09:04:58 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:53.928 09:04:58 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:53.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:53.928 09:04:58 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:53.928 09:04:58 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:53.928 [2024-11-17 09:04:58.857196] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:04:53.928 [2024-11-17 09:04:58.857350] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2825550 ] 00:04:54.186 [2024-11-17 09:04:59.005723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:54.186 [2024-11-17 09:04:59.145755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.186 [2024-11-17 09:04:59.145761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:55.118 09:04:59 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:55.118 09:04:59 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:55.118 09:04:59 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:55.375 Malloc0 00:04:55.375 09:05:00 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:55.632 Malloc1 00:04:55.632 09:05:00 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:55.632 09:05:00 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:55.632 09:05:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:55.632 09:05:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:55.632 09:05:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:55.632 09:05:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:55.632 09:05:00 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:55.632 09:05:00 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:55.632 09:05:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:55.632 09:05:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:55.632 09:05:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:55.632 09:05:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:55.632 09:05:00 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:55.632 09:05:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:55.632 09:05:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:55.632 09:05:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:55.889 /dev/nbd0 00:04:55.889 09:05:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:55.889 09:05:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:55.889 09:05:00 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:55.889 09:05:00 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:55.889 09:05:00 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:55.889 09:05:00 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:55.889 09:05:00 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 
/proc/partitions 00:04:55.889 09:05:00 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:55.889 09:05:00 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:55.889 09:05:00 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:55.889 09:05:00 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:55.889 1+0 records in 00:04:55.889 1+0 records out 00:04:55.889 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000224177 s, 18.3 MB/s 00:04:55.889 09:05:00 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:55.889 09:05:00 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:55.889 09:05:00 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:55.889 09:05:00 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:55.889 09:05:00 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:55.889 09:05:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:55.889 09:05:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:55.889 09:05:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:56.454 /dev/nbd1 00:04:56.454 09:05:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:56.454 09:05:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:56.454 09:05:01 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:56.454 09:05:01 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:56.454 09:05:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:56.454 09:05:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:56.454 09:05:01 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:56.454 09:05:01 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:56.454 09:05:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:56.454 09:05:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:56.455 09:05:01 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:56.455 1+0 records in 00:04:56.455 1+0 records out 00:04:56.455 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000227538 s, 18.0 MB/s 00:04:56.455 09:05:01 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:56.455 09:05:01 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:56.455 09:05:01 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:56.455 09:05:01 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:56.455 09:05:01 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:56.455 09:05:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:56.455 09:05:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:56.455 
09:05:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:56.455 09:05:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:56.455 09:05:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:56.713 09:05:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:56.713 { 00:04:56.713 "nbd_device": "/dev/nbd0", 00:04:56.713 "bdev_name": "Malloc0" 00:04:56.713 }, 00:04:56.713 { 00:04:56.713 "nbd_device": "/dev/nbd1", 00:04:56.713 "bdev_name": "Malloc1" 00:04:56.713 } 00:04:56.713 ]' 00:04:56.713 09:05:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:56.713 { 00:04:56.713 "nbd_device": "/dev/nbd0", 00:04:56.713 "bdev_name": "Malloc0" 00:04:56.713 }, 00:04:56.713 { 00:04:56.713 "nbd_device": "/dev/nbd1", 00:04:56.713 "bdev_name": "Malloc1" 00:04:56.713 } 00:04:56.713 ]' 00:04:56.713 09:05:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:56.713 09:05:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:56.713 /dev/nbd1' 00:04:56.713 09:05:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:56.713 /dev/nbd1' 00:04:56.713 09:05:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:56.713 09:05:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:56.713 09:05:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:56.713 09:05:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:56.713 09:05:01 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:56.713 09:05:01 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:56.713 09:05:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:56.713 09:05:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:56.713 09:05:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:56.713 09:05:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:56.713 09:05:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:56.713 09:05:01 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:56.713 256+0 records in 00:04:56.713 256+0 records out 00:04:56.713 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00509526 s, 206 MB/s 00:04:56.713 09:05:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:56.713 09:05:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:56.713 256+0 records in 00:04:56.713 256+0 records out 00:04:56.713 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.024199 s, 43.3 MB/s 00:04:56.713 09:05:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:56.713 09:05:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:56.713 256+0 records in 00:04:56.713 256+0 records out 00:04:56.713 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0285491 s, 36.7 MB/s 00:04:56.713 09:05:01 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:56.713 09:05:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:56.713 09:05:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:56.713 09:05:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:56.713 09:05:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:56.713 09:05:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:56.713 09:05:01 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:56.713 09:05:01 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:56.713 09:05:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:56.713 09:05:01 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:56.713 09:05:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:56.713 09:05:01 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:56.713 09:05:01 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:56.713 09:05:01 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:56.713 09:05:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:56.713 09:05:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:56.713 09:05:01 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:56.713 09:05:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:56.713 09:05:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:56.971 09:05:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:56.971 09:05:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:56.971 09:05:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:56.971 09:05:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:56.971 09:05:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:56.971 09:05:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:56.971 09:05:01 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:56.971 09:05:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:56.971 09:05:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:56.971 09:05:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:57.229 09:05:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:57.229 09:05:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:57.229 09:05:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:57.229 09:05:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:57.229 09:05:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:04:57.229 09:05:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:57.229 09:05:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:57.229 09:05:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:57.229 09:05:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:57.229 09:05:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:57.229 09:05:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:57.486 09:05:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:57.486 09:05:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:57.486 09:05:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:57.743 09:05:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:57.743 09:05:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:57.743 09:05:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:57.743 09:05:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:57.743 09:05:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:57.743 09:05:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:57.744 09:05:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:57.744 09:05:02 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:57.744 09:05:02 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:57.744 09:05:02 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:58.001 09:05:02 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:59.373 [2024-11-17 09:05:04.182880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:59.373 [2024-11-17 09:05:04.317828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:59.373 [2024-11-17 09:05:04.317831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.631 [2024-11-17 09:05:04.534020] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:59.631 [2024-11-17 09:05:04.534109] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:01.001 09:05:05 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:01.001 09:05:05 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:01.001 spdk_app_start Round 1 00:05:01.001 09:05:05 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2825550 /var/tmp/spdk-nbd.sock 00:05:01.001 09:05:05 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2825550 ']' 00:05:01.001 09:05:05 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:01.001 09:05:05 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:01.001 09:05:05 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:01.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
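
The Round 0 trace above drives nbd_rpc_data_verify by hand against the app_repeat instance. A minimal sketch of that RPC sequence follows; it assumes app_repeat is already listening on /var/tmp/spdk-nbd.sock, and RPC is only local shorthand for the rpc.py invocation shown in the trace.

  RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  $RPC bdev_malloc_create 64 4096                  # -> Malloc0
  $RPC bdev_malloc_create 64 4096                  # -> Malloc1
  $RPC nbd_start_disk Malloc0 /dev/nbd0            # export each bdev as an NBD block device
  $RPC nbd_start_disk Malloc1 /dev/nbd1
  grep -q -w nbd0 /proc/partitions                 # waitfornbd: retried until the kernel registers the device
  $RPC nbd_get_disks | jq -r '.[] | .nbd_device'   # expect /dev/nbd0 and /dev/nbd1 while the disks are up
  $RPC nbd_stop_disk /dev/nbd0
  $RPC nbd_stop_disk /dev/nbd1
  $RPC spdk_kill_instance SIGTERM                  # ends the round; event.sh then sleeps 3s before the next one
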
00:05:01.001 09:05:05 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:01.001 09:05:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:01.258 09:05:06 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:01.258 09:05:06 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:01.258 09:05:06 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:01.824 Malloc0 00:05:01.824 09:05:06 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:02.082 Malloc1 00:05:02.082 09:05:06 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:02.082 09:05:06 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:02.082 09:05:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:02.082 09:05:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:02.082 09:05:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:02.082 09:05:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:02.082 09:05:06 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:02.082 09:05:06 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:02.082 09:05:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:02.082 09:05:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:02.082 09:05:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:02.082 09:05:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:02.082 09:05:06 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:02.082 09:05:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:02.082 09:05:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:02.082 09:05:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:02.340 /dev/nbd0 00:05:02.340 09:05:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:02.340 09:05:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:02.340 09:05:07 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:02.340 09:05:07 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:02.340 09:05:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:02.340 09:05:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:02.340 09:05:07 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:02.340 09:05:07 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:02.340 09:05:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:02.340 09:05:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:02.340 09:05:07 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:02.340 1+0 records in 00:05:02.340 1+0 records out 00:05:02.340 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000218124 s, 18.8 MB/s 00:05:02.340 09:05:07 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:02.340 09:05:07 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:02.340 09:05:07 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:02.340 09:05:07 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:02.340 09:05:07 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:02.340 09:05:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:02.340 09:05:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:02.340 09:05:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:02.598 /dev/nbd1 00:05:02.598 09:05:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:02.598 09:05:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:02.598 09:05:07 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:02.598 09:05:07 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:02.598 09:05:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:02.598 09:05:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:02.598 09:05:07 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:02.598 09:05:07 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:02.598 09:05:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:02.598 09:05:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:02.598 09:05:07 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:02.598 1+0 records in 00:05:02.598 1+0 records out 00:05:02.598 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000209241 s, 19.6 MB/s 00:05:02.598 09:05:07 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:02.598 09:05:07 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:02.598 09:05:07 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:02.598 09:05:07 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:02.598 09:05:07 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:02.598 09:05:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:02.598 09:05:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:02.598 09:05:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:02.598 09:05:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:02.598 09:05:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:02.856 09:05:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:02.856 { 00:05:02.856 "nbd_device": "/dev/nbd0", 00:05:02.856 "bdev_name": "Malloc0" 00:05:02.856 }, 00:05:02.856 { 00:05:02.856 "nbd_device": "/dev/nbd1", 00:05:02.856 "bdev_name": "Malloc1" 00:05:02.856 } 00:05:02.856 ]' 00:05:02.856 09:05:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:02.856 { 00:05:02.856 "nbd_device": "/dev/nbd0", 00:05:02.856 "bdev_name": "Malloc0" 00:05:02.856 }, 00:05:02.856 { 00:05:02.856 "nbd_device": "/dev/nbd1", 00:05:02.856 "bdev_name": "Malloc1" 00:05:02.856 } 00:05:02.856 ]' 00:05:02.856 09:05:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:03.113 09:05:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:03.113 /dev/nbd1' 00:05:03.113 09:05:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:03.113 /dev/nbd1' 00:05:03.113 09:05:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:03.113 09:05:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:03.113 09:05:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:03.113 09:05:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:03.113 09:05:07 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:03.113 09:05:07 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:03.113 09:05:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:03.113 09:05:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:03.113 09:05:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:03.113 09:05:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:03.113 09:05:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:03.113 09:05:07 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:03.113 256+0 records in 00:05:03.113 256+0 records out 00:05:03.113 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00532431 s, 197 MB/s 00:05:03.113 09:05:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:03.113 09:05:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:03.113 256+0 records in 00:05:03.113 256+0 records out 00:05:03.113 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0270679 s, 38.7 MB/s 00:05:03.113 09:05:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:03.113 09:05:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:03.113 256+0 records in 00:05:03.113 256+0 records out 00:05:03.113 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.035553 s, 29.5 MB/s 00:05:03.113 09:05:07 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:03.114 09:05:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:03.114 09:05:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:03.114 09:05:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:03.114 09:05:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:03.114 09:05:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:03.114 09:05:07 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:03.114 09:05:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:03.114 09:05:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:03.114 09:05:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:03.114 09:05:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:03.114 09:05:07 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:03.114 09:05:07 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:03.114 09:05:07 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:03.114 09:05:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:03.114 09:05:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:03.114 09:05:07 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:03.114 09:05:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:03.114 09:05:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:03.372 09:05:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:03.372 09:05:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:03.372 09:05:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:03.372 09:05:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:03.372 09:05:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:03.372 09:05:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:03.372 09:05:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:03.372 09:05:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:03.372 09:05:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:03.372 09:05:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:03.629 09:05:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:03.629 09:05:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:03.629 09:05:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:03.629 09:05:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:03.629 09:05:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:03.629 09:05:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:03.629 09:05:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:03.629 09:05:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:03.629 09:05:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:03.629 09:05:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:03.629 09:05:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:03.887 09:05:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:03.887 09:05:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:03.887 09:05:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:03.888 09:05:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:03.888 09:05:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:03.888 09:05:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:03.888 09:05:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:03.888 09:05:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:03.888 09:05:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:03.888 09:05:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:03.888 09:05:08 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:03.888 09:05:08 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:03.888 09:05:08 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:04.454 09:05:09 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:05.826 [2024-11-17 09:05:10.541458] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:05.826 [2024-11-17 09:05:10.677035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.826 [2024-11-17 09:05:10.677037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:06.083 [2024-11-17 09:05:10.889100] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:06.083 [2024-11-17 09:05:10.889185] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:07.476 09:05:12 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:07.476 09:05:12 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:07.476 spdk_app_start Round 2 00:05:07.476 09:05:12 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2825550 /var/tmp/spdk-nbd.sock 00:05:07.476 09:05:12 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2825550 ']' 00:05:07.476 09:05:12 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:07.476 09:05:12 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:07.476 09:05:12 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:07.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
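
Each round above also pushes 1 MiB of random data through both NBD devices and compares it back (nbd_dd_data_verify). A condensed sketch of that write/verify step, with the temp-file path shortened to nbdrandtest for readability, is:

  tmp=nbdrandtest
  dd if=/dev/urandom of=$tmp bs=4096 count=256           # 1 MiB of random data
  for d in /dev/nbd0 /dev/nbd1; do
      dd if=$tmp of=$d bs=4096 count=256 oflag=direct    # write it through the NBD device, bypassing the page cache
      cmp -b -n 1M $tmp $d                               # read back and compare byte for byte
  done
  rm $tmp
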
00:05:07.476 09:05:12 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:07.476 09:05:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:07.764 09:05:12 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:07.764 09:05:12 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:07.764 09:05:12 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:08.029 Malloc0 00:05:08.029 09:05:12 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:08.287 Malloc1 00:05:08.287 09:05:13 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:08.287 09:05:13 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:08.287 09:05:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:08.287 09:05:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:08.287 09:05:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:08.287 09:05:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:08.287 09:05:13 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:08.287 09:05:13 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:08.287 09:05:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:08.287 09:05:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:08.287 09:05:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:08.287 09:05:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:08.287 09:05:13 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:08.287 09:05:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:08.287 09:05:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:08.287 09:05:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:08.853 /dev/nbd0 00:05:08.853 09:05:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:08.853 09:05:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:08.853 09:05:13 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:08.853 09:05:13 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:08.853 09:05:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:08.853 09:05:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:08.853 09:05:13 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:08.853 09:05:13 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:08.853 09:05:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:08.853 09:05:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:08.853 09:05:13 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:08.853 1+0 records in 00:05:08.853 1+0 records out 00:05:08.853 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000279345 s, 14.7 MB/s 00:05:08.853 09:05:13 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:08.853 09:05:13 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:08.853 09:05:13 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:08.853 09:05:13 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:08.853 09:05:13 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:08.853 09:05:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:08.853 09:05:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:08.853 09:05:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:09.111 /dev/nbd1 00:05:09.111 09:05:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:09.111 09:05:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:09.111 09:05:13 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:09.111 09:05:13 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:09.111 09:05:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:09.111 09:05:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:09.111 09:05:13 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:09.111 09:05:13 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:09.111 09:05:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:09.111 09:05:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:09.111 09:05:13 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:09.111 1+0 records in 00:05:09.111 1+0 records out 00:05:09.111 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000212286 s, 19.3 MB/s 00:05:09.111 09:05:13 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:09.111 09:05:13 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:09.111 09:05:13 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:09.111 09:05:13 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:09.111 09:05:13 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:09.111 09:05:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:09.111 09:05:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:09.111 09:05:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:09.112 09:05:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.112 09:05:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:09.370 09:05:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:09.370 { 00:05:09.370 "nbd_device": "/dev/nbd0", 00:05:09.370 "bdev_name": "Malloc0" 00:05:09.370 }, 00:05:09.370 { 00:05:09.370 "nbd_device": "/dev/nbd1", 00:05:09.370 "bdev_name": "Malloc1" 00:05:09.370 } 00:05:09.370 ]' 00:05:09.370 09:05:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:09.370 { 00:05:09.370 "nbd_device": "/dev/nbd0", 00:05:09.370 "bdev_name": "Malloc0" 00:05:09.370 }, 00:05:09.370 { 00:05:09.370 "nbd_device": "/dev/nbd1", 00:05:09.370 "bdev_name": "Malloc1" 00:05:09.370 } 00:05:09.370 ]' 00:05:09.370 09:05:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:09.370 09:05:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:09.370 /dev/nbd1' 00:05:09.370 09:05:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:09.370 /dev/nbd1' 00:05:09.370 09:05:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:09.370 09:05:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:09.370 09:05:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:09.370 09:05:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:09.370 09:05:14 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:09.370 09:05:14 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:09.370 09:05:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.370 09:05:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:09.370 09:05:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:09.370 09:05:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:09.370 09:05:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:09.370 09:05:14 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:09.370 256+0 records in 00:05:09.370 256+0 records out 00:05:09.370 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00385107 s, 272 MB/s 00:05:09.370 09:05:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:09.370 09:05:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:09.370 256+0 records in 00:05:09.370 256+0 records out 00:05:09.370 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.027774 s, 37.8 MB/s 00:05:09.370 09:05:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:09.370 09:05:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:09.370 256+0 records in 00:05:09.370 256+0 records out 00:05:09.370 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0292133 s, 35.9 MB/s 00:05:09.370 09:05:14 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:09.370 09:05:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.370 09:05:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:09.370 09:05:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:09.370 09:05:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:09.370 09:05:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:09.370 09:05:14 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:09.370 09:05:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:09.370 09:05:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:09.370 09:05:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:09.370 09:05:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:09.370 09:05:14 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:09.370 09:05:14 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:09.370 09:05:14 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.370 09:05:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.370 09:05:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:09.370 09:05:14 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:09.370 09:05:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:09.370 09:05:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:09.936 09:05:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:09.936 09:05:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:09.936 09:05:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:09.936 09:05:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:09.936 09:05:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:09.936 09:05:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:09.936 09:05:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:09.936 09:05:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:09.936 09:05:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:09.936 09:05:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:10.193 09:05:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:10.193 09:05:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:10.193 09:05:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:10.193 09:05:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:10.193 09:05:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:10.193 09:05:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:10.193 09:05:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:10.193 09:05:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:10.193 09:05:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:10.193 09:05:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.193 09:05:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:10.451 09:05:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:10.451 09:05:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:10.451 09:05:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:10.451 09:05:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:10.451 09:05:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:10.451 09:05:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:10.451 09:05:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:10.451 09:05:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:10.451 09:05:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:10.451 09:05:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:10.451 09:05:15 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:10.451 09:05:15 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:10.451 09:05:15 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:10.709 09:05:15 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:12.083 [2024-11-17 09:05:16.917502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:12.083 [2024-11-17 09:05:17.051804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:12.083 [2024-11-17 09:05:17.051807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.341 [2024-11-17 09:05:17.267205] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:12.341 [2024-11-17 09:05:17.267306] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:13.714 09:05:18 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2825550 /var/tmp/spdk-nbd.sock 00:05:13.714 09:05:18 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2825550 ']' 00:05:13.714 09:05:18 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:13.714 09:05:18 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:13.714 09:05:18 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:13.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
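
The nbd_get_count checks in the trace parse the nbd_get_disks JSON with jq and count the /dev/nbd entries, expecting 2 while the disks are exported and 0 after both nbd_stop_disk calls. A standalone sketch, using the same rpc.py path and socket as above:

  disks=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks)
  count=$(echo "$disks" | jq -r '.[] | .nbd_device' | grep -c /dev/nbd)
  [ "$count" -ne 2 ] && echo "unexpected NBD disk count: $count"
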
00:05:13.714 09:05:18 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:13.714 09:05:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:14.281 09:05:18 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:14.281 09:05:18 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:14.281 09:05:18 event.app_repeat -- event/event.sh@39 -- # killprocess 2825550 00:05:14.281 09:05:18 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 2825550 ']' 00:05:14.281 09:05:18 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 2825550 00:05:14.281 09:05:18 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:14.281 09:05:18 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:14.281 09:05:18 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2825550 00:05:14.281 09:05:19 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:14.281 09:05:19 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:14.281 09:05:19 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2825550' 00:05:14.281 killing process with pid 2825550 00:05:14.281 09:05:19 event.app_repeat -- common/autotest_common.sh@973 -- # kill 2825550 00:05:14.281 09:05:19 event.app_repeat -- common/autotest_common.sh@978 -- # wait 2825550 00:05:15.215 spdk_app_start is called in Round 0. 00:05:15.215 Shutdown signal received, stop current app iteration 00:05:15.215 Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 reinitialization... 00:05:15.215 spdk_app_start is called in Round 1. 00:05:15.215 Shutdown signal received, stop current app iteration 00:05:15.215 Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 reinitialization... 00:05:15.215 spdk_app_start is called in Round 2. 00:05:15.215 Shutdown signal received, stop current app iteration 00:05:15.215 Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 reinitialization... 00:05:15.215 spdk_app_start is called in Round 3. 
00:05:15.215 Shutdown signal received, stop current app iteration 00:05:15.215 09:05:20 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:15.215 09:05:20 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:15.215 00:05:15.215 real 0m21.272s 00:05:15.215 user 0m45.343s 00:05:15.215 sys 0m3.338s 00:05:15.215 09:05:20 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:15.215 09:05:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:15.215 ************************************ 00:05:15.215 END TEST app_repeat 00:05:15.215 ************************************ 00:05:15.215 09:05:20 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:15.215 09:05:20 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:15.215 09:05:20 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:15.215 09:05:20 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:15.215 09:05:20 event -- common/autotest_common.sh@10 -- # set +x 00:05:15.215 ************************************ 00:05:15.215 START TEST cpu_locks 00:05:15.215 ************************************ 00:05:15.215 09:05:20 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:15.215 * Looking for test storage... 00:05:15.215 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:15.215 09:05:20 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:15.215 09:05:20 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:05:15.215 09:05:20 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:15.474 09:05:20 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:15.474 09:05:20 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:15.474 09:05:20 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:15.474 09:05:20 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:15.474 09:05:20 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:15.474 09:05:20 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:15.474 09:05:20 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:15.474 09:05:20 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:15.474 09:05:20 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:15.474 09:05:20 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:15.474 09:05:20 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:15.474 09:05:20 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:15.474 09:05:20 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:15.474 09:05:20 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:15.474 09:05:20 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:15.474 09:05:20 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:15.474 09:05:20 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:15.474 09:05:20 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:15.474 09:05:20 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:15.474 09:05:20 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:15.474 09:05:20 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:15.474 09:05:20 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:15.474 09:05:20 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:15.474 09:05:20 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:15.474 09:05:20 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:15.474 09:05:20 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:15.474 09:05:20 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:15.474 09:05:20 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:15.474 09:05:20 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:15.474 09:05:20 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:15.474 09:05:20 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:15.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.474 --rc genhtml_branch_coverage=1 00:05:15.474 --rc genhtml_function_coverage=1 00:05:15.474 --rc genhtml_legend=1 00:05:15.474 --rc geninfo_all_blocks=1 00:05:15.474 --rc geninfo_unexecuted_blocks=1 00:05:15.474 00:05:15.474 ' 00:05:15.474 09:05:20 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:15.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.474 --rc genhtml_branch_coverage=1 00:05:15.474 --rc genhtml_function_coverage=1 00:05:15.474 --rc genhtml_legend=1 00:05:15.474 --rc geninfo_all_blocks=1 00:05:15.474 --rc geninfo_unexecuted_blocks=1 00:05:15.474 00:05:15.474 ' 00:05:15.474 09:05:20 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:15.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.474 --rc genhtml_branch_coverage=1 00:05:15.474 --rc genhtml_function_coverage=1 00:05:15.474 --rc genhtml_legend=1 00:05:15.474 --rc geninfo_all_blocks=1 00:05:15.474 --rc geninfo_unexecuted_blocks=1 00:05:15.474 00:05:15.474 ' 00:05:15.474 09:05:20 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:15.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.474 --rc genhtml_branch_coverage=1 00:05:15.474 --rc genhtml_function_coverage=1 00:05:15.474 --rc genhtml_legend=1 00:05:15.474 --rc geninfo_all_blocks=1 00:05:15.474 --rc geninfo_unexecuted_blocks=1 00:05:15.474 00:05:15.474 ' 00:05:15.474 09:05:20 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:15.474 09:05:20 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:15.474 09:05:20 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:15.474 09:05:20 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:15.474 09:05:20 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:15.474 09:05:20 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:15.474 09:05:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:15.474 ************************************ 
00:05:15.474 START TEST default_locks 00:05:15.474 ************************************ 00:05:15.474 09:05:20 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:15.474 09:05:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2828322 00:05:15.474 09:05:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:15.474 09:05:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2828322 00:05:15.474 09:05:20 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 2828322 ']' 00:05:15.474 09:05:20 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.474 09:05:20 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:15.474 09:05:20 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:15.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:15.474 09:05:20 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:15.475 09:05:20 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:15.475 [2024-11-17 09:05:20.399888] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:05:15.475 [2024-11-17 09:05:20.400039] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2828322 ] 00:05:15.733 [2024-11-17 09:05:20.546583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.733 [2024-11-17 09:05:20.684294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.667 09:05:21 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:16.667 09:05:21 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:16.667 09:05:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2828322 00:05:16.667 09:05:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2828322 00:05:16.667 09:05:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:17.233 lslocks: write error 00:05:17.233 09:05:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2828322 00:05:17.233 09:05:21 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 2828322 ']' 00:05:17.233 09:05:21 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 2828322 00:05:17.233 09:05:21 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:17.233 09:05:21 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:17.233 09:05:21 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2828322 00:05:17.234 09:05:21 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:17.234 09:05:21 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:17.234 09:05:21 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with 
pid 2828322' 00:05:17.234 killing process with pid 2828322 00:05:17.234 09:05:21 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 2828322 00:05:17.234 09:05:21 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 2828322 00:05:19.763 09:05:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2828322 00:05:19.763 09:05:24 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:19.763 09:05:24 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2828322 00:05:19.763 09:05:24 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:19.763 09:05:24 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:19.763 09:05:24 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:19.763 09:05:24 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:19.763 09:05:24 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 2828322 00:05:19.763 09:05:24 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 2828322 ']' 00:05:19.763 09:05:24 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:19.763 09:05:24 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:19.763 09:05:24 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:19.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:19.763 09:05:24 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:19.763 09:05:24 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:19.763 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2828322) - No such process 00:05:19.763 ERROR: process (pid: 2828322) is no longer running 00:05:19.763 09:05:24 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:19.763 09:05:24 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:19.763 09:05:24 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:19.763 09:05:24 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:19.763 09:05:24 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:19.763 09:05:24 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:19.763 09:05:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:19.763 09:05:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:19.763 09:05:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:19.763 09:05:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:19.763 00:05:19.763 real 0m4.114s 00:05:19.763 user 0m4.116s 00:05:19.763 sys 0m0.757s 00:05:19.763 09:05:24 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:19.763 09:05:24 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:19.763 ************************************ 00:05:19.763 END TEST default_locks 00:05:19.763 ************************************ 00:05:19.763 09:05:24 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:19.763 09:05:24 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:19.763 09:05:24 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:19.763 09:05:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:19.763 ************************************ 00:05:19.763 START TEST default_locks_via_rpc 00:05:19.763 ************************************ 00:05:19.763 09:05:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:19.763 09:05:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2828875 00:05:19.763 09:05:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:19.763 09:05:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2828875 00:05:19.763 09:05:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2828875 ']' 00:05:19.763 09:05:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:19.763 09:05:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:19.763 09:05:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:19.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:19.763 09:05:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:19.763 09:05:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.763 [2024-11-17 09:05:24.558763] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:05:19.763 [2024-11-17 09:05:24.558927] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2828875 ] 00:05:19.763 [2024-11-17 09:05:24.699565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.021 [2024-11-17 09:05:24.834015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.955 09:05:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:20.955 09:05:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:20.955 09:05:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:20.955 09:05:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:20.955 09:05:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.955 09:05:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:20.955 09:05:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:20.955 09:05:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:20.955 09:05:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:20.955 09:05:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:20.955 09:05:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:20.955 09:05:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:20.955 09:05:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.955 09:05:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:20.955 09:05:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2828875 00:05:20.955 09:05:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2828875 00:05:20.955 09:05:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:21.214 09:05:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2828875 00:05:21.214 09:05:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 2828875 ']' 00:05:21.214 09:05:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 2828875 00:05:21.214 09:05:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:21.214 09:05:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:21.214 09:05:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2828875 00:05:21.214 09:05:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:21.214 
09:05:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:21.214 09:05:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2828875' 00:05:21.214 killing process with pid 2828875 00:05:21.214 09:05:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 2828875 00:05:21.214 09:05:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 2828875 00:05:23.744 00:05:23.744 real 0m3.955s 00:05:23.744 user 0m4.013s 00:05:23.744 sys 0m0.694s 00:05:23.744 09:05:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:23.744 09:05:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.744 ************************************ 00:05:23.744 END TEST default_locks_via_rpc 00:05:23.744 ************************************ 00:05:23.744 09:05:28 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:23.744 09:05:28 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:23.744 09:05:28 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:23.744 09:05:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:23.744 ************************************ 00:05:23.744 START TEST non_locking_app_on_locked_coremask 00:05:23.744 ************************************ 00:05:23.744 09:05:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:23.744 09:05:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2829426 00:05:23.744 09:05:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:23.744 09:05:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2829426 /var/tmp/spdk.sock 00:05:23.744 09:05:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2829426 ']' 00:05:23.744 09:05:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.744 09:05:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:23.744 09:05:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:23.744 09:05:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:23.744 09:05:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:23.744 [2024-11-17 09:05:28.557120] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:05:23.744 [2024-11-17 09:05:28.557286] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2829426 ] 00:05:23.744 [2024-11-17 09:05:28.690449] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.003 [2024-11-17 09:05:28.820744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.937 09:05:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:24.937 09:05:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:24.937 09:05:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2829568 00:05:24.937 09:05:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:24.937 09:05:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2829568 /var/tmp/spdk2.sock 00:05:24.937 09:05:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2829568 ']' 00:05:24.937 09:05:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:24.937 09:05:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:24.937 09:05:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:24.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:24.937 09:05:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:24.937 09:05:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:24.937 [2024-11-17 09:05:29.871843] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:05:24.937 [2024-11-17 09:05:29.871994] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2829568 ] 00:05:25.195 [2024-11-17 09:05:30.084430] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:25.195 [2024-11-17 09:05:30.084514] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.453 [2024-11-17 09:05:30.366767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.982 09:05:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:27.982 09:05:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:27.982 09:05:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2829426 00:05:27.982 09:05:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2829426 00:05:27.982 09:05:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:28.240 lslocks: write error 00:05:28.240 09:05:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2829426 00:05:28.240 09:05:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2829426 ']' 00:05:28.240 09:05:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2829426 00:05:28.240 09:05:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:28.240 09:05:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:28.240 09:05:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2829426 00:05:28.240 09:05:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:28.240 09:05:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:28.240 09:05:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2829426' 00:05:28.240 killing process with pid 2829426 00:05:28.240 09:05:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2829426 00:05:28.240 09:05:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2829426 00:05:33.505 09:05:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2829568 00:05:33.505 09:05:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2829568 ']' 00:05:33.505 09:05:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2829568 00:05:33.505 09:05:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:33.505 09:05:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:33.505 09:05:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2829568 00:05:33.505 09:05:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:33.505 09:05:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:33.505 09:05:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2829568' 00:05:33.505 
killing process with pid 2829568 00:05:33.505 09:05:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2829568 00:05:33.505 09:05:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2829568 00:05:36.060 00:05:36.060 real 0m11.961s 00:05:36.060 user 0m12.317s 00:05:36.060 sys 0m1.490s 00:05:36.060 09:05:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:36.060 09:05:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:36.060 ************************************ 00:05:36.060 END TEST non_locking_app_on_locked_coremask 00:05:36.060 ************************************ 00:05:36.060 09:05:40 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:36.060 09:05:40 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:36.060 09:05:40 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:36.060 09:05:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:36.060 ************************************ 00:05:36.060 START TEST locking_app_on_unlocked_coremask 00:05:36.060 ************************************ 00:05:36.060 09:05:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:36.060 09:05:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2830802 00:05:36.060 09:05:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:36.060 09:05:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2830802 /var/tmp/spdk.sock 00:05:36.060 09:05:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2830802 ']' 00:05:36.060 09:05:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.060 09:05:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:36.061 09:05:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:36.061 09:05:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:36.061 09:05:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:36.061 [2024-11-17 09:05:40.566396] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:05:36.061 [2024-11-17 09:05:40.566565] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2830802 ] 00:05:36.061 [2024-11-17 09:05:40.700563] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:36.061 [2024-11-17 09:05:40.700634] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.061 [2024-11-17 09:05:40.833017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.995 09:05:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:36.995 09:05:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:36.995 09:05:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2830950 00:05:36.995 09:05:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:36.995 09:05:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2830950 /var/tmp/spdk2.sock 00:05:36.995 09:05:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2830950 ']' 00:05:36.995 09:05:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:36.995 09:05:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:36.995 09:05:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:36.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:36.995 09:05:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:36.995 09:05:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:36.995 [2024-11-17 09:05:41.898942] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:05:36.995 [2024-11-17 09:05:41.899099] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2830950 ] 00:05:37.253 [2024-11-17 09:05:42.120966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.511 [2024-11-17 09:05:42.399528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.039 09:05:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:40.039 09:05:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:40.039 09:05:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2830950 00:05:40.039 09:05:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2830950 00:05:40.039 09:05:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:40.039 lslocks: write error 00:05:40.039 09:05:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2830802 00:05:40.039 09:05:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2830802 ']' 00:05:40.039 09:05:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 2830802 00:05:40.039 09:05:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:40.039 09:05:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:40.039 09:05:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2830802 00:05:40.039 09:05:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:40.039 09:05:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:40.039 09:05:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2830802' 00:05:40.039 killing process with pid 2830802 00:05:40.039 09:05:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 2830802 00:05:40.039 09:05:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 2830802 00:05:45.303 09:05:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2830950 00:05:45.303 09:05:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2830950 ']' 00:05:45.303 09:05:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 2830950 00:05:45.303 09:05:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:45.303 09:05:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:45.303 09:05:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2830950 00:05:45.303 09:05:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:45.303 09:05:49 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:45.303 09:05:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2830950' 00:05:45.303 killing process with pid 2830950 00:05:45.303 09:05:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 2830950 00:05:45.303 09:05:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 2830950 00:05:47.834 00:05:47.834 real 0m11.877s 00:05:47.834 user 0m12.315s 00:05:47.834 sys 0m1.498s 00:05:47.834 09:05:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:47.834 09:05:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:47.834 ************************************ 00:05:47.834 END TEST locking_app_on_unlocked_coremask 00:05:47.834 ************************************ 00:05:47.834 09:05:52 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:47.834 09:05:52 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:47.834 09:05:52 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:47.834 09:05:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:47.834 ************************************ 00:05:47.834 START TEST locking_app_on_locked_coremask 00:05:47.834 ************************************ 00:05:47.834 09:05:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:47.834 09:05:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2832300 00:05:47.834 09:05:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:47.834 09:05:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2832300 /var/tmp/spdk.sock 00:05:47.834 09:05:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2832300 ']' 00:05:47.834 09:05:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.834 09:05:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:47.834 09:05:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:47.834 09:05:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:47.834 09:05:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:47.834 [2024-11-17 09:05:52.491631] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:05:47.834 [2024-11-17 09:05:52.491764] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2832300 ] 00:05:47.834 [2024-11-17 09:05:52.635030] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.834 [2024-11-17 09:05:52.776189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.770 09:05:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:48.770 09:05:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:48.770 09:05:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2832439 00:05:48.770 09:05:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:48.770 09:05:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2832439 /var/tmp/spdk2.sock 00:05:48.770 09:05:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:48.770 09:05:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2832439 /var/tmp/spdk2.sock 00:05:48.770 09:05:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:48.770 09:05:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:48.770 09:05:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:48.770 09:05:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:48.770 09:05:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 2832439 /var/tmp/spdk2.sock 00:05:48.770 09:05:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2832439 ']' 00:05:48.770 09:05:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:48.770 09:05:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:48.770 09:05:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:48.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:48.770 09:05:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:48.770 09:05:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:49.062 [2024-11-17 09:05:53.852792] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:05:49.062 [2024-11-17 09:05:53.852939] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2832439 ] 00:05:49.352 [2024-11-17 09:05:54.063768] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2832300 has claimed it. 00:05:49.352 [2024-11-17 09:05:54.063858] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:49.610 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2832439) - No such process 00:05:49.610 ERROR: process (pid: 2832439) is no longer running 00:05:49.610 09:05:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:49.610 09:05:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:49.610 09:05:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:49.610 09:05:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:49.610 09:05:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:49.610 09:05:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:49.610 09:05:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2832300 00:05:49.610 09:05:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2832300 00:05:49.610 09:05:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:49.868 lslocks: write error 00:05:49.868 09:05:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2832300 00:05:49.868 09:05:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2832300 ']' 00:05:49.868 09:05:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2832300 00:05:49.868 09:05:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:49.868 09:05:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:49.868 09:05:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2832300 00:05:49.868 09:05:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:49.868 09:05:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:49.868 09:05:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2832300' 00:05:49.868 killing process with pid 2832300 00:05:49.868 09:05:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2832300 00:05:49.868 09:05:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2832300 00:05:52.398 00:05:52.398 real 0m4.938s 00:05:52.398 user 0m5.159s 00:05:52.398 sys 0m0.953s 00:05:52.398 09:05:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:05:52.398 09:05:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:52.398 ************************************ 00:05:52.398 END TEST locking_app_on_locked_coremask 00:05:52.398 ************************************ 00:05:52.398 09:05:57 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:52.398 09:05:57 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:52.398 09:05:57 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:52.398 09:05:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:52.398 ************************************ 00:05:52.398 START TEST locking_overlapped_coremask 00:05:52.398 ************************************ 00:05:52.398 09:05:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:05:52.399 09:05:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2832877 00:05:52.399 09:05:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:52.399 09:05:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2832877 /var/tmp/spdk.sock 00:05:52.399 09:05:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 2832877 ']' 00:05:52.399 09:05:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.399 09:05:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:52.399 09:05:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.399 09:05:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:52.399 09:05:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:52.657 [2024-11-17 09:05:57.492737] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:05:52.657 [2024-11-17 09:05:57.492879] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2832877 ] 00:05:52.657 [2024-11-17 09:05:57.639167] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:52.915 [2024-11-17 09:05:57.784870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:52.915 [2024-11-17 09:05:57.784925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.915 [2024-11-17 09:05:57.784931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:53.849 09:05:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:53.849 09:05:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:53.849 09:05:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2833015 00:05:53.849 09:05:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:53.849 09:05:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2833015 /var/tmp/spdk2.sock 00:05:53.849 09:05:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:53.849 09:05:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2833015 /var/tmp/spdk2.sock 00:05:53.849 09:05:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:53.849 09:05:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:53.849 09:05:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:53.849 09:05:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:53.849 09:05:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 2833015 /var/tmp/spdk2.sock 00:05:53.849 09:05:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 2833015 ']' 00:05:53.849 09:05:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:53.849 09:05:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:53.849 09:05:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:53.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:53.849 09:05:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:53.849 09:05:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:53.849 [2024-11-17 09:05:58.797233] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:05:53.849 [2024-11-17 09:05:58.797405] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2833015 ] 00:05:54.107 [2024-11-17 09:05:58.996802] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2832877 has claimed it. 00:05:54.107 [2024-11-17 09:05:58.996894] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:54.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2833015) - No such process 00:05:54.672 ERROR: process (pid: 2833015) is no longer running 00:05:54.672 09:05:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:54.673 09:05:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:54.673 09:05:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:54.673 09:05:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:54.673 09:05:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:54.673 09:05:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:54.673 09:05:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:54.673 09:05:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:54.673 09:05:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:54.673 09:05:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:54.673 09:05:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2832877 00:05:54.673 09:05:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 2832877 ']' 00:05:54.673 09:05:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 2832877 00:05:54.673 09:05:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:05:54.673 09:05:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:54.673 09:05:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2832877 00:05:54.673 09:05:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:54.673 09:05:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:54.673 09:05:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2832877' 00:05:54.673 killing process with pid 2832877 00:05:54.673 09:05:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 2832877 00:05:54.673 09:05:59 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 2832877 00:05:57.199 00:05:57.199 real 0m4.338s 00:05:57.199 user 0m11.819s 00:05:57.199 sys 0m0.765s 00:05:57.199 09:06:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:57.199 09:06:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:57.199 ************************************ 00:05:57.199 END TEST locking_overlapped_coremask 00:05:57.199 ************************************ 00:05:57.199 09:06:01 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:57.199 09:06:01 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:57.199 09:06:01 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:57.199 09:06:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:57.199 ************************************ 00:05:57.199 START TEST locking_overlapped_coremask_via_rpc 00:05:57.199 ************************************ 00:05:57.199 09:06:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:05:57.199 09:06:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2833548 00:05:57.199 09:06:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:57.199 09:06:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2833548 /var/tmp/spdk.sock 00:05:57.199 09:06:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2833548 ']' 00:05:57.199 09:06:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.199 09:06:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:57.199 09:06:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.199 09:06:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:57.199 09:06:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.199 [2024-11-17 09:06:01.869463] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:05:57.200 [2024-11-17 09:06:01.869604] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2833548 ] 00:05:57.200 [2024-11-17 09:06:02.015249] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:57.200 [2024-11-17 09:06:02.015326] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:57.200 [2024-11-17 09:06:02.162860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:57.200 [2024-11-17 09:06:02.162912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.200 [2024-11-17 09:06:02.162922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:58.134 09:06:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:58.134 09:06:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:58.134 09:06:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2833703 00:05:58.134 09:06:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:58.134 09:06:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2833703 /var/tmp/spdk2.sock 00:05:58.134 09:06:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2833703 ']' 00:05:58.134 09:06:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:58.134 09:06:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:58.134 09:06:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:58.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:58.134 09:06:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:58.134 09:06:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.392 [2024-11-17 09:06:03.238409] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:05:58.392 [2024-11-17 09:06:03.238552] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2833703 ] 00:05:58.650 [2024-11-17 09:06:03.455010] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:58.650 [2024-11-17 09:06:03.455082] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:58.909 [2024-11-17 09:06:03.752849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:58.909 [2024-11-17 09:06:03.756431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:58.909 [2024-11-17 09:06:03.756443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:01.437 09:06:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:01.437 09:06:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:01.437 09:06:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:01.437 09:06:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:01.437 09:06:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.437 09:06:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:01.437 09:06:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:01.437 09:06:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:01.437 09:06:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:01.437 09:06:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:01.437 09:06:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:01.437 09:06:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:01.437 09:06:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:01.437 09:06:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:01.437 09:06:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:01.438 09:06:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.438 [2024-11-17 09:06:05.999550] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2833548 has claimed it. 
00:06:01.438 request: 00:06:01.438 { 00:06:01.438 "method": "framework_enable_cpumask_locks", 00:06:01.438 "req_id": 1 00:06:01.438 } 00:06:01.438 Got JSON-RPC error response 00:06:01.438 response: 00:06:01.438 { 00:06:01.438 "code": -32603, 00:06:01.438 "message": "Failed to claim CPU core: 2" 00:06:01.438 } 00:06:01.438 09:06:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:01.438 09:06:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:01.438 09:06:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:01.438 09:06:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:01.438 09:06:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:01.438 09:06:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2833548 /var/tmp/spdk.sock 00:06:01.438 09:06:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2833548 ']' 00:06:01.438 09:06:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.438 09:06:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:01.438 09:06:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.438 09:06:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:01.438 09:06:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.438 09:06:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:01.438 09:06:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:01.438 09:06:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2833703 /var/tmp/spdk2.sock 00:06:01.438 09:06:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2833703 ']' 00:06:01.438 09:06:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:01.438 09:06:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:01.438 09:06:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:01.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:01.438 09:06:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:01.438 09:06:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.696 09:06:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:01.696 09:06:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:01.696 09:06:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:01.696 09:06:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:01.696 09:06:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:01.696 09:06:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:01.696 00:06:01.696 real 0m4.794s 00:06:01.696 user 0m1.660s 00:06:01.696 sys 0m0.289s 00:06:01.696 09:06:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:01.696 09:06:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.696 ************************************ 00:06:01.696 END TEST locking_overlapped_coremask_via_rpc 00:06:01.696 ************************************ 00:06:01.696 09:06:06 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:01.696 09:06:06 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2833548 ]] 00:06:01.696 09:06:06 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2833548 00:06:01.696 09:06:06 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2833548 ']' 00:06:01.696 09:06:06 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2833548 00:06:01.696 09:06:06 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:01.696 09:06:06 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:01.696 09:06:06 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2833548 00:06:01.696 09:06:06 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:01.696 09:06:06 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:01.696 09:06:06 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2833548' 00:06:01.696 killing process with pid 2833548 00:06:01.696 09:06:06 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 2833548 00:06:01.696 09:06:06 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 2833548 00:06:04.225 09:06:08 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2833703 ]] 00:06:04.225 09:06:08 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2833703 00:06:04.225 09:06:08 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2833703 ']' 00:06:04.225 09:06:08 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2833703 00:06:04.225 09:06:08 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:04.225 09:06:08 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:06:04.225 09:06:08 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2833703 00:06:04.225 09:06:08 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:04.225 09:06:08 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:04.225 09:06:08 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2833703' 00:06:04.225 killing process with pid 2833703 00:06:04.225 09:06:08 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 2833703 00:06:04.225 09:06:08 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 2833703 00:06:06.123 09:06:11 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:06.123 09:06:11 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:06.123 09:06:11 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2833548 ]] 00:06:06.123 09:06:11 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2833548 00:06:06.123 09:06:11 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2833548 ']' 00:06:06.123 09:06:11 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2833548 00:06:06.123 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2833548) - No such process 00:06:06.123 09:06:11 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 2833548 is not found' 00:06:06.123 Process with pid 2833548 is not found 00:06:06.123 09:06:11 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2833703 ]] 00:06:06.123 09:06:11 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2833703 00:06:06.123 09:06:11 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2833703 ']' 00:06:06.123 09:06:11 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2833703 00:06:06.123 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2833703) - No such process 00:06:06.123 09:06:11 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 2833703 is not found' 00:06:06.123 Process with pid 2833703 is not found 00:06:06.123 09:06:11 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:06.123 00:06:06.123 real 0m50.944s 00:06:06.123 user 1m27.093s 00:06:06.123 sys 0m7.766s 00:06:06.123 09:06:11 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:06.123 09:06:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:06.123 ************************************ 00:06:06.123 END TEST cpu_locks 00:06:06.123 ************************************ 00:06:06.123 00:06:06.123 real 1m20.512s 00:06:06.123 user 2m25.386s 00:06:06.123 sys 0m12.310s 00:06:06.123 09:06:11 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:06.123 09:06:11 event -- common/autotest_common.sh@10 -- # set +x 00:06:06.123 ************************************ 00:06:06.123 END TEST event 00:06:06.123 ************************************ 00:06:06.123 09:06:11 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:06.123 09:06:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:06.123 09:06:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:06.123 09:06:11 -- common/autotest_common.sh@10 -- # set +x 00:06:06.381 ************************************ 00:06:06.381 START TEST thread 00:06:06.381 ************************************ 00:06:06.381 09:06:11 thread -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:06.381 * Looking for test storage... 00:06:06.381 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:06.381 09:06:11 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:06.381 09:06:11 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:06:06.381 09:06:11 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:06.381 09:06:11 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:06.381 09:06:11 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:06.381 09:06:11 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:06.381 09:06:11 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:06.381 09:06:11 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:06.381 09:06:11 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:06.381 09:06:11 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:06.381 09:06:11 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:06.381 09:06:11 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:06.381 09:06:11 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:06.381 09:06:11 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:06.381 09:06:11 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:06.381 09:06:11 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:06.381 09:06:11 thread -- scripts/common.sh@345 -- # : 1 00:06:06.381 09:06:11 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:06.381 09:06:11 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:06.381 09:06:11 thread -- scripts/common.sh@365 -- # decimal 1 00:06:06.381 09:06:11 thread -- scripts/common.sh@353 -- # local d=1 00:06:06.381 09:06:11 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:06.381 09:06:11 thread -- scripts/common.sh@355 -- # echo 1 00:06:06.381 09:06:11 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:06.381 09:06:11 thread -- scripts/common.sh@366 -- # decimal 2 00:06:06.381 09:06:11 thread -- scripts/common.sh@353 -- # local d=2 00:06:06.381 09:06:11 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:06.381 09:06:11 thread -- scripts/common.sh@355 -- # echo 2 00:06:06.381 09:06:11 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:06.381 09:06:11 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:06.381 09:06:11 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:06.382 09:06:11 thread -- scripts/common.sh@368 -- # return 0 00:06:06.382 09:06:11 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:06.382 09:06:11 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:06.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.382 --rc genhtml_branch_coverage=1 00:06:06.382 --rc genhtml_function_coverage=1 00:06:06.382 --rc genhtml_legend=1 00:06:06.382 --rc geninfo_all_blocks=1 00:06:06.382 --rc geninfo_unexecuted_blocks=1 00:06:06.382 00:06:06.382 ' 00:06:06.382 09:06:11 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:06.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.382 --rc genhtml_branch_coverage=1 00:06:06.382 --rc genhtml_function_coverage=1 00:06:06.382 --rc genhtml_legend=1 00:06:06.382 --rc geninfo_all_blocks=1 00:06:06.382 --rc geninfo_unexecuted_blocks=1 00:06:06.382 
00:06:06.382 ' 00:06:06.382 09:06:11 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:06.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.382 --rc genhtml_branch_coverage=1 00:06:06.382 --rc genhtml_function_coverage=1 00:06:06.382 --rc genhtml_legend=1 00:06:06.382 --rc geninfo_all_blocks=1 00:06:06.382 --rc geninfo_unexecuted_blocks=1 00:06:06.382 00:06:06.382 ' 00:06:06.382 09:06:11 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:06.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.382 --rc genhtml_branch_coverage=1 00:06:06.382 --rc genhtml_function_coverage=1 00:06:06.382 --rc genhtml_legend=1 00:06:06.382 --rc geninfo_all_blocks=1 00:06:06.382 --rc geninfo_unexecuted_blocks=1 00:06:06.382 00:06:06.382 ' 00:06:06.382 09:06:11 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:06.382 09:06:11 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:06.382 09:06:11 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:06.382 09:06:11 thread -- common/autotest_common.sh@10 -- # set +x 00:06:06.382 ************************************ 00:06:06.382 START TEST thread_poller_perf 00:06:06.382 ************************************ 00:06:06.382 09:06:11 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:06.382 [2024-11-17 09:06:11.350544] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:06:06.382 [2024-11-17 09:06:11.350692] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2835260 ] 00:06:06.640 [2024-11-17 09:06:11.494814] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.640 [2024-11-17 09:06:11.633355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.640 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:08.015 [2024-11-17T08:06:13.028Z] ====================================== 00:06:08.015 [2024-11-17T08:06:13.028Z] busy:2714249815 (cyc) 00:06:08.015 [2024-11-17T08:06:13.028Z] total_run_count: 282000 00:06:08.015 [2024-11-17T08:06:13.028Z] tsc_hz: 2700000000 (cyc) 00:06:08.015 [2024-11-17T08:06:13.028Z] ====================================== 00:06:08.015 [2024-11-17T08:06:13.028Z] poller_cost: 9624 (cyc), 3564 (nsec) 00:06:08.015 00:06:08.015 real 0m1.588s 00:06:08.015 user 0m1.425s 00:06:08.015 sys 0m0.154s 00:06:08.015 09:06:12 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:08.015 09:06:12 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:08.015 ************************************ 00:06:08.015 END TEST thread_poller_perf 00:06:08.015 ************************************ 00:06:08.015 09:06:12 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:08.015 09:06:12 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:08.015 09:06:12 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:08.015 09:06:12 thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.015 ************************************ 00:06:08.015 START TEST thread_poller_perf 00:06:08.015 ************************************ 00:06:08.015 09:06:12 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:08.015 [2024-11-17 09:06:12.987593] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:06:08.015 [2024-11-17 09:06:12.987708] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2835522 ] 00:06:08.274 [2024-11-17 09:06:13.130143] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.274 [2024-11-17 09:06:13.265546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.274 Running 1000 pollers for 1 seconds with 0 microseconds period. 
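The poller_perf summary lines reduce to simple arithmetic: poller_cost is the busy cycle count divided by total_run_count, and the nanosecond figure converts those cycles at the reported tsc_hz. A worked check of the 1-microsecond-period run, using only numbers printed above:

# Worked check of the first poller_perf result (values copied from the log above).
busy=2714249815; runs=282000; tsc_hz=2700000000
echo $(( busy / runs ))                          # 9624 cycles per poll
echo $(( busy / runs * 1000000000 / tsc_hz ))    # 3564 nsec per poll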
00:06:09.647 [2024-11-17T08:06:14.660Z] ====================================== 00:06:09.647 [2024-11-17T08:06:14.660Z] busy:2704756340 (cyc) 00:06:09.647 [2024-11-17T08:06:14.660Z] total_run_count: 3631000 00:06:09.647 [2024-11-17T08:06:14.660Z] tsc_hz: 2700000000 (cyc) 00:06:09.647 [2024-11-17T08:06:14.660Z] ====================================== 00:06:09.647 [2024-11-17T08:06:14.660Z] poller_cost: 744 (cyc), 275 (nsec) 00:06:09.647 00:06:09.647 real 0m1.573s 00:06:09.647 user 0m1.416s 00:06:09.647 sys 0m0.149s 00:06:09.647 09:06:14 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:09.647 09:06:14 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:09.647 ************************************ 00:06:09.647 END TEST thread_poller_perf 00:06:09.647 ************************************ 00:06:09.647 09:06:14 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:09.647 00:06:09.647 real 0m3.402s 00:06:09.647 user 0m2.976s 00:06:09.647 sys 0m0.425s 00:06:09.647 09:06:14 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:09.647 09:06:14 thread -- common/autotest_common.sh@10 -- # set +x 00:06:09.647 ************************************ 00:06:09.647 END TEST thread 00:06:09.647 ************************************ 00:06:09.647 09:06:14 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:09.647 09:06:14 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:09.647 09:06:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:09.647 09:06:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:09.647 09:06:14 -- common/autotest_common.sh@10 -- # set +x 00:06:09.647 ************************************ 00:06:09.647 START TEST app_cmdline 00:06:09.647 ************************************ 00:06:09.647 09:06:14 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:09.647 * Looking for test storage... 
00:06:09.647 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:09.647 09:06:14 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:09.647 09:06:14 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:06:09.647 09:06:14 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:09.906 09:06:14 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:09.906 09:06:14 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:09.906 09:06:14 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:09.906 09:06:14 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:09.906 09:06:14 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:09.906 09:06:14 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:09.906 09:06:14 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:09.906 09:06:14 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:09.906 09:06:14 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:09.906 09:06:14 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:09.906 09:06:14 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:09.906 09:06:14 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:09.906 09:06:14 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:09.906 09:06:14 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:09.906 09:06:14 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:09.906 09:06:14 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:09.906 09:06:14 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:09.906 09:06:14 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:09.906 09:06:14 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:09.906 09:06:14 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:09.906 09:06:14 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:09.906 09:06:14 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:09.906 09:06:14 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:09.906 09:06:14 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:09.906 09:06:14 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:09.906 09:06:14 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:09.906 09:06:14 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:09.906 09:06:14 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:09.906 09:06:14 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:09.906 09:06:14 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:09.906 09:06:14 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:09.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.906 --rc genhtml_branch_coverage=1 00:06:09.906 --rc genhtml_function_coverage=1 00:06:09.906 --rc genhtml_legend=1 00:06:09.906 --rc geninfo_all_blocks=1 00:06:09.906 --rc geninfo_unexecuted_blocks=1 00:06:09.906 00:06:09.906 ' 00:06:09.906 09:06:14 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:09.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.906 --rc genhtml_branch_coverage=1 00:06:09.906 --rc genhtml_function_coverage=1 00:06:09.906 --rc genhtml_legend=1 00:06:09.906 --rc geninfo_all_blocks=1 00:06:09.906 --rc geninfo_unexecuted_blocks=1 
00:06:09.906 00:06:09.906 ' 00:06:09.906 09:06:14 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:09.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.906 --rc genhtml_branch_coverage=1 00:06:09.906 --rc genhtml_function_coverage=1 00:06:09.906 --rc genhtml_legend=1 00:06:09.906 --rc geninfo_all_blocks=1 00:06:09.906 --rc geninfo_unexecuted_blocks=1 00:06:09.906 00:06:09.906 ' 00:06:09.906 09:06:14 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:09.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.906 --rc genhtml_branch_coverage=1 00:06:09.906 --rc genhtml_function_coverage=1 00:06:09.906 --rc genhtml_legend=1 00:06:09.906 --rc geninfo_all_blocks=1 00:06:09.906 --rc geninfo_unexecuted_blocks=1 00:06:09.906 00:06:09.906 ' 00:06:09.906 09:06:14 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:09.906 09:06:14 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2835797 00:06:09.906 09:06:14 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:09.906 09:06:14 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2835797 00:06:09.906 09:06:14 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 2835797 ']' 00:06:09.906 09:06:14 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.906 09:06:14 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:09.906 09:06:14 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:09.906 09:06:14 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:09.906 09:06:14 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:09.906 [2024-11-17 09:06:14.840519] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:06:09.906 [2024-11-17 09:06:14.840674] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2835797 ] 00:06:10.164 [2024-11-17 09:06:14.973774] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.164 [2024-11-17 09:06:15.105526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.098 09:06:16 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:11.098 09:06:16 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:11.099 09:06:16 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:11.356 { 00:06:11.356 "version": "SPDK v25.01-pre git sha1 83e8405e4", 00:06:11.356 "fields": { 00:06:11.356 "major": 25, 00:06:11.356 "minor": 1, 00:06:11.356 "patch": 0, 00:06:11.356 "suffix": "-pre", 00:06:11.356 "commit": "83e8405e4" 00:06:11.356 } 00:06:11.356 } 00:06:11.614 09:06:16 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:11.614 09:06:16 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:11.614 09:06:16 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:11.614 09:06:16 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:11.614 09:06:16 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:11.614 09:06:16 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:11.614 09:06:16 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:11.614 09:06:16 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:11.614 09:06:16 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:11.614 09:06:16 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:11.614 09:06:16 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:11.614 09:06:16 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:11.614 09:06:16 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:11.614 09:06:16 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:11.614 09:06:16 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:11.614 09:06:16 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:11.614 09:06:16 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:11.614 09:06:16 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:11.614 09:06:16 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:11.614 09:06:16 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:11.614 09:06:16 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:11.614 09:06:16 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:11.614 09:06:16 app_cmdline -- common/autotest_common.sh@646 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:11.614 09:06:16 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:11.872 request: 00:06:11.872 { 00:06:11.872 "method": "env_dpdk_get_mem_stats", 00:06:11.872 "req_id": 1 00:06:11.872 } 00:06:11.872 Got JSON-RPC error response 00:06:11.872 response: 00:06:11.872 { 00:06:11.872 "code": -32601, 00:06:11.872 "message": "Method not found" 00:06:11.872 } 00:06:11.872 09:06:16 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:11.872 09:06:16 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:11.872 09:06:16 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:11.872 09:06:16 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:11.872 09:06:16 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2835797 00:06:11.872 09:06:16 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 2835797 ']' 00:06:11.872 09:06:16 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 2835797 00:06:11.872 09:06:16 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:11.872 09:06:16 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:11.872 09:06:16 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2835797 00:06:11.872 09:06:16 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:11.872 09:06:16 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:11.872 09:06:16 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2835797' 00:06:11.872 killing process with pid 2835797 00:06:11.872 09:06:16 app_cmdline -- common/autotest_common.sh@973 -- # kill 2835797 00:06:11.872 09:06:16 app_cmdline -- common/autotest_common.sh@978 -- # wait 2835797 00:06:14.400 00:06:14.400 real 0m4.514s 00:06:14.400 user 0m4.984s 00:06:14.400 sys 0m0.672s 00:06:14.400 09:06:19 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:14.400 09:06:19 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:14.400 ************************************ 00:06:14.400 END TEST app_cmdline 00:06:14.400 ************************************ 00:06:14.400 09:06:19 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:14.400 09:06:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:14.400 09:06:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:14.400 09:06:19 -- common/autotest_common.sh@10 -- # set +x 00:06:14.400 ************************************ 00:06:14.400 START TEST version 00:06:14.400 ************************************ 00:06:14.400 09:06:19 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:14.400 * Looking for test storage... 
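The cmdline test drives a target started with --rpcs-allowed spdk_get_version,rpc_get_methods, so those two methods answer normally while anything else — env_dpdk_get_mem_stats above — is refused with -32601, the JSON-RPC "Method not found" code. A sketch of the same three probes, assuming the rpc.py path used throughout this job:

# Sketch: probe the RPC allowlist the way cmdline.sh does above.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc spdk_get_version | jq -r .version       # allowed
$rpc rpc_get_methods | jq -r '.[]' | sort    # allowed -> rpc_get_methods, spdk_get_version
$rpc env_dpdk_get_mem_stats \
        || echo "expected: -32601 Method not found"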
00:06:14.400 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:14.400 09:06:19 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:14.400 09:06:19 version -- common/autotest_common.sh@1693 -- # lcov --version 00:06:14.400 09:06:19 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:14.400 09:06:19 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:14.400 09:06:19 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:14.400 09:06:19 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:14.400 09:06:19 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:14.400 09:06:19 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:14.400 09:06:19 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:14.400 09:06:19 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:14.400 09:06:19 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:14.400 09:06:19 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:14.400 09:06:19 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:14.400 09:06:19 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:14.400 09:06:19 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:14.401 09:06:19 version -- scripts/common.sh@344 -- # case "$op" in 00:06:14.401 09:06:19 version -- scripts/common.sh@345 -- # : 1 00:06:14.401 09:06:19 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:14.401 09:06:19 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:14.401 09:06:19 version -- scripts/common.sh@365 -- # decimal 1 00:06:14.401 09:06:19 version -- scripts/common.sh@353 -- # local d=1 00:06:14.401 09:06:19 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:14.401 09:06:19 version -- scripts/common.sh@355 -- # echo 1 00:06:14.401 09:06:19 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:14.401 09:06:19 version -- scripts/common.sh@366 -- # decimal 2 00:06:14.401 09:06:19 version -- scripts/common.sh@353 -- # local d=2 00:06:14.401 09:06:19 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:14.401 09:06:19 version -- scripts/common.sh@355 -- # echo 2 00:06:14.401 09:06:19 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:14.401 09:06:19 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:14.401 09:06:19 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:14.401 09:06:19 version -- scripts/common.sh@368 -- # return 0 00:06:14.401 09:06:19 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:14.401 09:06:19 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:14.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.401 --rc genhtml_branch_coverage=1 00:06:14.401 --rc genhtml_function_coverage=1 00:06:14.401 --rc genhtml_legend=1 00:06:14.401 --rc geninfo_all_blocks=1 00:06:14.401 --rc geninfo_unexecuted_blocks=1 00:06:14.401 00:06:14.401 ' 00:06:14.401 09:06:19 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:14.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.401 --rc genhtml_branch_coverage=1 00:06:14.401 --rc genhtml_function_coverage=1 00:06:14.401 --rc genhtml_legend=1 00:06:14.401 --rc geninfo_all_blocks=1 00:06:14.401 --rc geninfo_unexecuted_blocks=1 00:06:14.401 00:06:14.401 ' 00:06:14.401 09:06:19 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:14.401 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.401 --rc genhtml_branch_coverage=1 00:06:14.401 --rc genhtml_function_coverage=1 00:06:14.401 --rc genhtml_legend=1 00:06:14.401 --rc geninfo_all_blocks=1 00:06:14.401 --rc geninfo_unexecuted_blocks=1 00:06:14.401 00:06:14.401 ' 00:06:14.401 09:06:19 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:14.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.401 --rc genhtml_branch_coverage=1 00:06:14.401 --rc genhtml_function_coverage=1 00:06:14.401 --rc genhtml_legend=1 00:06:14.401 --rc geninfo_all_blocks=1 00:06:14.401 --rc geninfo_unexecuted_blocks=1 00:06:14.401 00:06:14.401 ' 00:06:14.401 09:06:19 version -- app/version.sh@17 -- # get_header_version major 00:06:14.401 09:06:19 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:14.401 09:06:19 version -- app/version.sh@14 -- # cut -f2 00:06:14.401 09:06:19 version -- app/version.sh@14 -- # tr -d '"' 00:06:14.401 09:06:19 version -- app/version.sh@17 -- # major=25 00:06:14.401 09:06:19 version -- app/version.sh@18 -- # get_header_version minor 00:06:14.401 09:06:19 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:14.401 09:06:19 version -- app/version.sh@14 -- # cut -f2 00:06:14.401 09:06:19 version -- app/version.sh@14 -- # tr -d '"' 00:06:14.401 09:06:19 version -- app/version.sh@18 -- # minor=1 00:06:14.401 09:06:19 version -- app/version.sh@19 -- # get_header_version patch 00:06:14.401 09:06:19 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:14.401 09:06:19 version -- app/version.sh@14 -- # cut -f2 00:06:14.401 09:06:19 version -- app/version.sh@14 -- # tr -d '"' 00:06:14.401 09:06:19 version -- app/version.sh@19 -- # patch=0 00:06:14.401 09:06:19 version -- app/version.sh@20 -- # get_header_version suffix 00:06:14.401 09:06:19 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:14.401 09:06:19 version -- app/version.sh@14 -- # cut -f2 00:06:14.401 09:06:19 version -- app/version.sh@14 -- # tr -d '"' 00:06:14.401 09:06:19 version -- app/version.sh@20 -- # suffix=-pre 00:06:14.401 09:06:19 version -- app/version.sh@22 -- # version=25.1 00:06:14.401 09:06:19 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:14.401 09:06:19 version -- app/version.sh@28 -- # version=25.1rc0 00:06:14.401 09:06:19 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:14.401 09:06:19 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:14.401 09:06:19 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:14.401 09:06:19 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:14.401 00:06:14.401 real 0m0.204s 00:06:14.401 user 0m0.127s 00:06:14.401 sys 0m0.102s 00:06:14.401 09:06:19 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:14.401 
09:06:19 version -- common/autotest_common.sh@10 -- # set +x 00:06:14.401 ************************************ 00:06:14.401 END TEST version 00:06:14.401 ************************************ 00:06:14.401 09:06:19 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:14.401 09:06:19 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:14.401 09:06:19 -- spdk/autotest.sh@194 -- # uname -s 00:06:14.401 09:06:19 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:14.401 09:06:19 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:14.401 09:06:19 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:14.401 09:06:19 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:14.401 09:06:19 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:14.401 09:06:19 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:14.401 09:06:19 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:14.401 09:06:19 -- common/autotest_common.sh@10 -- # set +x 00:06:14.401 09:06:19 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:14.401 09:06:19 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:06:14.401 09:06:19 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:06:14.401 09:06:19 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:06:14.401 09:06:19 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:06:14.401 09:06:19 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:06:14.401 09:06:19 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:14.401 09:06:19 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:14.401 09:06:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:14.401 09:06:19 -- common/autotest_common.sh@10 -- # set +x 00:06:14.659 ************************************ 00:06:14.659 START TEST nvmf_tcp 00:06:14.659 ************************************ 00:06:14.659 09:06:19 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:14.659 * Looking for test storage... 
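version.sh above builds its version string by grepping the SPDK_VERSION_* defines out of include/spdk/version.h (grep -E, cut -f2, tr -d '"'), then cross-checks against python3 -c 'import spdk; print(spdk.__version__)'. The extraction step as a stand-alone sketch, using the same in-tree header this job checked out:

# Sketch of the get_header_version pipeline exercised above.
hdr=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h
major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
patch=$(grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
suffix=$(grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
echo "$major.$minor patch=$patch suffix=$suffix"   # 25.1 patch=0 suffix=-pre in this build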
00:06:14.659 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:14.659 09:06:19 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:14.659 09:06:19 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:06:14.659 09:06:19 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:14.659 09:06:19 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:14.659 09:06:19 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:14.659 09:06:19 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:14.659 09:06:19 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:14.659 09:06:19 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:14.659 09:06:19 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:14.659 09:06:19 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:14.659 09:06:19 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:14.659 09:06:19 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:14.659 09:06:19 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:14.659 09:06:19 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:14.659 09:06:19 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:14.659 09:06:19 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:14.659 09:06:19 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:06:14.659 09:06:19 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:14.659 09:06:19 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:14.659 09:06:19 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:14.659 09:06:19 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:06:14.659 09:06:19 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:14.659 09:06:19 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:06:14.659 09:06:19 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:14.659 09:06:19 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:14.659 09:06:19 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:06:14.659 09:06:19 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:14.659 09:06:19 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:06:14.659 09:06:19 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:14.659 09:06:19 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:14.659 09:06:19 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:14.659 09:06:19 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:06:14.659 09:06:19 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:14.659 09:06:19 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:14.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.659 --rc genhtml_branch_coverage=1 00:06:14.659 --rc genhtml_function_coverage=1 00:06:14.659 --rc genhtml_legend=1 00:06:14.659 --rc geninfo_all_blocks=1 00:06:14.659 --rc geninfo_unexecuted_blocks=1 00:06:14.659 00:06:14.659 ' 00:06:14.659 09:06:19 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:14.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.659 --rc genhtml_branch_coverage=1 00:06:14.659 --rc genhtml_function_coverage=1 00:06:14.659 --rc genhtml_legend=1 00:06:14.659 --rc geninfo_all_blocks=1 00:06:14.659 --rc geninfo_unexecuted_blocks=1 00:06:14.659 00:06:14.659 ' 00:06:14.659 09:06:19 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:06:14.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.659 --rc genhtml_branch_coverage=1 00:06:14.659 --rc genhtml_function_coverage=1 00:06:14.659 --rc genhtml_legend=1 00:06:14.659 --rc geninfo_all_blocks=1 00:06:14.659 --rc geninfo_unexecuted_blocks=1 00:06:14.659 00:06:14.659 ' 00:06:14.659 09:06:19 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:14.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.659 --rc genhtml_branch_coverage=1 00:06:14.659 --rc genhtml_function_coverage=1 00:06:14.659 --rc genhtml_legend=1 00:06:14.659 --rc geninfo_all_blocks=1 00:06:14.659 --rc geninfo_unexecuted_blocks=1 00:06:14.659 00:06:14.659 ' 00:06:14.659 09:06:19 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:14.659 09:06:19 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:14.659 09:06:19 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:14.659 09:06:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:14.659 09:06:19 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:14.659 09:06:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:14.659 ************************************ 00:06:14.659 START TEST nvmf_target_core 00:06:14.659 ************************************ 00:06:14.659 09:06:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:14.659 * Looking for test storage... 00:06:14.659 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:14.659 09:06:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:14.659 09:06:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:06:14.659 09:06:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:14.918 09:06:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:14.918 09:06:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:14.918 09:06:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:14.918 09:06:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:14.918 09:06:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:06:14.918 09:06:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:06:14.918 09:06:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:06:14.918 09:06:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:06:14.918 09:06:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:06:14.918 09:06:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:06:14.918 09:06:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:06:14.918 09:06:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:14.918 09:06:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:06:14.918 09:06:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:06:14.918 09:06:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:14.918 09:06:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:14.918 09:06:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:06:14.919 09:06:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:06:14.919 09:06:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:14.919 09:06:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:06:14.919 09:06:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:06:14.919 09:06:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:06:14.919 09:06:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:06:14.919 09:06:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:14.919 09:06:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:06:14.919 09:06:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:06:14.919 09:06:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:14.919 09:06:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:14.919 09:06:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:06:14.919 09:06:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:14.919 09:06:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:14.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.919 --rc genhtml_branch_coverage=1 00:06:14.919 --rc genhtml_function_coverage=1 00:06:14.919 --rc genhtml_legend=1 00:06:14.919 --rc geninfo_all_blocks=1 00:06:14.919 --rc geninfo_unexecuted_blocks=1 00:06:14.919 00:06:14.919 ' 00:06:14.919 09:06:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:14.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.919 --rc genhtml_branch_coverage=1 00:06:14.919 --rc genhtml_function_coverage=1 00:06:14.919 --rc genhtml_legend=1 00:06:14.919 --rc geninfo_all_blocks=1 00:06:14.919 --rc geninfo_unexecuted_blocks=1 00:06:14.919 00:06:14.919 ' 00:06:14.919 09:06:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:14.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.919 --rc genhtml_branch_coverage=1 00:06:14.919 --rc genhtml_function_coverage=1 00:06:14.919 --rc genhtml_legend=1 00:06:14.919 --rc geninfo_all_blocks=1 00:06:14.919 --rc geninfo_unexecuted_blocks=1 00:06:14.919 00:06:14.919 ' 00:06:14.919 09:06:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:14.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.919 --rc genhtml_branch_coverage=1 00:06:14.919 --rc genhtml_function_coverage=1 00:06:14.919 --rc genhtml_legend=1 00:06:14.919 --rc geninfo_all_blocks=1 00:06:14.919 --rc geninfo_unexecuted_blocks=1 00:06:14.919 00:06:14.919 ' 00:06:14.919 09:06:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:14.919 09:06:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:14.919 09:06:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:14.919 09:06:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:14.919 09:06:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:14.919 09:06:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:14.919 09:06:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:14.919 09:06:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:14.919 09:06:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:14.919 09:06:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:14.919 09:06:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:14.919 09:06:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:14.919 09:06:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:14.919 09:06:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:14.919 09:06:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:14.919 09:06:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:14.919 09:06:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:14.919 09:06:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:14.919 09:06:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:14.919 09:06:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:14.919 09:06:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:14.919 09:06:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:06:14.919 09:06:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:14.919 09:06:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:14.919 09:06:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:14.919 09:06:19 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:14.919 09:06:19 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:14.919 09:06:19 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:14.919 09:06:19 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:14.919 09:06:19 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:14.919 09:06:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:06:14.919 09:06:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:14.919 09:06:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:14.919 09:06:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:14.919 09:06:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:14.919 09:06:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:14.919 09:06:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:14.919 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:14.919 09:06:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:14.919 09:06:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:14.919 09:06:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:14.919 09:06:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:14.919 09:06:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:14.919 09:06:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:06:14.919 09:06:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:14.919 09:06:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:14.919 09:06:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:14.919 09:06:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:14.919 
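The "[: : integer expression expected" complaint above comes from nvmf/common.sh line 33 running an arithmetic test, '[' '' -eq 1 ']', against a variable that is empty in this run; the script simply falls through to the next check, so it is noise rather than a failure. A defensive variant of that kind of test, sketched with a placeholder name (SOME_FLAG is hypothetical, not the variable common.sh actually reads):

# Sketch: default an unset/empty flag to 0 before an arithmetic test, which
# avoids the "integer expression expected" message seen above.
if [ "${SOME_FLAG:-0}" -eq 1 ]; then
        echo "flag enabled"
fi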
************************************ 00:06:14.919 START TEST nvmf_abort 00:06:14.919 ************************************ 00:06:14.919 09:06:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:14.919 * Looking for test storage... 00:06:14.919 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:14.919 09:06:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:14.919 09:06:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:06:14.919 09:06:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:14.919 09:06:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:14.919 09:06:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:14.919 09:06:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:14.919 09:06:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:14.919 09:06:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:06:14.919 09:06:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:06:14.919 09:06:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:06:14.919 09:06:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:06:14.919 09:06:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:06:14.919 09:06:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:06:14.919 09:06:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:06:14.919 09:06:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:14.920 09:06:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:06:14.920 09:06:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:06:14.920 09:06:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:14.920 09:06:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:14.920 09:06:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:06:14.920 09:06:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:06:14.920 09:06:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:14.920 09:06:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:06:14.920 09:06:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:06:14.920 09:06:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:06:14.920 09:06:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:06:14.920 09:06:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:14.920 09:06:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:06:14.920 09:06:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:06:14.920 09:06:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:14.920 09:06:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:14.920 09:06:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:06:14.920 09:06:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:14.920 09:06:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:14.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.920 --rc genhtml_branch_coverage=1 00:06:14.920 --rc genhtml_function_coverage=1 00:06:14.920 --rc genhtml_legend=1 00:06:14.920 --rc geninfo_all_blocks=1 00:06:14.920 --rc geninfo_unexecuted_blocks=1 00:06:14.920 00:06:14.920 ' 00:06:14.920 09:06:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:14.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.920 --rc genhtml_branch_coverage=1 00:06:14.920 --rc genhtml_function_coverage=1 00:06:14.920 --rc genhtml_legend=1 00:06:14.920 --rc geninfo_all_blocks=1 00:06:14.920 --rc geninfo_unexecuted_blocks=1 00:06:14.920 00:06:14.920 ' 00:06:14.920 09:06:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:14.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.920 --rc genhtml_branch_coverage=1 00:06:14.920 --rc genhtml_function_coverage=1 00:06:14.920 --rc genhtml_legend=1 00:06:14.920 --rc geninfo_all_blocks=1 00:06:14.920 --rc geninfo_unexecuted_blocks=1 00:06:14.920 00:06:14.920 ' 00:06:14.920 09:06:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:14.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.920 --rc genhtml_branch_coverage=1 00:06:14.920 --rc genhtml_function_coverage=1 00:06:14.920 --rc genhtml_legend=1 00:06:14.920 --rc geninfo_all_blocks=1 00:06:14.920 --rc geninfo_unexecuted_blocks=1 00:06:14.920 00:06:14.920 ' 00:06:14.920 09:06:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:14.920 09:06:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:06:14.920 09:06:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:06:14.920 09:06:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:14.920 09:06:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:14.920 09:06:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:14.920 09:06:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:14.920 09:06:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:14.920 09:06:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:14.920 09:06:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:14.920 09:06:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:14.920 09:06:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:14.920 09:06:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:14.920 09:06:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:14.920 09:06:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:14.920 09:06:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:14.920 09:06:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:14.920 09:06:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:14.920 09:06:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:14.920 09:06:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:06:14.920 09:06:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:14.920 09:06:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:14.920 09:06:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:14.920 09:06:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:14.920 09:06:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:14.920 09:06:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:14.920 09:06:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:06:14.920 09:06:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:14.920 09:06:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:06:14.920 09:06:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:14.920 09:06:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:14.920 09:06:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:15.179 09:06:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:15.179 09:06:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:15.179 09:06:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:15.179 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:15.179 09:06:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:15.179 09:06:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:15.179 09:06:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:15.179 09:06:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:15.179 09:06:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:06:15.179 09:06:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
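The repeated complaint '[: : integer expression expected' from nvmf/common.sh line 33 comes from the trace just above it: '[' '' -eq 1 ']' is a numeric test whose left operand expands to an empty string, and bash's -eq needs an integer on both sides. A minimal bash sketch of the failure and one hedged way to guard it (the variable name FLAG is hypothetical, not taken from the test suite):

  FLAG=''                                   # empty, as in the trace above
  [ "$FLAG" -eq 1 ] && echo enabled         # stderr: [: : integer expression expected
  [ "${FLAG:-0}" -eq 1 ] || echo disabled   # defaulting the empty value keeps the test quiet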
00:06:15.179 09:06:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:15.179 09:06:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:15.179 09:06:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:15.179 09:06:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:15.179 09:06:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:15.179 09:06:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:15.179 09:06:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:15.179 09:06:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:15.179 09:06:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:15.179 09:06:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:15.179 09:06:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:06:15.179 09:06:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:17.081 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:17.081 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:06:17.081 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:17.081 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:17.081 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:17.081 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:17.081 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:17.081 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:06:17.081 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:17.081 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:06:17.081 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:06:17.081 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:06:17.081 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:06:17.081 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:06:17.081 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:06:17.081 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:17.081 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:17.081 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:17.081 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:17.081 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:17.081 09:06:22 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:17.081 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:17.081 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:17.081 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:17.081 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:17.081 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:17.081 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:17.081 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:17.081 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:17.081 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:17.081 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:17.081 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:17.081 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:17.081 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:17.081 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:17.081 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:17.082 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:17.082 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:17.082 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:17.082 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:17.082 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:17.082 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:17.082 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:17.082 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:17.082 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:17.082 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:17.082 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:17.082 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:17.082 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:17.082 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:17.082 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:17.082 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:17.082 09:06:22 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:17.082 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:17.082 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:17.082 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:17.082 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:17.082 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:17.082 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:17.082 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:17.082 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:17.082 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:17.082 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:17.082 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:17.082 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:17.082 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:17.082 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:17.082 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:17.082 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:17.082 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:17.082 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:17.082 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:17.082 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:17.082 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:06:17.082 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:17.082 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:17.082 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:17.082 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:17.082 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:17.082 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:17.082 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:17.082 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:17.082 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:17.082 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:17.082 09:06:22 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:17.082 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:17.082 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:17.082 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:17.082 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:17.082 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:17.082 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:17.082 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:17.340 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:17.340 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:17.340 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:17.340 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:17.340 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:17.340 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:17.340 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:17.340 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:17.340 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:17.340 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.341 ms 00:06:17.340 00:06:17.340 --- 10.0.0.2 ping statistics --- 00:06:17.340 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:17.340 rtt min/avg/max/mdev = 0.341/0.341/0.341/0.000 ms 00:06:17.340 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:17.340 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:17.340 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms 00:06:17.340 00:06:17.340 --- 10.0.0.1 ping statistics --- 00:06:17.340 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:17.340 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:06:17.340 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:17.340 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:06:17.340 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:17.340 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:17.340 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:17.340 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:17.340 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:17.340 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:17.340 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:17.340 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:06:17.340 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:17.340 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:17.340 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:17.340 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=2838219 00:06:17.340 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:17.341 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 2838219 00:06:17.341 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 2838219 ']' 00:06:17.341 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.341 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:17.341 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.341 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:17.341 09:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:17.598 [2024-11-17 09:06:22.368535] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
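Condensed from the nvmf_tcp_init trace above: one E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace as the target side at 10.0.0.2/24, the other port (cvl_0_1) stays in the host namespace as the initiator at 10.0.0.1/24, port 4420 is opened with an iptables rule, and both directions are ping-verified. The same setup with the xtrace noise stripped (the SPDK_NVMF comment match that the script adds for later rule cleanup is omitted here):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side (host)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP into the initiator port
  ping -c 1 10.0.0.2                                                  # host -> namespaced target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> host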
00:06:17.598 [2024-11-17 09:06:22.368690] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:17.598 [2024-11-17 09:06:22.516902] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:17.856 [2024-11-17 09:06:22.653435] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:17.856 [2024-11-17 09:06:22.653506] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:17.856 [2024-11-17 09:06:22.653531] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:17.856 [2024-11-17 09:06:22.653554] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:17.856 [2024-11-17 09:06:22.653574] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:17.856 [2024-11-17 09:06:22.656196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:17.856 [2024-11-17 09:06:22.656297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:17.856 [2024-11-17 09:06:22.656302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:18.425 09:06:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:18.425 09:06:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:06:18.425 09:06:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:18.425 09:06:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:18.425 09:06:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:18.425 09:06:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:18.425 09:06:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:06:18.425 09:06:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.425 09:06:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:18.425 [2024-11-17 09:06:23.365160] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:18.425 09:06:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.425 09:06:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:18.425 09:06:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.425 09:06:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:18.685 Malloc0 00:06:18.685 09:06:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.685 09:06:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:18.685 09:06:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.685 09:06:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:18.685 Delay0 
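At this point the target stack for the abort test is in place: nvmf_tgt runs inside the namespace on core mask 0xE, and rpc_cmd (which forwards to scripts/rpc.py over /var/tmp/spdk.sock) creates the TCP transport, a 64 MiB / 4096-byte-block Malloc bdev, and a delay bdev layered on top so that I/Os stay outstanding long enough to be aborted. A sketch of the equivalent direct rpc.py calls; the latency-flag glosses are best-effort readings (values in microseconds), not taken from the log:

  # target was started as: ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256      # TCP transport, flags exactly as issued by abort.sh
  scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0               # 64 MiB RAM-backed bdev, 4096-byte blocks
  scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000                    # ~1 s added read/write latency (avg and p99)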
00:06:18.685 09:06:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.685 09:06:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:18.685 09:06:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.685 09:06:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:18.685 09:06:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.685 09:06:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:18.685 09:06:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.685 09:06:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:18.685 09:06:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.685 09:06:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:18.685 09:06:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.685 09:06:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:18.685 [2024-11-17 09:06:23.495188] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:18.685 09:06:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.685 09:06:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:18.685 09:06:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.685 09:06:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:18.685 09:06:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.685 09:06:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:18.942 [2024-11-17 09:06:23.702503] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:20.842 Initializing NVMe Controllers 00:06:20.842 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:20.842 controller IO queue size 128 less than required 00:06:20.842 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:06:20.842 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:20.842 Initialization complete. Launching workers. 
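The subsystem side then follows: cnode0 is created with any-host access and serial SPDK0, Delay0 is attached as namespace 1, the subsystem and the discovery service both listen on 10.0.0.2:4420, and the abort example then drives it from a single core with queue depth 128 for one second. A sketch of the same sequence (flag glosses are best-effort readings of the options, not taken from the log):

  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0      # -a: allow any host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0           # becomes NSID 1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  ./build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -c 0x1 -t 1 -l warning -q 128                                                # core 0, 1 s run, queue depth 128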
00:06:20.842 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 22817 00:06:20.842 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 22874, failed to submit 66 00:06:20.842 success 22817, unsuccessful 57, failed 0 00:06:20.842 09:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:20.842 09:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.842 09:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:20.842 09:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.842 09:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:20.842 09:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:20.842 09:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:20.842 09:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:06:20.842 09:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:20.842 09:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:06:20.842 09:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:20.842 09:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:20.842 rmmod nvme_tcp 00:06:20.842 rmmod nvme_fabrics 00:06:20.842 rmmod nvme_keyring 00:06:21.100 09:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:21.100 09:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:06:21.100 09:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:06:21.100 09:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 2838219 ']' 00:06:21.100 09:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 2838219 00:06:21.100 09:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 2838219 ']' 00:06:21.100 09:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 2838219 00:06:21.100 09:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:06:21.100 09:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:21.100 09:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2838219 00:06:21.100 09:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:21.100 09:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:21.100 09:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2838219' 00:06:21.100 killing process with pid 2838219 00:06:21.100 09:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 2838219 00:06:21.100 09:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 2838219 00:06:22.476 09:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:22.476 09:06:27 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:22.476 09:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:22.476 09:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:06:22.476 09:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:06:22.476 09:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:22.476 09:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:06:22.476 09:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:22.476 09:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:22.476 09:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:22.476 09:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:22.476 09:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:24.379 09:06:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:24.379 00:06:24.379 real 0m9.399s 00:06:24.379 user 0m15.568s 00:06:24.379 sys 0m2.787s 00:06:24.379 09:06:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:24.379 09:06:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:24.379 ************************************ 00:06:24.379 END TEST nvmf_abort 00:06:24.379 ************************************ 00:06:24.379 09:06:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:24.379 09:06:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:24.379 09:06:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:24.379 09:06:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:24.379 ************************************ 00:06:24.379 START TEST nvmf_ns_hotplug_stress 00:06:24.379 ************************************ 00:06:24.379 09:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:24.379 * Looking for test storage... 
00:06:24.379 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:24.379 09:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:24.379 09:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:06:24.379 09:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:24.379 09:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:24.379 09:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:24.379 09:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:24.379 09:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:24.379 09:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:06:24.379 09:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:06:24.379 09:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:06:24.379 09:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:06:24.379 09:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:06:24.379 09:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:06:24.379 09:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:06:24.379 09:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:24.379 09:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:06:24.379 09:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:06:24.379 09:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:24.379 09:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:24.379 09:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:06:24.379 09:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:06:24.379 09:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:24.379 09:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:06:24.379 09:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:06:24.379 09:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:06:24.379 09:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:06:24.379 09:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:24.379 09:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:06:24.379 09:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:06:24.379 09:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:24.379 09:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:24.379 09:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:06:24.379 09:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:24.379 09:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:24.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.379 --rc genhtml_branch_coverage=1 00:06:24.379 --rc genhtml_function_coverage=1 00:06:24.379 --rc genhtml_legend=1 00:06:24.379 --rc geninfo_all_blocks=1 00:06:24.379 --rc geninfo_unexecuted_blocks=1 00:06:24.379 00:06:24.379 ' 00:06:24.379 09:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:24.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.379 --rc genhtml_branch_coverage=1 00:06:24.379 --rc genhtml_function_coverage=1 00:06:24.379 --rc genhtml_legend=1 00:06:24.379 --rc geninfo_all_blocks=1 00:06:24.379 --rc geninfo_unexecuted_blocks=1 00:06:24.379 00:06:24.379 ' 00:06:24.379 09:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:24.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.379 --rc genhtml_branch_coverage=1 00:06:24.379 --rc genhtml_function_coverage=1 00:06:24.379 --rc genhtml_legend=1 00:06:24.379 --rc geninfo_all_blocks=1 00:06:24.379 --rc geninfo_unexecuted_blocks=1 00:06:24.379 00:06:24.379 ' 00:06:24.379 09:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:24.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.379 --rc genhtml_branch_coverage=1 00:06:24.379 --rc genhtml_function_coverage=1 00:06:24.379 --rc genhtml_legend=1 00:06:24.379 --rc geninfo_all_blocks=1 00:06:24.379 --rc geninfo_unexecuted_blocks=1 00:06:24.379 00:06:24.379 ' 00:06:24.379 09:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:24.379 09:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:24.379 09:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:24.379 09:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:24.379 09:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:24.379 09:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:24.379 09:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:24.379 09:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:24.380 09:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:24.380 09:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:24.380 09:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:24.380 09:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:24.638 09:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:24.638 09:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:24.638 09:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:24.638 09:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:24.638 09:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:24.638 09:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:24.638 09:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:24.638 09:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:06:24.638 09:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:24.639 09:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:24.639 09:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:24.639 09:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.639 09:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.639 09:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.639 09:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:24.639 09:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.639 09:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:06:24.639 09:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:24.639 09:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:24.639 09:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:24.639 09:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:24.639 09:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:24.639 09:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:24.639 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:24.639 09:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:24.639 09:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:24.639 09:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:24.639 09:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:24.639 09:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:24.639 09:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:24.639 09:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:24.639 09:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:24.639 09:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:24.639 09:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:24.639 09:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:24.639 09:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:24.639 09:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:24.639 09:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:24.639 09:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:24.639 09:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:06:24.639 09:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:26.632 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:26.632 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:06:26.632 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:26.632 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:26.632 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:26.632 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:26.632 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:26.632 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:06:26.632 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:26.632 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:06:26.632 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:06:26.632 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:06:26.632 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:06:26.632 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:06:26.632 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:06:26.632 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:26.632 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:26.632 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:26.632 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:26.632 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:26.632 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:26.632 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:26.632 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:26.632 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:26.632 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:26.632 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:26.632 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:26.632 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:26.632 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:26.632 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:26.632 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:26.632 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:26.632 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:26.632 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:26.632 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:26.632 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:26.632 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:26.632 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:26.632 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:26.632 
09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:26.632 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:26.632 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:26.632 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:26.632 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:26.632 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:26.632 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:26.632 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:26.632 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:26.632 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:26.632 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:26.632 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:26.632 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:26.632 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:26.632 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:26.632 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:26.632 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:26.632 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:26.632 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:26.632 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:26.632 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:26.632 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:26.632 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:26.632 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:26.632 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:26.632 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:26.632 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:26.632 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:26.632 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:26.632 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:26.632 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:26.632 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:26.632 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:26.632 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:26.632 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:06:26.632 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:26.632 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:26.632 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:26.632 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:26.632 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:26.632 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:26.632 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:26.632 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:26.632 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:26.632 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:26.632 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:26.632 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:26.632 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:26.632 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:26.633 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:26.633 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:26.633 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:26.633 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:26.633 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:26.633 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:26.633 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:26.633 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:26.633 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:26.633 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:26.633 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:26.633 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:26.633 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:26.633 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.240 ms 00:06:26.633 00:06:26.633 --- 10.0.0.2 ping statistics --- 00:06:26.633 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:26.633 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:06:26.633 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:26.633 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:26.633 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:06:26.633 00:06:26.633 --- 10.0.0.1 ping statistics --- 00:06:26.633 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:26.633 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:06:26.633 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:26.633 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:06:26.633 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:26.633 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:26.633 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:26.633 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:26.633 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:26.633 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:26.633 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:26.891 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:26.891 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:26.891 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:26.891 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:26.891 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=2840763 00:06:26.891 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:26.891 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 2840763 00:06:26.891 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 
2840763 ']' 00:06:26.891 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.891 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:26.891 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.891 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:26.891 09:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:26.891 [2024-11-17 09:06:31.758336] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:06:26.891 [2024-11-17 09:06:31.758534] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:27.148 [2024-11-17 09:06:31.907628] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:27.148 [2024-11-17 09:06:32.043067] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:27.148 [2024-11-17 09:06:32.043158] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:27.148 [2024-11-17 09:06:32.043189] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:27.148 [2024-11-17 09:06:32.043217] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:27.148 [2024-11-17 09:06:32.043238] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
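For reference, the nvmftestinit / nvmfappstart sequence traced above (nvmf/common.sh@265-@291 and @508-@510) condenses to the shell below. This is a sketch, not the verbatim contents of common.sh: it assumes root, the same E810 interface names and addresses the trace reports (cvl_0_0 as the target side, cvl_0_1 as the initiator side), paths relative to the spdk tree instead of the absolute Jenkins workspace path, and a simple socket poll standing in for waitforlisten.

  # Move the target-side port into its own network namespace and address both ends.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # Allow NVMe/TCP (port 4420) in on the initiator-side interface, then check both directions.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

  # Start the target inside the namespace and wait for its RPC socket to appear.
  modprobe nvme-tcp
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  while [ ! -S /var/tmp/spdk.sock ]; do sleep 1; done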
00:06:27.148 [2024-11-17 09:06:32.045973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:27.148 [2024-11-17 09:06:32.046068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:27.148 [2024-11-17 09:06:32.046087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:27.714 09:06:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:27.714 09:06:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:06:27.714 09:06:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:27.714 09:06:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:27.714 09:06:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:27.972 09:06:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:27.972 09:06:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:06:27.972 09:06:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:28.230 [2024-11-17 09:06:32.991604] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:28.230 09:06:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:28.488 09:06:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:28.745 [2024-11-17 09:06:33.533589] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:28.746 09:06:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:29.003 09:06:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:29.261 Malloc0 00:06:29.261 09:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:29.519 Delay0 00:06:29.519 09:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:29.777 09:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:06:30.034 NULL1 00:06:30.034 09:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:06:30.292 09:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2841270 00:06:30.292 09:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:06:30.292 09:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2841270 00:06:30.292 09:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:30.550 09:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:30.808 09:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:06:30.808 09:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:06:31.067 true 00:06:31.067 09:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2841270 00:06:31.067 09:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:31.325 09:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:31.583 09:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:06:31.583 09:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:06:31.841 true 00:06:31.841 09:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2841270 00:06:31.841 09:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:32.099 09:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:32.664 09:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:06:32.664 09:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:06:32.664 true 00:06:32.664 09:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2841270 00:06:32.664 09:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:34.038 Read completed with error (sct=0, sc=11) 00:06:34.038 09:06:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:34.038 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:34.038 09:06:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:06:34.038 09:06:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:06:34.296 true 00:06:34.296 09:06:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2841270 00:06:34.296 09:06:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:34.553 09:06:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:34.811 09:06:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:06:34.811 09:06:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:35.069 true 00:06:35.069 09:06:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2841270 00:06:35.069 09:06:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:35.326 09:06:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:35.585 09:06:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:06:35.585 09:06:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:06:35.841 true 00:06:35.841 09:06:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2841270 00:06:35.841 09:06:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:36.774 09:06:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:36.774 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:37.033 09:06:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:37.033 09:06:41 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:06:37.291 true 00:06:37.291 09:06:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2841270 00:06:37.291 09:06:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:37.548 09:06:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:37.806 09:06:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:06:37.806 09:06:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:06:38.064 true 00:06:38.064 09:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2841270 00:06:38.064 09:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:38.322 09:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:38.887 09:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:06:38.887 09:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:06:38.887 true 00:06:38.887 09:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2841270 00:06:38.887 09:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:39.821 09:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:39.821 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:40.079 09:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:06:40.079 09:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:06:40.336 true 00:06:40.336 09:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2841270 00:06:40.337 09:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:40.595 09:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:41.161 09:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:06:41.161 09:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:06:41.161 true 00:06:41.161 09:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2841270 00:06:41.161 09:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:41.726 09:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:41.726 09:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:06:41.726 09:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:06:41.984 true 00:06:41.984 09:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2841270 00:06:41.984 09:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:42.918 09:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:42.918 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:43.484 09:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:06:43.484 09:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:06:43.484 true 00:06:43.484 09:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2841270 00:06:43.484 09:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:43.743 09:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:44.001 09:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:06:44.001 09:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:06:44.257 true 00:06:44.257 09:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2841270 00:06:44.257 09:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:45.189 09:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:45.189 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:45.447 09:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:06:45.447 09:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:06:45.705 true 00:06:45.705 09:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2841270 00:06:45.705 09:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:45.963 09:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:46.222 09:06:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:06:46.222 09:06:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:06:46.480 true 00:06:46.480 09:06:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2841270 00:06:46.480 09:06:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:47.103 09:06:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:47.103 09:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:06:47.103 09:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:06:47.362 true 00:06:47.362 09:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2841270 00:06:47.362 09:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:48.295 09:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:48.295 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:48.553 09:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:06:48.553 09:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:06:49.118 true 00:06:49.118 09:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2841270 00:06:49.118 09:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:49.376 09:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:49.635 09:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:06:49.635 09:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:06:49.892 true 00:06:49.892 09:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2841270 00:06:49.892 09:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:50.150 09:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:50.408 09:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:06:50.408 09:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:06:50.666 true 00:06:50.666 09:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2841270 00:06:50.666 09:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:51.599 09:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:51.599 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:51.599 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:51.858 09:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:06:51.858 09:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:06:52.116 true 00:06:52.116 09:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2841270 00:06:52.116 09:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:52.374 09:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:52.634 09:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:06:52.634 09:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:06:52.892 true 00:06:52.892 09:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2841270 00:06:52.892 09:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:53.823 09:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:53.823 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:54.081 09:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:06:54.081 09:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:06:54.081 true 00:06:54.338 09:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2841270 00:06:54.338 09:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:54.595 09:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:54.853 09:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:06:54.853 09:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:06:55.143 true 00:06:55.143 09:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2841270 00:06:55.143 09:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:55.427 09:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:55.684 09:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:06:55.684 09:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:06:55.942 true 00:06:55.942 09:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2841270 00:06:55.942 09:07:00 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:56.875 09:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:56.875 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:56.875 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:57.133 09:07:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:06:57.133 09:07:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:06:57.391 true 00:06:57.391 09:07:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2841270 00:06:57.391 09:07:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:57.648 09:07:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:57.906 09:07:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:06:57.906 09:07:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:06:58.163 true 00:06:58.163 09:07:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2841270 00:06:58.163 09:07:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:59.095 09:07:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:59.095 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:59.352 09:07:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:06:59.352 09:07:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:06:59.610 true 00:06:59.610 09:07:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2841270 00:06:59.610 09:07:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:59.868 09:07:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:00.127 09:07:05 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:07:00.127 09:07:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:07:00.387 true 00:07:00.387 09:07:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2841270 00:07:00.387 09:07:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:00.646 09:07:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:00.646 Initializing NVMe Controllers 00:07:00.646 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:00.646 Controller IO queue size 128, less than required. 00:07:00.646 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:00.646 Controller IO queue size 128, less than required. 00:07:00.646 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:00.646 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:00.646 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:07:00.646 Initialization complete. Launching workers. 00:07:00.646 ======================================================== 00:07:00.646 Latency(us) 00:07:00.646 Device Information : IOPS MiB/s Average min max 00:07:00.646 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 449.51 0.22 107248.67 3312.95 1015406.93 00:07:00.646 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 6143.47 3.00 20771.41 5210.85 491042.19 00:07:00.646 ======================================================== 00:07:00.646 Total : 6592.98 3.22 26667.42 3312.95 1015406.93 00:07:00.646 00:07:00.904 09:07:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:07:00.904 09:07:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:07:01.162 true 00:07:01.162 09:07:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2841270 00:07:01.162 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2841270) - No such process 00:07:01.162 09:07:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2841270 00:07:01.162 09:07:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:01.419 09:07:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:01.677 09:07:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 
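The target provisioning that precedes the stress loop above is traced at ns_hotplug_stress.sh@27-@42 and amounts to the RPC sequence below. A condensed sketch: rpc_py and PERF_PID mirror the variables in the trace, the tool paths are shortened to be relative to the spdk tree, and the comments are editorial.

  rpc_py=./scripts/rpc.py

  $rpc_py nvmf_create_transport -t tcp -o -u 8192
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc_py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

  $rpc_py bdev_malloc_create 32 512 -b Malloc0          # 32 MB malloc bdev, 512-byte blocks
  # Delay0 layers large artificial I/O latencies on top of Malloc0.
  $rpc_py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  $rpc_py bdev_null_create NULL1 1000 512               # 1000 MB null bdev, resized during the loop
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

  # 30 s of queued random reads over NVMe/TCP while namespaces are toggled underneath it.
  ./build/bin/spdk_nvme_perf -c 0x1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 30 -q 128 -w randread -o 512 -Q 1000 &
  PERF_PID=$!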
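The hotplug loop itself, traced repeatedly at ns_hotplug_stress.sh@44-@50 above until kill -0 finally reports "No such process", detaches and re-adds namespace 1 and grows NULL1 by 1 MB per pass for as long as the perf job is alive; the "Message suppressed 999 times: Read completed with error" lines are the perf job's reads completing with an error status while a namespace is momentarily gone. Reconstructed as a sketch from the xtrace (the loop form is inferred, not the verbatim script):

  null_size=1000
  while kill -0 "$PERF_PID"; do                             # keep going while spdk_nvme_perf is running
      $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
      $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
      null_size=$((null_size + 1))
      $rpc_py bdev_null_resize NULL1 "$null_size"           # 1000 -> 1030 over the 30 s run above
  done
  wait "$PERF_PID"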
00:07:01.677 09:07:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:07:01.677 09:07:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:07:01.677 09:07:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:01.677 09:07:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:07:01.935 null0 00:07:01.935 09:07:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:01.935 09:07:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:01.935 09:07:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:07:02.193 null1 00:07:02.193 09:07:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:02.193 09:07:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:02.193 09:07:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:07:02.451 null2 00:07:02.451 09:07:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:02.451 09:07:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:02.451 09:07:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:07:02.708 null3 00:07:02.708 09:07:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:02.708 09:07:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:02.708 09:07:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:07:02.966 null4 00:07:02.966 09:07:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:02.966 09:07:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:02.966 09:07:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:07:03.224 null5 00:07:03.481 09:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:03.481 09:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:03.481 09:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:07:03.738 null6 00:07:03.738 09:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # 
(( ++i )) 00:07:03.738 09:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:03.738 09:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:07:03.997 null7 00:07:03.997 09:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:03.997 09:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:03.997 09:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:07:03.997 09:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:03.997 09:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:03.997 09:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:07:03.997 09:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:03.997 09:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:07:03.997 09:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:03.997 09:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:03.997 09:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:03.997 09:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:03.997 09:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:03.997 09:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:07:03.997 09:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:03.997 09:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:07:03.997 09:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:03.997 09:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:03.997 09:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:03.997 09:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:03.997 09:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:03.997 09:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:07:03.997 09:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:03.997 09:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:07:03.997 09:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:03.997 09:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:03.997 09:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:03.997 09:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:03.997 09:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:03.997 09:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:07:03.997 09:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:03.997 09:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:03.997 09:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:07:03.997 09:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:03.997 09:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:03.997 09:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:03.997 09:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:03.997 09:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:07:03.997 09:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:03.997 09:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:07:03.997 09:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:03.997 09:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:03.997 09:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:03.997 09:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:03.997 09:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:03.997 09:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:07:03.997 09:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:03.997 09:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:07:03.998 09:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:03.998 09:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:03.998 09:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:03.998 09:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:03.998 09:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:03.998 09:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:07:03.998 09:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:03.998 09:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:07:03.998 09:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:03.998 09:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:03.998 09:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:03.998 09:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:03.998 09:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
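The workers themselves are forked in the background, one per null bdev, and the parent shell collects their PIDs so it can wait on all of them (the eight-PID wait shows up a few entries further down). The following is a sketch consistent with the @59-@60 and @62-@66 trace lines, with nthreads=8 inferred from the eight PIDs on that wait line.

# Create eight 100 MB null bdevs with a 4096-byte block size, then run one
# add_remove worker per bdev in the background and wait for all of them.
nthreads=8
pids=()
for ((i = 0; i < nthreads; i++)); do
	"$rpc_py" bdev_null_create "null$i" 100 4096
done
for ((i = 0; i < nthreads; i++)); do
	add_remove "$((i + 1))" "null$i" &
	pids+=("$!")
done
wait "${pids[@]}"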
00:07:03.998 09:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:07:03.998 09:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:03.998 09:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:07:03.998 09:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:03.998 09:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:03.998 09:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2845345 2845346 2845348 2845350 2845352 2845354 2845356 2845358 00:07:03.998 09:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:03.998 09:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:04.256 09:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:04.256 09:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:04.256 09:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:04.256 09:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:04.256 09:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:04.256 09:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:04.256 09:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:04.256 09:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:04.513 09:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:04.513 09:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.513 09:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:04.513 09:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:04.513 09:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.513 09:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:04.513 09:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:04.513 09:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.513 09:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:04.513 09:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:04.513 09:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.513 09:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:04.513 09:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:04.513 09:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.513 09:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:04.513 09:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:04.513 09:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.513 09:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:04.513 09:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:04.513 09:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.513 09:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:04.513 09:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:04.513 09:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.513 09:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:04.770 09:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:04.770 09:07:09 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:04.770 09:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:04.770 09:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:04.770 09:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:04.770 09:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:04.770 09:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:04.770 09:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:05.027 09:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:05.028 09:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.028 09:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:05.028 09:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:05.028 09:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.028 09:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:05.028 09:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:05.028 09:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.028 09:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:05.028 09:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:05.028 09:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.028 09:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:05.028 09:07:10 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:05.028 09:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.028 09:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:05.028 09:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:05.028 09:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.028 09:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:05.028 09:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:05.028 09:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.028 09:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:05.028 09:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:05.028 09:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.028 09:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:05.286 09:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:05.543 09:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:05.543 09:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:05.543 09:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:05.543 09:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:05.543 09:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:05.543 09:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:05.544 09:07:10 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:05.801 09:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:05.801 09:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.801 09:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:05.801 09:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:05.801 09:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.801 09:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:05.801 09:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:05.801 09:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.801 09:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:05.801 09:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:05.801 09:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.801 09:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:05.801 09:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:05.801 09:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.802 09:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:05.802 09:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:05.802 09:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.802 09:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:05.802 09:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:05.802 09:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.802 09:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 
00:07:05.802 09:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:05.802 09:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.802 09:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:06.060 09:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:06.060 09:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:06.060 09:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:06.060 09:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:06.060 09:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:06.060 09:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:06.060 09:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:06.060 09:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:06.317 09:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.317 09:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.317 09:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:06.317 09:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.317 09:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.317 09:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:06.317 09:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.317 09:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.317 09:07:11 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:06.317 09:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.317 09:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.317 09:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:06.317 09:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.317 09:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.317 09:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:06.317 09:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.317 09:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.317 09:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.317 09:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:06.317 09:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.317 09:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:06.317 09:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.317 09:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.317 09:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:06.575 09:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:06.575 09:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:06.575 09:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:06.575 09:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:06.575 09:07:11 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:06.575 09:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:06.575 09:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:06.575 09:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:06.834 09:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.834 09:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.834 09:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:06.834 09:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.834 09:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.834 09:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:06.834 09:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.834 09:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.834 09:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:06.834 09:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.834 09:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.834 09:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:06.834 09:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.834 09:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.834 09:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:06.834 09:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.834 09:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.834 09:07:11 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:06.834 09:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.834 09:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.834 09:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:06.834 09:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.834 09:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.834 09:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:07.399 09:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:07.399 09:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:07.399 09:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:07.399 09:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:07.399 09:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:07.399 09:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:07.399 09:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:07.399 09:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:07.657 09:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.657 09:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.657 09:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:07.657 09:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.657 09:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.657 09:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:07.657 09:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.657 09:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.657 09:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:07.657 09:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.657 09:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.657 09:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:07.657 09:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.657 09:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.657 09:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:07.657 09:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.657 09:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.657 09:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:07.657 09:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.657 09:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.657 09:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:07.657 09:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.657 09:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.657 09:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:07.916 09:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:07.916 09:07:12 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:07.916 09:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:07.916 09:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:07.916 09:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:07.916 09:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:07.916 09:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:07.916 09:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:08.174 09:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.174 09:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.174 09:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:08.174 09:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.174 09:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.174 09:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:08.174 09:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.174 09:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.174 09:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:08.174 09:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.174 09:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.174 09:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:08.174 09:07:13 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.174 09:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.174 09:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:08.174 09:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.174 09:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.174 09:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.174 09:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:08.174 09:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.174 09:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:08.174 09:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.174 09:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.174 09:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:08.431 09:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:08.431 09:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:08.431 09:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:08.431 09:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:08.431 09:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:08.431 09:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:08.431 09:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:08.431 09:07:13 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:08.689 09:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.689 09:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.689 09:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:08.689 09:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.689 09:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.689 09:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:08.689 09:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.689 09:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.689 09:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:08.689 09:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.689 09:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.689 09:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:08.689 09:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.689 09:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.689 09:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:08.689 09:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.689 09:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.689 09:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:08.689 09:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.689 09:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.689 09:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 
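Nothing in the trace inspects the subsystem between cycles, but when reproducing this by hand it can help to confirm that a namespace really was attached or detached. The check below is not part of ns_hotplug_stress.sh; it assumes jq is available and uses the standard nvmf_get_subsystems RPC to list the NSIDs currently attached to cnode1.

# List the namespace IDs currently attached to nqn.2016-06.io.spdk:cnode1.
"$rpc_py" nvmf_get_subsystems \
	| jq -r '.[] | select(.nqn == "nqn.2016-06.io.spdk:cnode1") | .namespaces[].nsid'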
00:07:08.689 09:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.689 09:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.689 09:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:08.947 09:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:08.947 09:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:08.947 09:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:08.947 09:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:08.947 09:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:08.947 09:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:08.947 09:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:08.947 09:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:09.512 09:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.512 09:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.512 09:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:09.512 09:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.512 09:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.512 09:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:09.512 09:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.512 09:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.512 09:07:14 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:09.512 09:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.512 09:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.512 09:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:09.512 09:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.512 09:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.512 09:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:09.512 09:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.512 09:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.512 09:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:09.512 09:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.512 09:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.512 09:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:09.512 09:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.512 09:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.512 09:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:09.770 09:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:09.770 09:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:09.770 09:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:09.770 09:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:09.770 09:07:14 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:09.770 09:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:09.770 09:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:09.770 09:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:10.029 09:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.029 09:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.029 09:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.029 09:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.029 09:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.029 09:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.029 09:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.029 09:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.029 09:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.029 09:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.029 09:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.029 09:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.029 09:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.029 09:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.029 09:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.029 09:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.029 09:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:07:10.029 09:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:07:10.029 09:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:10.029 09:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:07:10.029 09:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:10.029 09:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:07:10.029 09:07:14 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:10.029 09:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:10.029 rmmod nvme_tcp 00:07:10.029 rmmod nvme_fabrics 00:07:10.029 rmmod nvme_keyring 00:07:10.029 09:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:10.029 09:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:07:10.029 09:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:07:10.029 09:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 2840763 ']' 00:07:10.029 09:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 2840763 00:07:10.029 09:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 2840763 ']' 00:07:10.029 09:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 2840763 00:07:10.029 09:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:07:10.029 09:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:10.029 09:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2840763 00:07:10.029 09:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:10.029 09:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:10.029 09:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2840763' 00:07:10.029 killing process with pid 2840763 00:07:10.029 09:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 2840763 00:07:10.029 09:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 2840763 00:07:11.403 09:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:11.403 09:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:11.403 09:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:11.403 09:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:07:11.403 09:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:07:11.403 09:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:07:11.403 09:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:11.403 09:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:11.403 09:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:11.403 09:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:11.403 09:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 
15> /dev/null' 00:07:11.403 09:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:13.306 09:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:13.306 00:07:13.306 real 0m48.968s 00:07:13.307 user 3m44.916s 00:07:13.307 sys 0m16.210s 00:07:13.307 09:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:13.307 09:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:13.307 ************************************ 00:07:13.307 END TEST nvmf_ns_hotplug_stress 00:07:13.307 ************************************ 00:07:13.307 09:07:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:13.307 09:07:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:13.307 09:07:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:13.307 09:07:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:13.307 ************************************ 00:07:13.307 START TEST nvmf_delete_subsystem 00:07:13.307 ************************************ 00:07:13.307 09:07:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:13.307 * Looking for test storage... 00:07:13.307 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:13.307 09:07:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:13.307 09:07:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:07:13.307 09:07:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:13.567 09:07:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:13.567 09:07:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:13.567 09:07:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:13.567 09:07:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:13.567 09:07:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:07:13.567 09:07:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:07:13.567 09:07:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:07:13.567 09:07:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:07:13.567 09:07:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:07:13.567 09:07:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:07:13.567 09:07:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:07:13.567 09:07:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:13.567 09:07:18 
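Condensed, the nvmftestfini/nvmfcleanup path traced in the two entries above amounts to roughly the teardown below. This is a sketch of the logic, not the library source; the kill/wait handling and the netns deletion inside _remove_spdk_ns are assumptions based on what the trace prints:

    sync
    modprobe -v -r nvme-tcp            # also drops nvme_fabrics / nvme_keyring, per the rmmod lines above
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid"                    # killprocess 2840763 in the trace
    wait "$nvmfpid" 2>/dev/null || true
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # iptr: strip the test's firewall rules
    ip netns delete cvl_0_0_ns_spdk                        # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1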
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:07:13.567 09:07:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:07:13.567 09:07:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:13.567 09:07:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:13.567 09:07:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:07:13.567 09:07:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:07:13.567 09:07:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:13.567 09:07:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:07:13.567 09:07:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:07:13.567 09:07:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:07:13.567 09:07:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:07:13.567 09:07:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:13.567 09:07:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:07:13.567 09:07:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:07:13.567 09:07:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:13.567 09:07:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:13.567 09:07:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:07:13.567 09:07:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:13.567 09:07:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:13.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.567 --rc genhtml_branch_coverage=1 00:07:13.567 --rc genhtml_function_coverage=1 00:07:13.567 --rc genhtml_legend=1 00:07:13.567 --rc geninfo_all_blocks=1 00:07:13.567 --rc geninfo_unexecuted_blocks=1 00:07:13.567 00:07:13.567 ' 00:07:13.567 09:07:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:13.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.567 --rc genhtml_branch_coverage=1 00:07:13.567 --rc genhtml_function_coverage=1 00:07:13.567 --rc genhtml_legend=1 00:07:13.567 --rc geninfo_all_blocks=1 00:07:13.567 --rc geninfo_unexecuted_blocks=1 00:07:13.567 00:07:13.567 ' 00:07:13.567 09:07:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:13.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.567 --rc genhtml_branch_coverage=1 00:07:13.567 --rc genhtml_function_coverage=1 00:07:13.567 --rc genhtml_legend=1 00:07:13.567 --rc geninfo_all_blocks=1 00:07:13.567 --rc geninfo_unexecuted_blocks=1 00:07:13.567 00:07:13.567 ' 00:07:13.567 09:07:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:13.567 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.567 --rc genhtml_branch_coverage=1 00:07:13.567 --rc genhtml_function_coverage=1 00:07:13.567 --rc genhtml_legend=1 00:07:13.567 --rc geninfo_all_blocks=1 00:07:13.567 --rc geninfo_unexecuted_blocks=1 00:07:13.567 00:07:13.567 ' 00:07:13.567 09:07:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:13.567 09:07:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:07:13.567 09:07:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:13.567 09:07:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:13.567 09:07:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:13.567 09:07:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:13.567 09:07:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:13.567 09:07:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:13.567 09:07:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:13.567 09:07:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:13.567 09:07:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:13.567 09:07:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:13.567 09:07:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:13.567 09:07:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:13.567 09:07:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:13.567 09:07:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:13.567 09:07:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:13.567 09:07:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:13.567 09:07:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:13.567 09:07:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:07:13.567 09:07:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:13.567 09:07:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:13.567 09:07:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:13.567 09:07:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.567 09:07:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.568 09:07:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.568 09:07:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:07:13.568 09:07:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.568 09:07:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:07:13.568 09:07:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:13.568 09:07:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:13.568 09:07:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:13.568 09:07:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:13.568 09:07:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:13.568 09:07:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:13.568 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:13.568 09:07:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:13.568 09:07:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:13.568 09:07:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:13.568 09:07:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:07:13.568 09:07:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:13.568 09:07:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:13.568 09:07:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:13.568 09:07:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:13.568 09:07:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:13.568 09:07:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:13.568 09:07:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:13.568 09:07:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:13.568 09:07:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:13.568 09:07:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:13.568 09:07:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:07:13.568 09:07:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:15.469 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:15.469 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:07:15.469 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:15.469 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:15.469 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:15.469 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:15.469 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:15.469 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:07:15.469 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:15.469 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:07:15.469 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:07:15.469 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:07:15.469 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:07:15.469 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:07:15.469 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:07:15.469 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:15.469 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:15.469 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:15.469 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:15.469 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:15.469 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:15.469 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:15.469 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:15.469 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:15.469 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:15.469 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:15.469 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:15.469 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:15.469 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:15.469 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:15.469 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:15.469 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:15.469 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:15.469 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:15.469 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:15.469 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:15.469 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:15.469 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:15.469 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:15.469 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:15.469 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:15.469 
09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:15.469 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:15.469 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:15.469 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:15.469 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:15.469 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:15.469 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:15.469 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:15.469 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:15.469 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:15.469 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:15.469 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:15.469 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:15.469 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:15.469 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:15.469 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:15.469 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:15.469 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:15.470 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:15.470 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:15.470 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:15.470 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:15.470 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:15.470 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:15.470 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:15.470 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:15.470 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:15.470 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:15.470 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:15.470 Found net devices under 0000:0a:00.1: cvl_0_1 
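The device discovery that just printed the two "Found net devices under ..." lines walks sysfs from each supported E810 PCI function to the netdev bound to it. A sketch of that mapping using the PCI addresses from the trace (the explicit loop here is illustrative):

    for pci in 0000:0a:00.0 0000:0a:00.1; do
        # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) in the trace above
        for dev in /sys/bus/pci/devices/"$pci"/net/*; do
            echo "Found net devices under $pci: ${dev##*/}"
        done
    done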
00:07:15.470 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:15.470 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:15.470 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:07:15.470 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:15.470 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:15.470 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:15.470 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:15.470 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:15.470 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:15.470 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:15.470 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:15.470 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:15.470 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:15.470 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:15.470 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:15.470 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:15.470 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:15.470 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:15.470 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:15.470 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:15.470 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:15.470 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:15.470 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:15.728 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:15.728 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:15.728 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:15.728 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:15.728 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:15.728 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:15.728 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:15.728 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.145 ms 00:07:15.728 00:07:15.728 --- 10.0.0.2 ping statistics --- 00:07:15.728 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:15.728 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:07:15.728 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:15.728 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:15.728 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:07:15.728 00:07:15.728 --- 10.0.0.1 ping statistics --- 00:07:15.728 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:15.728 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:07:15.728 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:15.728 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:07:15.728 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:15.728 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:15.728 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:15.729 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:15.729 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:15.729 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:15.729 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:15.729 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:07:15.729 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:15.729 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:15.729 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:15.729 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=2848261 00:07:15.729 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:07:15.729 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 2848261 00:07:15.729 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 2848261 ']' 00:07:15.729 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:15.729 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:15.729 09:07:20 
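The nvmf_tcp_init block traced above builds the two-endpoint TCP topology the rest of the test relies on: the target interface is moved into a network namespace and the initiator side stays on the host. The same commands, pulled out of the trace into a plain sketch for readability:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target NIC lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator address on the host side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                      # host -> namespaced target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # target -> host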
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:15.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:15.729 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:15.729 09:07:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:15.729 [2024-11-17 09:07:20.661803] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:07:15.729 [2024-11-17 09:07:20.661979] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:15.986 [2024-11-17 09:07:20.825586] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:15.986 [2024-11-17 09:07:20.963857] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:15.986 [2024-11-17 09:07:20.963962] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:15.986 [2024-11-17 09:07:20.963989] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:15.986 [2024-11-17 09:07:20.964013] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:15.986 [2024-11-17 09:07:20.964033] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:15.986 [2024-11-17 09:07:20.966740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.986 [2024-11-17 09:07:20.966742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:16.921 09:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:16.921 09:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:07:16.921 09:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:16.921 09:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:16.921 09:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:16.921 09:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:16.921 09:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:16.921 09:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.921 09:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:16.921 [2024-11-17 09:07:21.649166] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:16.921 09:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.921 09:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:16.921 09:07:21 
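The nvmfappstart -m 0x3 step traced above launches the target inside the namespace and blocks until its RPC socket answers. A sketch under the assumption that waitforlisten can be approximated by polling rpc_get_methods; the real helper in common/autotest_common.sh does more than this:

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # crude stand-in for waitforlisten: poll the UNIX socket until the app responds
    until "$rpc_py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || exit 1       # bail out if the target died during startup
        sleep 0.5
    done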
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.921 09:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:16.921 09:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.921 09:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:16.921 09:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.921 09:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:16.921 [2024-11-17 09:07:21.667042] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:16.921 09:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.921 09:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:07:16.921 09:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.921 09:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:16.921 NULL1 00:07:16.921 09:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.921 09:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:16.921 09:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.921 09:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:16.921 Delay0 00:07:16.921 09:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.921 09:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:16.921 09:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.921 09:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:16.921 09:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.921 09:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2848417 00:07:16.921 09:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:07:16.921 09:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:16.921 [2024-11-17 09:07:21.801454] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
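Before the perf run kicked off above, delete_subsystem.sh configured the target over RPC in the order visible in the trace (@15 through @28). Written out as plain rpc.py calls this is a sketch; rpc_cmd in the trace is a thin wrapper around the same script:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    "$rpc_py" nvmf_create_transport -t tcp -o -u 8192
    "$rpc_py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    "$rpc_py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    "$rpc_py" bdev_null_create NULL1 1000 512                # null bdev: 1000 MB, 512-byte blocks
    "$rpc_py" bdev_delay_create -b NULL1 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000          # ~1 s artificial latency keeps I/O in flight
    "$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    # start I/O against the subsystem that is about to be deleted
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
        -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!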
00:07:18.916 09:07:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:18.916 09:07:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.916 09:07:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:19.175 Read completed with error (sct=0, sc=8) 00:07:19.175 Write completed with error (sct=0, sc=8) 00:07:19.175 Read completed with error (sct=0, sc=8) 00:07:19.175 starting I/O failed: -6 00:07:19.175 Read completed with error (sct=0, sc=8) 00:07:19.175 Read completed with error (sct=0, sc=8) 00:07:19.175 Write completed with error (sct=0, sc=8) 00:07:19.175 Read completed with error (sct=0, sc=8) 00:07:19.175 starting I/O failed: -6 00:07:19.175 Write completed with error (sct=0, sc=8) 00:07:19.175 Read completed with error (sct=0, sc=8) 00:07:19.175 Read completed with error (sct=0, sc=8) 00:07:19.175 Read completed with error (sct=0, sc=8) 00:07:19.175 starting I/O failed: -6 00:07:19.175 Read completed with error (sct=0, sc=8) 00:07:19.175 Read completed with error (sct=0, sc=8) 00:07:19.175 Read completed with error (sct=0, sc=8) 00:07:19.175 Read completed with error (sct=0, sc=8) 00:07:19.175 starting I/O failed: -6 00:07:19.175 Read completed with error (sct=0, sc=8) 00:07:19.175 Read completed with error (sct=0, sc=8) 00:07:19.175 Read completed with error (sct=0, sc=8) 00:07:19.175 Read completed with error (sct=0, sc=8) 00:07:19.175 starting I/O failed: -6 00:07:19.175 Read completed with error (sct=0, sc=8) 00:07:19.175 Write completed with error (sct=0, sc=8) 00:07:19.175 Read completed with error (sct=0, sc=8) 00:07:19.175 Read completed with error (sct=0, sc=8) 00:07:19.175 starting I/O failed: -6 00:07:19.175 Write completed with error (sct=0, sc=8) 00:07:19.175 Read completed with error (sct=0, sc=8) 00:07:19.175 Read completed with error (sct=0, sc=8) 00:07:19.175 Read completed with error (sct=0, sc=8) 00:07:19.175 starting I/O failed: -6 00:07:19.175 Read completed with error (sct=0, sc=8) 00:07:19.175 Write completed with error (sct=0, sc=8) 00:07:19.175 Write completed with error (sct=0, sc=8) 00:07:19.175 Read completed with error (sct=0, sc=8) 00:07:19.175 starting I/O failed: -6 00:07:19.175 Read completed with error (sct=0, sc=8) 00:07:19.175 Read completed with error (sct=0, sc=8) 00:07:19.175 Read completed with error (sct=0, sc=8) 00:07:19.175 Read completed with error (sct=0, sc=8) 00:07:19.175 starting I/O failed: -6 00:07:19.175 Read completed with error (sct=0, sc=8) 00:07:19.175 Read completed with error (sct=0, sc=8) 00:07:19.175 Read completed with error (sct=0, sc=8) 00:07:19.175 Read completed with error (sct=0, sc=8) 00:07:19.175 starting I/O failed: -6 00:07:19.175 Read completed with error (sct=0, sc=8) 00:07:19.175 Read completed with error (sct=0, sc=8) 00:07:19.175 Read completed with error (sct=0, sc=8) 00:07:19.175 Write completed with error (sct=0, sc=8) 00:07:19.175 starting I/O failed: -6 00:07:19.175 Read completed with error (sct=0, sc=8) 00:07:19.175 Read completed with error (sct=0, sc=8) 00:07:19.175 Read completed with error (sct=0, sc=8) 00:07:19.175 [2024-11-17 09:07:24.022245] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016600 is same with the state(6) to be set 00:07:19.175 Read completed with error (sct=0, sc=8) 00:07:19.175 Read completed with error (sct=0, sc=8) 00:07:19.175 Read 
completed with error (sct=0, sc=8) 00:07:19.175 Read completed with error (sct=0, sc=8) 00:07:19.175 Read completed with error (sct=0, sc=8) 00:07:19.175 Write completed with error (sct=0, sc=8) 00:07:19.175 Read completed with error (sct=0, sc=8) 00:07:19.175 Write completed with error (sct=0, sc=8) 00:07:19.175 Write completed with error (sct=0, sc=8) 00:07:19.175 Write completed with error (sct=0, sc=8) 00:07:19.175 Write completed with error (sct=0, sc=8) 00:07:19.175 Read completed with error (sct=0, sc=8) 00:07:19.175 Write completed with error (sct=0, sc=8) 00:07:19.175 Read completed with error (sct=0, sc=8) 00:07:19.175 Write completed with error (sct=0, sc=8) 00:07:19.175 Read completed with error (sct=0, sc=8) 00:07:19.175 Read completed with error (sct=0, sc=8) 00:07:19.175 Read completed with error (sct=0, sc=8) 00:07:19.175 Write completed with error (sct=0, sc=8) 00:07:19.175 Read completed with error (sct=0, sc=8) 00:07:19.175 Read completed with error (sct=0, sc=8) 00:07:19.175 Write completed with error (sct=0, sc=8) 00:07:19.175 Read completed with error (sct=0, sc=8) 00:07:19.175 Read completed with error (sct=0, sc=8) 00:07:19.175 Write completed with error (sct=0, sc=8) 00:07:19.175 Write completed with error (sct=0, sc=8) 00:07:19.175 Write completed with error (sct=0, sc=8) 00:07:19.175 Write completed with error (sct=0, sc=8) 00:07:19.175 Read completed with error (sct=0, sc=8) 00:07:19.175 Write completed with error (sct=0, sc=8) 00:07:19.175 Read completed with error (sct=0, sc=8) 00:07:19.175 Read completed with error (sct=0, sc=8) 00:07:19.175 Read completed with error (sct=0, sc=8) 00:07:19.175 Write completed with error (sct=0, sc=8) 00:07:19.175 Read completed with error (sct=0, sc=8) 00:07:19.175 Read completed with error (sct=0, sc=8) 00:07:19.175 Write completed with error (sct=0, sc=8) 00:07:19.175 Write completed with error (sct=0, sc=8) 00:07:19.175 Read completed with error (sct=0, sc=8) 00:07:19.175 Read completed with error (sct=0, sc=8) 00:07:19.175 Read completed with error (sct=0, sc=8) 00:07:19.175 Read completed with error (sct=0, sc=8) 00:07:19.175 Read completed with error (sct=0, sc=8) 00:07:19.175 Read completed with error (sct=0, sc=8) 00:07:19.175 Write completed with error (sct=0, sc=8) 00:07:19.175 Read completed with error (sct=0, sc=8) 00:07:19.175 Read completed with error (sct=0, sc=8) 00:07:19.175 Read completed with error (sct=0, sc=8) 00:07:19.175 Read completed with error (sct=0, sc=8) 00:07:19.175 Read completed with error (sct=0, sc=8) 00:07:19.175 Read completed with error (sct=0, sc=8) 00:07:19.175 Read completed with error (sct=0, sc=8) 00:07:19.175 Read completed with error (sct=0, sc=8) 00:07:19.175 Read completed with error (sct=0, sc=8) 00:07:19.175 Read completed with error (sct=0, sc=8) 00:07:19.175 Read completed with error (sct=0, sc=8) 00:07:19.175 Write completed with error (sct=0, sc=8) 00:07:19.175 Read completed with error (sct=0, sc=8) 00:07:19.175 Read completed with error (sct=0, sc=8) 00:07:19.175 starting I/O failed: -6 00:07:19.175 Read completed with error (sct=0, sc=8) 00:07:19.175 Write completed with error (sct=0, sc=8) 00:07:19.175 Read completed with error (sct=0, sc=8) 00:07:19.175 Read completed with error (sct=0, sc=8) 00:07:19.175 starting I/O failed: -6 00:07:19.175 Write completed with error (sct=0, sc=8) 00:07:19.175 Read completed with error (sct=0, sc=8) 00:07:19.176 Read completed with error (sct=0, sc=8) 00:07:19.176 Write completed with error (sct=0, sc=8) 00:07:19.176 starting I/O 
failed: -6 00:07:19.176 Read completed with error (sct=0, sc=8) 00:07:19.176 Write completed with error (sct=0, sc=8) 00:07:19.176 Write completed with error (sct=0, sc=8) 00:07:19.176 Write completed with error (sct=0, sc=8) 00:07:19.176 starting I/O failed: -6 00:07:19.176 Read completed with error (sct=0, sc=8) 00:07:19.176 Read completed with error (sct=0, sc=8) 00:07:19.176 Read completed with error (sct=0, sc=8) 00:07:19.176 Read completed with error (sct=0, sc=8) 00:07:19.176 starting I/O failed: -6 00:07:19.176 Write completed with error (sct=0, sc=8) 00:07:19.176 Read completed with error (sct=0, sc=8) 00:07:19.176 Read completed with error (sct=0, sc=8) 00:07:19.176 Read completed with error (sct=0, sc=8) 00:07:19.176 starting I/O failed: -6 00:07:19.176 Read completed with error (sct=0, sc=8) 00:07:19.176 Read completed with error (sct=0, sc=8) 00:07:19.176 Read completed with error (sct=0, sc=8) 00:07:19.176 Read completed with error (sct=0, sc=8) 00:07:19.176 starting I/O failed: -6 00:07:19.176 Read completed with error (sct=0, sc=8) 00:07:19.176 Read completed with error (sct=0, sc=8) 00:07:19.176 Read completed with error (sct=0, sc=8) 00:07:19.176 Write completed with error (sct=0, sc=8) 00:07:19.176 starting I/O failed: -6 00:07:19.176 Read completed with error (sct=0, sc=8) 00:07:19.176 Read completed with error (sct=0, sc=8) 00:07:19.176 Read completed with error (sct=0, sc=8) 00:07:19.176 Write completed with error (sct=0, sc=8) 00:07:19.176 starting I/O failed: -6 00:07:19.176 Read completed with error (sct=0, sc=8) 00:07:19.176 Read completed with error (sct=0, sc=8) 00:07:19.176 Write completed with error (sct=0, sc=8) 00:07:19.176 Read completed with error (sct=0, sc=8) 00:07:19.176 starting I/O failed: -6 00:07:19.176 Read completed with error (sct=0, sc=8) 00:07:19.176 Read completed with error (sct=0, sc=8) 00:07:19.176 Read completed with error (sct=0, sc=8) 00:07:19.176 Read completed with error (sct=0, sc=8) 00:07:19.176 starting I/O failed: -6 00:07:19.176 Write completed with error (sct=0, sc=8) 00:07:19.176 Read completed with error (sct=0, sc=8) 00:07:19.176 Read completed with error (sct=0, sc=8) 00:07:19.176 Read completed with error (sct=0, sc=8) 00:07:19.176 starting I/O failed: -6 00:07:19.176 [2024-11-17 09:07:24.024172] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001fe80 is same with the state(6) to be set 00:07:19.176 Write completed with error (sct=0, sc=8) 00:07:19.176 Write completed with error (sct=0, sc=8) 00:07:19.176 Write completed with error (sct=0, sc=8) 00:07:19.176 Read completed with error (sct=0, sc=8) 00:07:19.176 Read completed with error (sct=0, sc=8) 00:07:19.176 Write completed with error (sct=0, sc=8) 00:07:19.176 Write completed with error (sct=0, sc=8) 00:07:19.176 Read completed with error (sct=0, sc=8) 00:07:19.176 Read completed with error (sct=0, sc=8) 00:07:19.176 Write completed with error (sct=0, sc=8) 00:07:19.176 Write completed with error (sct=0, sc=8) 00:07:19.176 Read completed with error (sct=0, sc=8) 00:07:19.176 Write completed with error (sct=0, sc=8) 00:07:19.176 Write completed with error (sct=0, sc=8) 00:07:19.176 Read completed with error (sct=0, sc=8) 00:07:19.176 Write completed with error (sct=0, sc=8) 00:07:19.176 Write completed with error (sct=0, sc=8) 00:07:19.176 Read completed with error (sct=0, sc=8) 00:07:19.176 Read completed with error (sct=0, sc=8) 00:07:19.176 Read completed with error (sct=0, sc=8) 00:07:19.176 Read completed with error (sct=0, 
sc=8) 00:07:19.176 Read completed with error (sct=0, sc=8) 00:07:19.176 Read completed with error (sct=0, sc=8) 00:07:19.176 Write completed with error (sct=0, sc=8) 00:07:19.176 Write completed with error (sct=0, sc=8) 00:07:19.176 Read completed with error (sct=0, sc=8) 00:07:19.176 Read completed with error (sct=0, sc=8) 00:07:19.176 Read completed with error (sct=0, sc=8) 00:07:19.176 Read completed with error (sct=0, sc=8) 00:07:19.176 Read completed with error (sct=0, sc=8) 00:07:19.176 Read completed with error (sct=0, sc=8) 00:07:19.176 Write completed with error (sct=0, sc=8) 00:07:19.176 Read completed with error (sct=0, sc=8) 00:07:19.176 Read completed with error (sct=0, sc=8) 00:07:19.176 Write completed with error (sct=0, sc=8) 00:07:19.176 Read completed with error (sct=0, sc=8) 00:07:19.176 Write completed with error (sct=0, sc=8) 00:07:19.176 Write completed with error (sct=0, sc=8) 00:07:19.176 Write completed with error (sct=0, sc=8) 00:07:19.176 Read completed with error (sct=0, sc=8) 00:07:19.176 Read completed with error (sct=0, sc=8) 00:07:19.176 Read completed with error (sct=0, sc=8) 00:07:19.176 Read completed with error (sct=0, sc=8) 00:07:19.176 Write completed with error (sct=0, sc=8) 00:07:19.176 Read completed with error (sct=0, sc=8) 00:07:19.176 Write completed with error (sct=0, sc=8) 00:07:19.176 Read completed with error (sct=0, sc=8) 00:07:19.176 Read completed with error (sct=0, sc=8) 00:07:19.176 Read completed with error (sct=0, sc=8) 00:07:19.176 Read completed with error (sct=0, sc=8) 00:07:19.176 Read completed with error (sct=0, sc=8) 00:07:19.176 Write completed with error (sct=0, sc=8) 00:07:19.176 Read completed with error (sct=0, sc=8) 00:07:19.176 Write completed with error (sct=0, sc=8) 00:07:19.176 Read completed with error (sct=0, sc=8) 00:07:19.176 Read completed with error (sct=0, sc=8) 00:07:19.176 Read completed with error (sct=0, sc=8) 00:07:19.176 [2024-11-17 09:07:24.025129] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020380 is same with the state(6) to be set 00:07:20.109 [2024-11-17 09:07:24.983819] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000015c00 is same with the state(6) to be set 00:07:20.109 Read completed with error (sct=0, sc=8) 00:07:20.109 Write completed with error (sct=0, sc=8) 00:07:20.109 Read completed with error (sct=0, sc=8) 00:07:20.109 Read completed with error (sct=0, sc=8) 00:07:20.109 Read completed with error (sct=0, sc=8) 00:07:20.109 Write completed with error (sct=0, sc=8) 00:07:20.109 Read completed with error (sct=0, sc=8) 00:07:20.109 Read completed with error (sct=0, sc=8) 00:07:20.109 Write completed with error (sct=0, sc=8) 00:07:20.109 Read completed with error (sct=0, sc=8) 00:07:20.109 Write completed with error (sct=0, sc=8) 00:07:20.109 Read completed with error (sct=0, sc=8) 00:07:20.109 Read completed with error (sct=0, sc=8) 00:07:20.109 Write completed with error (sct=0, sc=8) 00:07:20.109 Read completed with error (sct=0, sc=8) 00:07:20.109 Read completed with error (sct=0, sc=8) 00:07:20.109 Read completed with error (sct=0, sc=8) 00:07:20.109 Read completed with error (sct=0, sc=8) 00:07:20.109 Read completed with error (sct=0, sc=8) 00:07:20.109 Write completed with error (sct=0, sc=8) 00:07:20.109 Read completed with error (sct=0, sc=8) 00:07:20.109 Read completed with error (sct=0, sc=8) 00:07:20.109 Write completed with error (sct=0, sc=8) 00:07:20.109 Read completed with error (sct=0, sc=8) 
00:07:20.109 Write completed with error (sct=0, sc=8) 00:07:20.109 Write completed with error (sct=0, sc=8) 00:07:20.109 [2024-11-17 09:07:25.024582] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016380 is same with the state(6) to be set 00:07:20.109 Read completed with error (sct=0, sc=8) 00:07:20.109 Read completed with error (sct=0, sc=8) 00:07:20.109 Read completed with error (sct=0, sc=8) 00:07:20.109 Read completed with error (sct=0, sc=8) 00:07:20.109 Write completed with error (sct=0, sc=8) 00:07:20.109 Write completed with error (sct=0, sc=8) 00:07:20.109 Write completed with error (sct=0, sc=8) 00:07:20.109 Read completed with error (sct=0, sc=8) 00:07:20.109 Read completed with error (sct=0, sc=8) 00:07:20.109 Write completed with error (sct=0, sc=8) 00:07:20.109 Read completed with error (sct=0, sc=8) 00:07:20.109 Read completed with error (sct=0, sc=8) 00:07:20.109 Write completed with error (sct=0, sc=8) 00:07:20.109 Read completed with error (sct=0, sc=8) 00:07:20.109 Read completed with error (sct=0, sc=8) 00:07:20.109 Read completed with error (sct=0, sc=8) 00:07:20.109 Write completed with error (sct=0, sc=8) 00:07:20.109 Read completed with error (sct=0, sc=8) 00:07:20.109 Write completed with error (sct=0, sc=8) 00:07:20.109 Read completed with error (sct=0, sc=8) 00:07:20.109 Write completed with error (sct=0, sc=8) 00:07:20.109 Read completed with error (sct=0, sc=8) 00:07:20.109 Write completed with error (sct=0, sc=8) 00:07:20.109 Read completed with error (sct=0, sc=8) 00:07:20.109 Read completed with error (sct=0, sc=8) 00:07:20.110 [2024-11-17 09:07:25.025189] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020100 is same with the state(6) to be set 00:07:20.110 Read completed with error (sct=0, sc=8) 00:07:20.110 Read completed with error (sct=0, sc=8) 00:07:20.110 Read completed with error (sct=0, sc=8) 00:07:20.110 Read completed with error (sct=0, sc=8) 00:07:20.110 Read completed with error (sct=0, sc=8) 00:07:20.110 Read completed with error (sct=0, sc=8) 00:07:20.110 Write completed with error (sct=0, sc=8) 00:07:20.110 Read completed with error (sct=0, sc=8) 00:07:20.110 Read completed with error (sct=0, sc=8) 00:07:20.110 Read completed with error (sct=0, sc=8) 00:07:20.110 Read completed with error (sct=0, sc=8) 00:07:20.110 Read completed with error (sct=0, sc=8) 00:07:20.110 Read completed with error (sct=0, sc=8) 00:07:20.110 Read completed with error (sct=0, sc=8) 00:07:20.110 Read completed with error (sct=0, sc=8) 00:07:20.110 Read completed with error (sct=0, sc=8) 00:07:20.110 Read completed with error (sct=0, sc=8) 00:07:20.110 Write completed with error (sct=0, sc=8) 00:07:20.110 Read completed with error (sct=0, sc=8) 00:07:20.110 Read completed with error (sct=0, sc=8) 00:07:20.110 Read completed with error (sct=0, sc=8) 00:07:20.110 Read completed with error (sct=0, sc=8) 00:07:20.110 Read completed with error (sct=0, sc=8) 00:07:20.110 Read completed with error (sct=0, sc=8) 00:07:20.110 Write completed with error (sct=0, sc=8) 00:07:20.110 [2024-11-17 09:07:25.026058] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020600 is same with the state(6) to be set 00:07:20.110 Write completed with error (sct=0, sc=8) 00:07:20.110 Read completed with error (sct=0, sc=8) 00:07:20.110 Read completed with error (sct=0, sc=8) 00:07:20.110 Read completed with error (sct=0, sc=8) 00:07:20.110 Read completed with error (sct=0, 
sc=8) 00:07:20.110 Read completed with error (sct=0, sc=8) 00:07:20.110 Read completed with error (sct=0, sc=8) 00:07:20.110 Read completed with error (sct=0, sc=8) 00:07:20.110 Read completed with error (sct=0, sc=8) 00:07:20.110 Read completed with error (sct=0, sc=8) 00:07:20.110 Write completed with error (sct=0, sc=8) 00:07:20.110 Read completed with error (sct=0, sc=8) 00:07:20.110 Write completed with error (sct=0, sc=8) 00:07:20.110 Read completed with error (sct=0, sc=8) 00:07:20.110 Read completed with error (sct=0, sc=8) 00:07:20.110 Write completed with error (sct=0, sc=8) 00:07:20.110 Read completed with error (sct=0, sc=8) 00:07:20.110 Read completed with error (sct=0, sc=8) 00:07:20.110 Read completed with error (sct=0, sc=8) 00:07:20.110 Read completed with error (sct=0, sc=8) 00:07:20.110 Read completed with error (sct=0, sc=8) 00:07:20.110 Read completed with error (sct=0, sc=8) 00:07:20.110 Write completed with error (sct=0, sc=8) 00:07:20.110 Write completed with error (sct=0, sc=8) 00:07:20.110 09:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.110 09:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:07:20.110 09:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2848417 00:07:20.110 09:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:07:20.110 Read completed with error (sct=0, sc=8) 00:07:20.110 [2024-11-17 09:07:25.030885] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016880 is same with the state(6) to be set 00:07:20.110 Initializing NVMe Controllers 00:07:20.110 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:20.110 Controller IO queue size 128, less than required. 00:07:20.110 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:20.110 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:20.110 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:20.110 Initialization complete. Launching workers. 
00:07:20.110 ======================================================== 00:07:20.110 Latency(us) 00:07:20.110 Device Information : IOPS MiB/s Average min max 00:07:20.110 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 171.41 0.08 894511.11 866.65 1018051.17 00:07:20.110 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 170.92 0.08 894551.16 996.30 1017381.28 00:07:20.110 ======================================================== 00:07:20.110 Total : 342.33 0.17 894531.10 866.65 1018051.17 00:07:20.110 00:07:20.110 [2024-11-17 09:07:25.032509] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000015c00 (9): Bad file descriptor 00:07:20.110 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:07:20.675 09:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:07:20.675 09:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2848417 00:07:20.675 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2848417) - No such process 00:07:20.675 09:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2848417 00:07:20.675 09:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:07:20.676 09:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2848417 00:07:20.676 09:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:07:20.676 09:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:20.676 09:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:07:20.676 09:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:20.676 09:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 2848417 00:07:20.676 09:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:07:20.676 09:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:20.676 09:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:20.676 09:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:20.676 09:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:20.676 09:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.676 09:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:20.676 09:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.676 09:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:20.676 09:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.676 09:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:20.676 [2024-11-17 09:07:25.551921] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:20.676 09:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.676 09:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:20.676 09:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.676 09:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:20.676 09:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.676 09:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2848944 00:07:20.676 09:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:07:20.676 09:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:20.676 09:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2848944 00:07:20.676 09:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:20.676 [2024-11-17 09:07:25.668892] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
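What the log above boils down to: delete_subsystem.sh re-creates nqn.2016-06.io.spdk:cnode1, adds the 10.0.0.2:4420 TCP listener and the Delay0 namespace, backgrounds spdk_nvme_perf against the subsystem, and then (in the iterations that follow) polls the perf pid with kill -0 and sleep 0.5. Below is a minimal bash sketch of that sequence, not the actual delete_subsystem.sh code: the test's rpc_cmd wrapper is replaced with direct scripts/rpc.py calls, the rpc variable and the loop shape are simplifications of mine, and running it standalone assumes nvmf_tgt is already up on the default RPC socket. The RPC arguments and spdk_nvme_perf flags are copied verbatim from the log.

# Sketch only; assumes a running nvmf_tgt with a Delay0 bdev already created.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Re-create the subsystem, listener and namespace exactly as logged above.
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

# Background the initiator-side load and remember its pid, as the test does.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
    -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!

# Poll until the perf process exits; kill -0 only checks that the pid exists.
delay=0
while kill -0 "$perf_pid" 2>/dev/null; do
    (( delay++ > 20 )) && break   # bounded wait, mirroring the test's delay counter
    sleep 0.5
done

The subsequent "kill: (pid) - No such process" lines in the log are the expected end of this pattern: once perf exits, kill -0 fails and the loop (and later the explicit kill check) falls through.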
00:07:21.241 09:07:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:21.241 09:07:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2848944 00:07:21.241 09:07:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:21.806 09:07:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:21.806 09:07:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2848944 00:07:21.806 09:07:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:22.064 09:07:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:22.064 09:07:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2848944 00:07:22.064 09:07:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:22.628 09:07:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:22.628 09:07:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2848944 00:07:22.628 09:07:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:23.193 09:07:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:23.193 09:07:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2848944 00:07:23.193 09:07:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:23.759 09:07:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:23.759 09:07:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2848944 00:07:23.759 09:07:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:24.018 Initializing NVMe Controllers 00:07:24.018 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:24.018 Controller IO queue size 128, less than required. 00:07:24.018 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:24.018 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:24.018 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:24.018 Initialization complete. Launching workers. 
00:07:24.018 ======================================================== 00:07:24.018 Latency(us) 00:07:24.018 Device Information : IOPS MiB/s Average min max 00:07:24.018 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1005876.61 1000295.32 1043866.39 00:07:24.018 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005692.64 1000265.17 1043826.45 00:07:24.018 ======================================================== 00:07:24.018 Total : 256.00 0.12 1005784.62 1000265.17 1043866.39 00:07:24.018 00:07:24.277 09:07:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:24.277 09:07:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2848944 00:07:24.277 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2848944) - No such process 00:07:24.277 09:07:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2848944 00:07:24.277 09:07:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:24.277 09:07:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:07:24.277 09:07:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:24.277 09:07:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:07:24.277 09:07:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:24.277 09:07:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:07:24.277 09:07:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:24.277 09:07:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:24.277 rmmod nvme_tcp 00:07:24.277 rmmod nvme_fabrics 00:07:24.277 rmmod nvme_keyring 00:07:24.277 09:07:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:24.277 09:07:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:07:24.277 09:07:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:07:24.277 09:07:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 2848261 ']' 00:07:24.277 09:07:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 2848261 00:07:24.277 09:07:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 2848261 ']' 00:07:24.277 09:07:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 2848261 00:07:24.277 09:07:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:07:24.277 09:07:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:24.277 09:07:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2848261 00:07:24.277 09:07:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:24.277 09:07:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' 
reactor_0 = sudo ']' 00:07:24.277 09:07:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2848261' 00:07:24.277 killing process with pid 2848261 00:07:24.277 09:07:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 2848261 00:07:24.277 09:07:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 2848261 00:07:25.651 09:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:25.651 09:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:25.651 09:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:25.651 09:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:07:25.651 09:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:07:25.651 09:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:25.651 09:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:07:25.651 09:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:25.651 09:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:25.651 09:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:25.651 09:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:25.651 09:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:27.556 09:07:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:27.557 00:07:27.557 real 0m14.150s 00:07:27.557 user 0m31.127s 00:07:27.557 sys 0m3.208s 00:07:27.557 09:07:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:27.557 09:07:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:27.557 ************************************ 00:07:27.557 END TEST nvmf_delete_subsystem 00:07:27.557 ************************************ 00:07:27.557 09:07:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:27.557 09:07:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:27.557 09:07:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:27.557 09:07:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:27.557 ************************************ 00:07:27.557 START TEST nvmf_host_management 00:07:27.557 ************************************ 00:07:27.557 09:07:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:27.557 * Looking for test storage... 
00:07:27.557 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:27.557 09:07:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:27.557 09:07:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:07:27.557 09:07:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:27.815 09:07:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:27.815 09:07:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:27.815 09:07:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:27.815 09:07:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:27.815 09:07:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:07:27.815 09:07:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:07:27.815 09:07:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:07:27.815 09:07:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:07:27.815 09:07:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:07:27.815 09:07:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:07:27.815 09:07:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:07:27.815 09:07:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:27.815 09:07:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:07:27.815 09:07:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:07:27.815 09:07:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:27.815 09:07:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:27.815 09:07:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:07:27.815 09:07:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:07:27.815 09:07:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:27.816 09:07:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:07:27.816 09:07:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:07:27.816 09:07:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:07:27.816 09:07:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:07:27.816 09:07:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:27.816 09:07:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:07:27.816 09:07:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:07:27.816 09:07:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:27.816 09:07:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:27.816 09:07:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:07:27.816 09:07:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:27.816 09:07:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:27.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.816 --rc genhtml_branch_coverage=1 00:07:27.816 --rc genhtml_function_coverage=1 00:07:27.816 --rc genhtml_legend=1 00:07:27.816 --rc geninfo_all_blocks=1 00:07:27.816 --rc geninfo_unexecuted_blocks=1 00:07:27.816 00:07:27.816 ' 00:07:27.816 09:07:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:27.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.816 --rc genhtml_branch_coverage=1 00:07:27.816 --rc genhtml_function_coverage=1 00:07:27.816 --rc genhtml_legend=1 00:07:27.816 --rc geninfo_all_blocks=1 00:07:27.816 --rc geninfo_unexecuted_blocks=1 00:07:27.816 00:07:27.816 ' 00:07:27.816 09:07:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:27.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.816 --rc genhtml_branch_coverage=1 00:07:27.816 --rc genhtml_function_coverage=1 00:07:27.816 --rc genhtml_legend=1 00:07:27.816 --rc geninfo_all_blocks=1 00:07:27.816 --rc geninfo_unexecuted_blocks=1 00:07:27.816 00:07:27.816 ' 00:07:27.816 09:07:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:27.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.816 --rc genhtml_branch_coverage=1 00:07:27.816 --rc genhtml_function_coverage=1 00:07:27.816 --rc genhtml_legend=1 00:07:27.816 --rc geninfo_all_blocks=1 00:07:27.816 --rc geninfo_unexecuted_blocks=1 00:07:27.816 00:07:27.816 ' 00:07:27.816 09:07:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:27.816 09:07:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:27.816 09:07:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:27.816 09:07:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:27.816 09:07:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:27.816 09:07:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:27.816 09:07:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:27.816 09:07:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:27.816 09:07:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:27.816 09:07:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:27.816 09:07:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:27.816 09:07:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:27.816 09:07:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:27.816 09:07:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:27.816 09:07:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:27.816 09:07:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:27.816 09:07:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:27.816 09:07:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:27.816 09:07:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:27.816 09:07:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:07:27.816 09:07:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:27.816 09:07:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:27.816 09:07:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:27.816 09:07:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.816 09:07:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.816 09:07:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.816 09:07:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:27.816 09:07:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.816 09:07:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:07:27.816 09:07:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:27.816 09:07:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:27.816 09:07:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:27.816 09:07:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:27.816 09:07:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:27.816 09:07:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:07:27.816 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:27.816 09:07:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:27.816 09:07:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:27.816 09:07:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:27.816 09:07:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:27.816 09:07:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:27.816 09:07:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:27.816 09:07:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:27.816 09:07:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:27.816 09:07:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:27.816 09:07:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:27.816 09:07:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:27.816 09:07:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:27.816 09:07:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:27.816 09:07:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:27.816 09:07:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:27.816 09:07:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:27.816 09:07:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:07:27.816 09:07:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:29.717 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:29.717 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:07:29.717 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:29.717 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:29.717 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:29.717 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:29.717 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:29.717 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:07:29.717 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:29.717 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:07:29.717 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:07:29.717 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:07:29.717 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:07:29.717 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:07:29.717 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:07:29.717 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:29.717 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:29.717 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:29.717 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:29.717 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:29.717 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:29.717 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:29.717 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:29.717 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:29.717 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:29.717 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:29.717 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:29.717 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:29.717 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:29.717 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:29.717 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:29.717 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:29.717 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:29.717 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:29.717 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:29.717 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:29.717 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:29.717 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:29.717 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:29.717 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:29.717 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:29.717 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:29.717 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:29.717 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:29.717 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:29.717 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:29.717 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:29.717 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:29.717 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:29.717 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:29.717 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:29.717 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:29.717 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:29.717 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:29.717 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:29.717 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:29.717 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:29.717 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:29.717 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:29.717 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:29.717 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:29.717 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:29.717 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:29.717 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:29.717 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:29.717 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:29.717 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:29.717 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:29.717 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:29.718 09:07:34 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:29.718 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:29.718 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:29.718 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:29.718 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:07:29.718 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:29.718 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:29.718 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:29.718 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:29.718 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:29.718 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:29.718 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:29.718 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:29.718 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:29.718 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:29.718 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:29.718 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:29.718 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:29.718 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:29.718 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:29.718 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:29.718 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:29.718 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:29.718 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:29.718 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:29.718 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:29.718 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:29.718 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:29.718 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:29.718 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:29.718 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:29.977 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:29.977 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.133 ms 00:07:29.977 00:07:29.977 --- 10.0.0.2 ping statistics --- 00:07:29.977 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:29.977 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:07:29.977 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:29.977 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:29.977 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.092 ms 00:07:29.977 00:07:29.977 --- 10.0.0.1 ping statistics --- 00:07:29.977 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:29.977 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:07:29.977 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:29.977 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:07:29.977 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:29.977 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:29.977 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:29.977 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:29.977 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:29.977 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:29.977 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:29.977 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:29.977 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:29.977 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:29.977 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:29.977 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:29.977 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:29.977 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=2851434 00:07:29.977 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:29.977 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 2851434 00:07:29.977 09:07:34 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2851434 ']' 00:07:29.977 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:29.977 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:29.977 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:29.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:29.977 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:29.977 09:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:29.977 [2024-11-17 09:07:34.854344] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:07:29.977 [2024-11-17 09:07:34.854497] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:30.235 [2024-11-17 09:07:35.009074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:30.235 [2024-11-17 09:07:35.152978] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:30.235 [2024-11-17 09:07:35.153073] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:30.235 [2024-11-17 09:07:35.153100] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:30.235 [2024-11-17 09:07:35.153124] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:30.235 [2024-11-17 09:07:35.153143] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
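The nvmfappstart step above amounts to launching nvmf_tgt inside the cvl_0_0_ns_spdk namespace with core mask 0x1E and waiting for its RPC socket to answer. A rough standalone equivalent is sketched below under the assumption that the namespace created earlier in the log already exists; the spdk path variable and the rpc_get_methods poll are stand-ins of mine for the test's waitforlisten helper, which does the same poll plus a liveness check on the pid.

# Sketch only; run as root, with the cvl_0_0_ns_spdk namespace already set up.
spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

ip netns exec cvl_0_0_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!

# /var/tmp/spdk.sock is a unix socket, so it is reachable from outside the netns.
until "$spdk/scripts/rpc.py" -t 1 rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" || { echo "nvmf_tgt exited during startup" >&2; break; }
    sleep 0.5
done

With core mask 0x1E the target gets cores 1-4, which matches the four "Reactor started on core" notices that follow in the log.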
00:07:30.235 [2024-11-17 09:07:35.156031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:30.235 [2024-11-17 09:07:35.156146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:30.235 [2024-11-17 09:07:35.156192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:30.235 [2024-11-17 09:07:35.156199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:30.800 09:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:30.800 09:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:31.059 09:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:31.059 09:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:31.059 09:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:31.059 09:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:31.059 09:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:31.059 09:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.059 09:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:31.059 [2024-11-17 09:07:35.841498] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:31.059 09:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.059 09:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:31.059 09:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:31.059 09:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:31.059 09:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:31.059 09:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:31.059 09:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:31.059 09:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.059 09:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:31.059 Malloc0 00:07:31.059 [2024-11-17 09:07:35.978196] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:31.059 09:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.059 09:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:31.059 09:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:31.059 09:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:31.059 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=2851612 00:07:31.059 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2851612 /var/tmp/bdevperf.sock 00:07:31.059 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2851612 ']' 00:07:31.059 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:31.059 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:31.059 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:31.059 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:31.059 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:31.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:31.059 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:31.059 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:31.059 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:31.059 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:31.059 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:31.059 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:31.059 { 00:07:31.059 "params": { 00:07:31.059 "name": "Nvme$subsystem", 00:07:31.059 "trtype": "$TEST_TRANSPORT", 00:07:31.059 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:31.059 "adrfam": "ipv4", 00:07:31.059 "trsvcid": "$NVMF_PORT", 00:07:31.059 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:31.059 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:31.059 "hdgst": ${hdgst:-false}, 00:07:31.059 "ddgst": ${ddgst:-false} 00:07:31.059 }, 00:07:31.059 "method": "bdev_nvme_attach_controller" 00:07:31.059 } 00:07:31.059 EOF 00:07:31.059 )") 00:07:31.059 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:31.059 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:31.059 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:31.059 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:31.059 "params": { 00:07:31.059 "name": "Nvme0", 00:07:31.059 "trtype": "tcp", 00:07:31.059 "traddr": "10.0.0.2", 00:07:31.059 "adrfam": "ipv4", 00:07:31.059 "trsvcid": "4420", 00:07:31.059 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:31.059 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:31.059 "hdgst": false, 00:07:31.059 "ddgst": false 00:07:31.059 }, 00:07:31.059 "method": "bdev_nvme_attach_controller" 00:07:31.059 }' 00:07:31.317 [2024-11-17 09:07:36.101413] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
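The '{ "params": ... "method": "bdev_nvme_attach_controller" }' blob printed above is the per-controller fragment that gen_nvmf_target_json pipes to bdevperf through /dev/fd/63. The sketch below shows a hypothetical standalone equivalent that writes the same attach entry into an ordinary config file; the file name and the "subsystems"/"bdev" envelope follow SPDK's usual JSON config layout and are assumptions of mine, while the params block and the -q 64 -o 65536 -w verify -t 10 flags are copied from the log.

# Sketch only; /tmp/bdevperf_nvme0.json is an arbitrary name for illustration.
cat > /tmp/bdevperf_nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# Same bdevperf invocation as the test, reading the config from a file
# instead of the /dev/fd/63 pipe.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -r /var/tmp/bdevperf.sock --json /tmp/bdevperf_nvme0.json \
    -q 64 -o 65536 -w verify -t 10

The -r socket path matters because the test then drives this bdevperf instance over RPC (framework_wait_init, bdev_get_iostat) from the same script, as seen in the lines that follow.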
00:07:31.317 [2024-11-17 09:07:36.101554] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2851612 ] 00:07:31.317 [2024-11-17 09:07:36.245089] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.575 [2024-11-17 09:07:36.373888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.832 Running I/O for 10 seconds... 00:07:32.090 09:07:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:32.090 09:07:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:32.090 09:07:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:32.090 09:07:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.090 09:07:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:32.090 09:07:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.090 09:07:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:32.090 09:07:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:32.090 09:07:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:32.090 09:07:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:32.090 09:07:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:32.090 09:07:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:32.090 09:07:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:32.090 09:07:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:32.090 09:07:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:32.090 09:07:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:32.090 09:07:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.090 09:07:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:32.090 09:07:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.350 09:07:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=323 00:07:32.350 09:07:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 323 -ge 100 ']' 00:07:32.350 09:07:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:32.350 09:07:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:32.350 09:07:37 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:32.350 09:07:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:32.350 09:07:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.350 09:07:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:32.350 [2024-11-17 09:07:37.114443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:51200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:32.350 [2024-11-17 09:07:37.114524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:32.350 [2024-11-17 09:07:37.114569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:51328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:32.350 [2024-11-17 09:07:37.114596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:32.350 [2024-11-17 09:07:37.114623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:51456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:32.350 [2024-11-17 09:07:37.114646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:32.350 [2024-11-17 09:07:37.114683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:51584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:32.350 [2024-11-17 09:07:37.114717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:32.350 [2024-11-17 09:07:37.114752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:51712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:32.350 [2024-11-17 09:07:37.114775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:32.350 [2024-11-17 09:07:37.114800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:51840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:32.350 [2024-11-17 09:07:37.114823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:32.350 [2024-11-17 09:07:37.114848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:51968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:32.350 [2024-11-17 09:07:37.114871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:32.350 [2024-11-17 09:07:37.114916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:52096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:32.350 [2024-11-17 09:07:37.114940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:32.350 [2024-11-17 09:07:37.114965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:52224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:32.350 
[2024-11-17 09:07:37.114988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:32.350 [2024-11-17 09:07:37.115013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:52352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:32.350 [2024-11-17 09:07:37.115035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:32.350 [2024-11-17 09:07:37.115061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:52480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:32.350 [2024-11-17 09:07:37.115083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:32.350 [2024-11-17 09:07:37.115109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:52608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:32.350 [2024-11-17 09:07:37.115131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:32.350 [2024-11-17 09:07:37.115157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:52736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:32.350 [2024-11-17 09:07:37.115179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:32.350 [2024-11-17 09:07:37.115204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:52864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:32.350 [2024-11-17 09:07:37.115226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:32.350 [2024-11-17 09:07:37.115252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:52992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:32.350 [2024-11-17 09:07:37.115274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:32.351 [2024-11-17 09:07:37.115299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:53120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:32.351 [2024-11-17 09:07:37.115322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:32.351 [2024-11-17 09:07:37.115352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:53248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:32.351 [2024-11-17 09:07:37.115385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:32.351 [2024-11-17 09:07:37.115423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:53376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:32.351 [2024-11-17 09:07:37.115446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:32.351 [2024-11-17 09:07:37.115471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:53504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:32.351 [2024-11-17 
09:07:37.115493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:32.351 [2024-11-17 09:07:37.115517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:53632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:32.351 [2024-11-17 09:07:37.115540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:32.351 [2024-11-17 09:07:37.115565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:53760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:32.351 [2024-11-17 09:07:37.115587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:32.351 [2024-11-17 09:07:37.115611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:53888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:32.351 [2024-11-17 09:07:37.115634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:32.351 [2024-11-17 09:07:37.115659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:54016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:32.351 [2024-11-17 09:07:37.115686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:32.351 [2024-11-17 09:07:37.115711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:54144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:32.351 [2024-11-17 09:07:37.115738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:32.351 [2024-11-17 09:07:37.115762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:54272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:32.351 [2024-11-17 09:07:37.115786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:32.351 [2024-11-17 09:07:37.115811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:54400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:32.351 [2024-11-17 09:07:37.115834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:32.351 [2024-11-17 09:07:37.115859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:54528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:32.351 [2024-11-17 09:07:37.115881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:32.351 [2024-11-17 09:07:37.115906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:54656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:32.351 [2024-11-17 09:07:37.115928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:32.351 [2024-11-17 09:07:37.115954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:54784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:32.351 [2024-11-17 
09:07:37.115981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:32.351 [2024-11-17 09:07:37.116007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:54912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:32.351 [2024-11-17 09:07:37.116030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:32.351 [2024-11-17 09:07:37.116055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:55040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:32.351 [2024-11-17 09:07:37.116077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:32.351 [2024-11-17 09:07:37.116103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:55168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:32.351 [2024-11-17 09:07:37.116125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:32.351 [2024-11-17 09:07:37.116150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:55296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:32.351 [2024-11-17 09:07:37.116172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:32.351 [2024-11-17 09:07:37.116197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:55424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:32.351 [2024-11-17 09:07:37.116220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:32.351 [2024-11-17 09:07:37.116245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:55552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:32.351 [2024-11-17 09:07:37.116268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:32.351 [2024-11-17 09:07:37.116293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:55680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:32.351 [2024-11-17 09:07:37.116316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:32.351 [2024-11-17 09:07:37.116341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:55808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:32.351 [2024-11-17 09:07:37.116364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:32.351 [2024-11-17 09:07:37.116400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:55936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:32.351 [2024-11-17 09:07:37.116430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:32.351 [2024-11-17 09:07:37.116455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:56064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:32.351 [2024-11-17 
09:07:37.116478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:32.351 [2024-11-17 09:07:37.116503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:56192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:32.351 [2024-11-17 09:07:37.116525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:32.351 [2024-11-17 09:07:37.116550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:56320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:32.351 [2024-11-17 09:07:37.116572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:32.351 [2024-11-17 09:07:37.116602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:56448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:32.351 [2024-11-17 09:07:37.116626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:32.351 [2024-11-17 09:07:37.116651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:56576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:32.351 [2024-11-17 09:07:37.116674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:32.351 [2024-11-17 09:07:37.116698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:56704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:32.351 [2024-11-17 09:07:37.116721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:32.351 [2024-11-17 09:07:37.116754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:56832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:32.351 [2024-11-17 09:07:37.116776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:32.351 [2024-11-17 09:07:37.116801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:56960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:32.351 [2024-11-17 09:07:37.116824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:32.351 [2024-11-17 09:07:37.116849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:57088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:32.351 [2024-11-17 09:07:37.116871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:32.351 [2024-11-17 09:07:37.116896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:57216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:32.351 [2024-11-17 09:07:37.116920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:32.351 [2024-11-17 09:07:37.116945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:32.351 [2024-11-17 09:07:37.116968] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:32.351 [2024-11-17 09:07:37.116993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:49280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:32.352 [2024-11-17 09:07:37.117016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:32.352 [2024-11-17 09:07:37.117042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:49408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:32.352 [2024-11-17 09:07:37.117064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:32.352 [2024-11-17 09:07:37.117090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:49536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:32.352 [2024-11-17 09:07:37.117112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:32.352 [2024-11-17 09:07:37.117137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:49664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:32.352 [2024-11-17 09:07:37.117160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:32.352 [2024-11-17 09:07:37.117185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:49792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:32.352 [2024-11-17 09:07:37.117212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:32.352 [2024-11-17 09:07:37.117238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:49920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:32.352 [2024-11-17 09:07:37.117261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:32.352 [2024-11-17 09:07:37.117285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:50048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:32.352 [2024-11-17 09:07:37.117308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:32.352 [2024-11-17 09:07:37.117332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:50176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:32.352 [2024-11-17 09:07:37.117355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:32.352 [2024-11-17 09:07:37.117387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:50304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:32.352 [2024-11-17 09:07:37.117411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:32.352 [2024-11-17 09:07:37.117436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:50432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:32.352 [2024-11-17 09:07:37.117459] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:32.352 [2024-11-17 09:07:37.117484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:50560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:32.352 [2024-11-17 09:07:37.117507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:32.352 [2024-11-17 09:07:37.117531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:50688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:32.352 [2024-11-17 09:07:37.117554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:32.352 [2024-11-17 09:07:37.117579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:50816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:32.352 [2024-11-17 09:07:37.117601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:32.352 [2024-11-17 09:07:37.117626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:50944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:32.352 [2024-11-17 09:07:37.117648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:32.352 [2024-11-17 09:07:37.117683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:51072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:32.352 [2024-11-17 09:07:37.117707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:32.352 09:07:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.352 09:07:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:32.352 09:07:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.352 09:07:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:32.352 [2024-11-17 09:07:37.119346] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:07:32.352 task offset: 51200 on job bdev=Nvme0n1 fails 00:07:32.352 00:07:32.352 Latency(us) 00:07:32.352 [2024-11-17T08:07:37.365Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:32.352 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:32.352 Job: Nvme0n1 ended in about 0.31 seconds with error 00:07:32.352 Verification LBA range: start 0x0 length 0x400 00:07:32.352 Nvme0n1 : 0.31 1253.62 78.35 208.94 0.00 42094.64 4223.43 40583.77 00:07:32.352 [2024-11-17T08:07:37.365Z] =================================================================================================================== 00:07:32.352 [2024-11-17T08:07:37.365Z] Total : 1253.62 78.35 208.94 0.00 42094.64 4223.43 40583.77 00:07:32.352 [2024-11-17 09:07:37.124468] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:32.352 [2024-11-17 09:07:37.124520] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x6150001f2500 (9): Bad file descriptor 00:07:32.352 09:07:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.352 09:07:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:32.352 [2024-11-17 09:07:37.135904] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:07:33.286 09:07:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2851612 00:07:33.286 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2851612) - No such process 00:07:33.286 09:07:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:33.286 09:07:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:33.286 09:07:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:33.286 09:07:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:33.286 09:07:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:33.286 09:07:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:33.286 09:07:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:33.286 09:07:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:33.286 { 00:07:33.286 "params": { 00:07:33.286 "name": "Nvme$subsystem", 00:07:33.286 "trtype": "$TEST_TRANSPORT", 00:07:33.286 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:33.286 "adrfam": "ipv4", 00:07:33.286 "trsvcid": "$NVMF_PORT", 00:07:33.286 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:33.286 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:33.286 "hdgst": ${hdgst:-false}, 00:07:33.286 "ddgst": ${ddgst:-false} 00:07:33.286 }, 00:07:33.286 "method": "bdev_nvme_attach_controller" 00:07:33.286 } 00:07:33.286 EOF 00:07:33.286 )") 00:07:33.286 09:07:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:33.286 09:07:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:33.286 09:07:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:33.286 09:07:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:33.286 "params": { 00:07:33.286 "name": "Nvme0", 00:07:33.286 "trtype": "tcp", 00:07:33.286 "traddr": "10.0.0.2", 00:07:33.286 "adrfam": "ipv4", 00:07:33.286 "trsvcid": "4420", 00:07:33.286 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:33.286 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:33.286 "hdgst": false, 00:07:33.286 "ddgst": false 00:07:33.286 }, 00:07:33.286 "method": "bdev_nvme_attach_controller" 00:07:33.286 }' 00:07:33.286 [2024-11-17 09:07:38.214710] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:07:33.286 [2024-11-17 09:07:38.214847] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2851889 ] 00:07:33.544 [2024-11-17 09:07:38.354156] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.544 [2024-11-17 09:07:38.484841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.110 Running I/O for 1 seconds... 00:07:35.043 1344.00 IOPS, 84.00 MiB/s 00:07:35.043 Latency(us) 00:07:35.043 [2024-11-17T08:07:40.056Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:35.043 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:35.043 Verification LBA range: start 0x0 length 0x400 00:07:35.043 Nvme0n1 : 1.01 1392.57 87.04 0.00 0.00 45167.00 8543.95 40001.23 00:07:35.043 [2024-11-17T08:07:40.056Z] =================================================================================================================== 00:07:35.043 [2024-11-17T08:07:40.056Z] Total : 1392.57 87.04 0.00 0.00 45167.00 8543.95 40001.23 00:07:35.975 09:07:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:35.975 09:07:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:35.975 09:07:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:07:35.975 09:07:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:35.975 09:07:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:35.975 09:07:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:35.975 09:07:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:07:35.976 09:07:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:35.976 09:07:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:07:35.976 09:07:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:35.976 09:07:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:35.976 rmmod nvme_tcp 00:07:35.976 rmmod nvme_fabrics 00:07:35.976 rmmod nvme_keyring 00:07:35.976 09:07:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:35.976 09:07:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:07:35.976 09:07:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:07:35.976 09:07:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 2851434 ']' 00:07:35.976 09:07:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 2851434 00:07:35.976 09:07:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 2851434 ']' 00:07:35.976 09:07:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 2851434 00:07:35.976 09:07:40 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:07:35.976 09:07:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:35.976 09:07:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2851434 00:07:35.976 09:07:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:35.976 09:07:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:35.976 09:07:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2851434' 00:07:35.976 killing process with pid 2851434 00:07:35.976 09:07:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 2851434 00:07:35.976 09:07:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 2851434 00:07:37.348 [2024-11-17 09:07:42.053629] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:37.348 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:37.348 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:37.348 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:37.348 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:07:37.348 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:07:37.348 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:37.348 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:07:37.348 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:37.348 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:37.349 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:37.349 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:37.349 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:39.252 09:07:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:39.252 09:07:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:39.252 00:07:39.252 real 0m11.765s 00:07:39.252 user 0m31.590s 00:07:39.252 sys 0m3.202s 00:07:39.252 09:07:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:39.252 09:07:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:39.252 ************************************ 00:07:39.252 END TEST nvmf_host_management 00:07:39.253 ************************************ 00:07:39.253 09:07:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 
00:07:39.253 09:07:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:39.253 09:07:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:39.253 09:07:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:39.512 ************************************ 00:07:39.512 START TEST nvmf_lvol 00:07:39.512 ************************************ 00:07:39.512 09:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:39.512 * Looking for test storage... 00:07:39.512 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:39.512 09:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:39.512 09:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:07:39.512 09:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:39.512 09:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:39.512 09:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:39.512 09:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:39.512 09:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:39.512 09:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:39.512 09:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:39.512 09:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:39.512 09:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:39.512 09:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:39.512 09:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:39.512 09:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:39.512 09:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:39.512 09:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:39.512 09:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:39.512 09:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:39.512 09:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:39.512 09:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:39.512 09:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:39.512 09:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:39.512 09:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:39.512 09:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:39.512 09:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:39.512 09:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:39.512 09:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:39.512 09:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:39.512 09:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:39.512 09:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:39.512 09:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:39.512 09:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:39.512 09:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:39.512 09:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:39.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.512 --rc genhtml_branch_coverage=1 00:07:39.512 --rc genhtml_function_coverage=1 00:07:39.512 --rc genhtml_legend=1 00:07:39.512 --rc geninfo_all_blocks=1 00:07:39.512 --rc geninfo_unexecuted_blocks=1 00:07:39.512 00:07:39.512 ' 00:07:39.512 09:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:39.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.512 --rc genhtml_branch_coverage=1 00:07:39.512 --rc genhtml_function_coverage=1 00:07:39.512 --rc genhtml_legend=1 00:07:39.512 --rc geninfo_all_blocks=1 00:07:39.512 --rc geninfo_unexecuted_blocks=1 00:07:39.512 00:07:39.512 ' 00:07:39.512 09:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:39.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.512 --rc genhtml_branch_coverage=1 00:07:39.512 --rc genhtml_function_coverage=1 00:07:39.512 --rc genhtml_legend=1 00:07:39.512 --rc geninfo_all_blocks=1 00:07:39.512 --rc geninfo_unexecuted_blocks=1 00:07:39.512 00:07:39.512 ' 00:07:39.512 09:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:39.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.512 --rc genhtml_branch_coverage=1 00:07:39.512 --rc genhtml_function_coverage=1 00:07:39.512 --rc genhtml_legend=1 00:07:39.512 --rc geninfo_all_blocks=1 00:07:39.512 --rc geninfo_unexecuted_blocks=1 00:07:39.512 00:07:39.512 ' 00:07:39.512 09:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:39.512 09:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:39.512 09:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:07:39.512 09:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:39.512 09:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:39.512 09:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:39.512 09:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:39.512 09:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:39.512 09:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:39.512 09:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:39.512 09:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:39.512 09:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:39.512 09:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:39.512 09:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:39.512 09:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:39.512 09:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:39.512 09:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:39.512 09:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:39.512 09:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:39.512 09:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:39.512 09:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:39.512 09:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:39.512 09:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:39.512 09:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.512 09:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.512 09:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.512 09:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:39.513 09:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.513 09:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:39.513 09:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:39.513 09:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:39.513 09:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:39.513 09:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:39.513 09:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:39.513 09:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:39.513 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:39.513 09:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:39.513 09:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:39.513 09:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:39.513 09:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:39.513 09:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:39.513 09:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:07:39.513 09:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:39.513 09:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:39.513 09:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:39.513 09:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:39.513 09:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:39.513 09:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:39.513 09:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:39.513 09:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:39.513 09:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:39.513 09:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:39.513 09:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:39.513 09:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:39.513 09:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:39.513 09:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:07:39.513 09:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:42.043 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:42.043 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:07:42.043 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:42.043 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:42.043 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:42.043 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:42.044 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:42.044 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:07:42.044 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:42.044 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:07:42.044 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:07:42.044 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:07:42.044 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:07:42.044 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:07:42.044 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:07:42.044 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:42.044 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:42.044 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:42.044 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:42.044 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:42.044 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:42.044 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:42.044 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:42.044 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:42.044 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:42.044 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:42.044 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:42.044 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:42.044 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:42.044 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:42.044 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:42.044 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:42.044 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:42.044 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:42.044 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:42.044 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:42.044 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:42.044 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:42.044 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:42.044 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:42.044 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:42.044 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:42.044 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:42.044 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:42.044 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:42.044 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:42.044 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:42.044 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:42.044 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:42.044 09:07:46 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:42.044 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:42.044 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:42.044 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:42.044 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:42.044 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:42.044 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:42.044 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:42.044 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:42.044 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:42.044 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:42.044 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:42.044 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:42.044 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:42.044 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:42.044 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:42.044 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:42.044 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:42.044 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:42.044 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:42.044 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:42.044 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:42.044 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:42.044 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:42.044 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:07:42.044 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:42.044 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:42.044 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:42.044 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:42.044 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:42.044 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:42.044 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:42.044 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:07:42.044 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:42.044 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:42.044 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:42.044 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:42.044 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:42.044 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:42.044 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:42.044 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:42.044 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:42.044 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:42.044 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:42.044 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:42.044 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:42.044 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:42.044 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:42.044 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:42.045 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:42.045 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:42.045 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:42.045 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms 00:07:42.045 00:07:42.045 --- 10.0.0.2 ping statistics --- 00:07:42.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:42.045 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:07:42.045 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:42.045 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:42.045 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:07:42.045 00:07:42.045 --- 10.0.0.1 ping statistics --- 00:07:42.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:42.045 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:07:42.045 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:42.045 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:07:42.045 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:42.045 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:42.045 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:42.045 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:42.045 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:42.045 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:42.045 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:42.045 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:42.045 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:42.045 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:42.045 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:42.045 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2854257 00:07:42.045 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:42.045 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2854257 00:07:42.045 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 2854257 ']' 00:07:42.045 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:42.045 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:42.045 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:42.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:42.045 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:42.045 09:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:42.045 [2024-11-17 09:07:46.716006] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
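For reference, the nvmf_tcp_init trace above reduces to a short sequence of standard iproute2/iptables commands. This is a condensed sketch, not verbatim log output: cvl_0_0 and cvl_0_1 are the two e810 ports detected earlier in the trace, and the iptables comment string is abbreviated here. The nvmf_tgt launched inside the namespace continues its startup in the log lines below.

    # move one port into a private namespace and address both ends
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # allow NVMe/TCP traffic in on the initiator-side port, then verify reachability both ways
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1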
00:07:42.045 [2024-11-17 09:07:46.716147] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:42.045 [2024-11-17 09:07:46.874999] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:42.045 [2024-11-17 09:07:47.016570] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:42.045 [2024-11-17 09:07:47.016665] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:42.045 [2024-11-17 09:07:47.016691] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:42.045 [2024-11-17 09:07:47.016715] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:42.045 [2024-11-17 09:07:47.016735] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:42.045 [2024-11-17 09:07:47.019443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:42.045 [2024-11-17 09:07:47.019480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.045 [2024-11-17 09:07:47.019484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:42.980 09:07:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:42.980 09:07:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:07:42.980 09:07:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:42.980 09:07:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:42.980 09:07:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:42.980 09:07:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:42.980 09:07:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:42.980 [2024-11-17 09:07:47.969718] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:43.237 09:07:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:43.496 09:07:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:43.496 09:07:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:44.062 09:07:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:44.062 09:07:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:44.062 09:07:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:44.628 09:07:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=177a5763-e207-4365-ac9b-a22cccd57ff5 00:07:44.628 09:07:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 177a5763-e207-4365-ac9b-a22cccd57ff5 lvol 20 00:07:44.628 09:07:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=d0c2bc92-ae3a-44e6-a134-161ae7d1140d 00:07:44.628 09:07:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:44.885 09:07:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d0c2bc92-ae3a-44e6-a134-161ae7d1140d 00:07:45.451 09:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:45.451 [2024-11-17 09:07:50.445283] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:45.708 09:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:45.967 09:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2854811 00:07:45.967 09:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:45.967 09:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:46.975 09:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot d0c2bc92-ae3a-44e6-a134-161ae7d1140d MY_SNAPSHOT 00:07:47.233 09:07:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=678ddaa5-bd2b-4f52-b44f-01674844410b 00:07:47.233 09:07:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize d0c2bc92-ae3a-44e6-a134-161ae7d1140d 30 00:07:47.799 09:07:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 678ddaa5-bd2b-4f52-b44f-01674844410b MY_CLONE 00:07:48.057 09:07:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=88cf50c2-8cd1-4ee2-81e9-62342402bee9 00:07:48.057 09:07:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 88cf50c2-8cd1-4ee2-81e9-62342402bee9 00:07:48.994 09:07:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2854811 00:07:57.104 Initializing NVMe Controllers 00:07:57.104 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:57.104 Controller IO queue size 128, less than required. 00:07:57.104 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
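For readability, the nvmf_lvol.sh steps traced above amount to the following RPC sequence. This is a condensed sketch, assuming rpc.py is shorthand for scripts/rpc.py in the SPDK tree and that $LVS, $LVOL, $SNAP and $CLONE stand for the UUIDs printed in the log; the spdk_nvme_perf output it produced continues in the lines below.

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512                        # Malloc0
    rpc.py bdev_malloc_create 64 512                        # Malloc1
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    LVS=$(rpc.py bdev_lvol_create_lvstore raid0 lvs)        # lvstore on top of the RAID0 bdev
    LVOL=$(rpc.py bdev_lvol_create -u $LVS lvol 20)         # create 'lvol' (size argument 20, as in the trace)
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 $LVOL
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &  # I/O load in the background
    SNAP=$(rpc.py bdev_lvol_snapshot $LVOL MY_SNAPSHOT)     # snapshot while I/O is running
    rpc.py bdev_lvol_resize $LVOL 30
    CLONE=$(rpc.py bdev_lvol_clone $SNAP MY_CLONE)
    rpc.py bdev_lvol_inflate $CLONE                         # make the clone independent of its snapshot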
00:07:57.104 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:57.104 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:57.104 Initialization complete. Launching workers. 00:07:57.104 ======================================================== 00:07:57.104 Latency(us) 00:07:57.104 Device Information : IOPS MiB/s Average min max 00:07:57.104 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 7973.20 31.15 16062.56 337.49 192693.82 00:07:57.104 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 7936.70 31.00 16142.64 3362.62 149565.06 00:07:57.104 ======================================================== 00:07:57.104 Total : 15909.90 62.15 16102.51 337.49 192693.82 00:07:57.104 00:07:57.104 09:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:57.104 09:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d0c2bc92-ae3a-44e6-a134-161ae7d1140d 00:07:57.104 09:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 177a5763-e207-4365-ac9b-a22cccd57ff5 00:07:57.362 09:08:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:57.363 09:08:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:57.363 09:08:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:57.363 09:08:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:57.363 09:08:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:07:57.363 09:08:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:57.363 09:08:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:07:57.363 09:08:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:57.363 09:08:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:57.363 rmmod nvme_tcp 00:07:57.363 rmmod nvme_fabrics 00:07:57.363 rmmod nvme_keyring 00:07:57.363 09:08:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:57.363 09:08:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:07:57.363 09:08:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:07:57.363 09:08:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 2854257 ']' 00:07:57.363 09:08:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 2854257 00:07:57.363 09:08:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 2854257 ']' 00:07:57.363 09:08:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 2854257 00:07:57.363 09:08:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:07:57.363 09:08:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:57.363 09:08:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2854257 00:07:57.363 09:08:02 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:57.363 09:08:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:57.363 09:08:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2854257' 00:07:57.363 killing process with pid 2854257 00:07:57.363 09:08:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 2854257 00:07:57.363 09:08:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 2854257 00:07:58.736 09:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:58.736 09:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:58.736 09:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:58.736 09:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:07:58.736 09:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:07:58.736 09:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:58.736 09:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:07:58.736 09:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:58.736 09:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:58.736 09:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:58.736 09:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:58.736 09:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:01.272 09:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:01.272 00:08:01.272 real 0m21.404s 00:08:01.272 user 1m11.670s 00:08:01.272 sys 0m5.390s 00:08:01.272 09:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:01.272 09:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:01.272 ************************************ 00:08:01.272 END TEST nvmf_lvol 00:08:01.272 ************************************ 00:08:01.272 09:08:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:01.272 09:08:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:01.272 09:08:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:01.272 09:08:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:01.272 ************************************ 00:08:01.272 START TEST nvmf_lvs_grow 00:08:01.272 ************************************ 00:08:01.272 09:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:01.272 * Looking for test storage... 
00:08:01.272 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:01.272 09:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:01.272 09:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:08:01.272 09:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:01.272 09:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:01.272 09:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:01.272 09:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:01.272 09:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:01.272 09:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:08:01.272 09:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:08:01.272 09:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:08:01.272 09:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:08:01.272 09:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:08:01.272 09:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:08:01.272 09:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:08:01.272 09:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:01.272 09:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:08:01.272 09:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:08:01.272 09:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:01.272 09:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:01.272 09:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:08:01.272 09:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:08:01.272 09:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:01.272 09:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:08:01.272 09:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:08:01.272 09:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:08:01.272 09:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:08:01.272 09:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:01.272 09:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:08:01.272 09:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:08:01.272 09:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:01.272 09:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:01.272 09:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:08:01.272 09:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:01.272 09:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:01.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.272 --rc genhtml_branch_coverage=1 00:08:01.272 --rc genhtml_function_coverage=1 00:08:01.272 --rc genhtml_legend=1 00:08:01.272 --rc geninfo_all_blocks=1 00:08:01.272 --rc geninfo_unexecuted_blocks=1 00:08:01.272 00:08:01.272 ' 00:08:01.272 09:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:01.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.272 --rc genhtml_branch_coverage=1 00:08:01.272 --rc genhtml_function_coverage=1 00:08:01.272 --rc genhtml_legend=1 00:08:01.272 --rc geninfo_all_blocks=1 00:08:01.272 --rc geninfo_unexecuted_blocks=1 00:08:01.272 00:08:01.272 ' 00:08:01.272 09:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:01.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.272 --rc genhtml_branch_coverage=1 00:08:01.272 --rc genhtml_function_coverage=1 00:08:01.272 --rc genhtml_legend=1 00:08:01.272 --rc geninfo_all_blocks=1 00:08:01.272 --rc geninfo_unexecuted_blocks=1 00:08:01.272 00:08:01.272 ' 00:08:01.272 09:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:01.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.273 --rc genhtml_branch_coverage=1 00:08:01.273 --rc genhtml_function_coverage=1 00:08:01.273 --rc genhtml_legend=1 00:08:01.273 --rc geninfo_all_blocks=1 00:08:01.273 --rc geninfo_unexecuted_blocks=1 00:08:01.273 00:08:01.273 ' 00:08:01.273 09:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:01.273 09:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:01.273 09:08:05 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:01.273 09:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:01.273 09:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:01.273 09:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:01.273 09:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:01.273 09:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:01.273 09:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:01.273 09:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:01.273 09:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:01.273 09:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:01.273 09:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:01.273 09:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:01.273 09:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:01.273 09:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:01.273 09:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:01.273 09:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:01.273 09:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:01.273 09:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:08:01.273 09:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:01.273 09:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:01.273 09:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:01.273 09:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.273 09:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.273 09:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.273 09:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:01.273 09:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.273 09:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:08:01.273 09:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:01.273 09:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:01.273 09:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:01.273 09:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:01.273 09:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:01.273 09:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:01.273 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:01.273 09:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:01.273 09:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:01.273 09:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:01.273 09:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:01.273 09:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:01.273 09:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:01.273 09:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:01.273 09:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:01.273 09:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:01.273 09:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:01.273 09:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:01.273 09:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:01.273 09:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:01.273 09:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:01.273 09:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:01.273 09:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:01.273 09:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:08:01.273 09:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:03.176 09:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:03.176 09:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:08:03.176 09:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:03.176 09:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:03.176 09:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:03.176 09:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:03.176 09:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:03.176 09:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:08:03.176 09:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:03.176 09:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:08:03.176 09:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:08:03.176 09:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:08:03.176 09:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:08:03.176 09:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:08:03.176 09:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:08:03.176 09:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:03.176 09:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:03.176 09:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:03.176 09:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:03.176 09:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:03.176 09:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:03.176 09:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:03.176 09:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:03.176 09:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:03.176 09:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:03.176 09:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:03.176 09:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:03.176 09:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:03.176 09:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:03.176 09:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:03.176 09:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:03.176 09:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:03.176 09:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:03.176 09:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:03.176 09:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:03.176 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:03.176 09:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:03.176 09:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:03.176 09:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:03.176 09:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:03.176 09:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:03.176 09:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:03.176 09:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:03.176 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:03.176 09:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:03.176 09:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:03.176 09:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:03.176 09:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:03.176 09:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:03.176 09:08:07 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:03.176 09:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:03.176 09:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:03.176 09:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:03.176 09:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:03.176 09:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:03.176 09:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:03.176 09:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:03.176 09:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:03.176 09:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:03.176 09:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:03.176 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:03.176 09:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:03.176 09:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:03.176 09:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:03.176 09:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:03.176 09:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:03.176 09:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:03.176 09:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:03.176 09:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:03.176 09:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:03.176 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:03.176 09:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:03.176 09:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:03.176 09:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:08:03.176 09:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:03.176 09:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:03.176 09:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:03.176 09:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:03.176 09:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:03.176 09:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:03.176 09:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:03.176 09:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:03.176 09:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:03.177 09:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:03.177 09:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:03.177 09:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:03.177 09:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:03.177 09:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:03.177 09:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:03.177 09:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:03.177 09:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:03.177 09:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:03.177 09:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:03.177 09:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:03.177 09:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:03.177 09:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:03.177 09:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:03.177 09:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:03.177 09:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:03.177 09:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:03.177 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:03.177 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.339 ms 00:08:03.177 00:08:03.177 --- 10.0.0.2 ping statistics --- 00:08:03.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:03.177 rtt min/avg/max/mdev = 0.339/0.339/0.339/0.000 ms 00:08:03.177 09:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:03.177 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:03.177 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:08:03.177 00:08:03.177 --- 10.0.0.1 ping statistics --- 00:08:03.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:03.177 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:08:03.177 09:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:03.177 09:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:08:03.177 09:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:03.177 09:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:03.177 09:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:03.177 09:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:03.177 09:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:03.177 09:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:03.177 09:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:03.177 09:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:03.177 09:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:03.177 09:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:03.177 09:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:03.177 09:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=2858236 00:08:03.177 09:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:03.177 09:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 2858236 00:08:03.177 09:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 2858236 ']' 00:08:03.177 09:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:03.177 09:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:03.177 09:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:03.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:03.177 09:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:03.177 09:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:03.177 [2024-11-17 09:08:08.150118] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:08:03.177 [2024-11-17 09:08:08.150263] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:03.435 [2024-11-17 09:08:08.312960] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.693 [2024-11-17 09:08:08.453322] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:03.693 [2024-11-17 09:08:08.453419] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:03.693 [2024-11-17 09:08:08.453447] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:03.693 [2024-11-17 09:08:08.453472] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:03.693 [2024-11-17 09:08:08.453491] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:03.693 [2024-11-17 09:08:08.455137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.259 09:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:04.259 09:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:08:04.259 09:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:04.259 09:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:04.259 09:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:04.259 09:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:04.259 09:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:04.517 [2024-11-17 09:08:09.455127] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:04.517 09:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:04.517 09:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:04.517 09:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:04.517 09:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:04.517 ************************************ 00:08:04.517 START TEST lvs_grow_clean 00:08:04.517 ************************************ 00:08:04.517 09:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:08:04.517 09:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:04.517 09:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:04.517 09:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:04.517 09:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:04.517 09:08:09 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:04.517 09:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:04.517 09:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:04.517 09:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:04.517 09:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:05.083 09:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:05.083 09:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:05.341 09:08:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=8bf60ed9-efdb-4d3e-a4f9-1ad098127460 00:08:05.341 09:08:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8bf60ed9-efdb-4d3e-a4f9-1ad098127460 00:08:05.341 09:08:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:05.600 09:08:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:05.600 09:08:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:05.600 09:08:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8bf60ed9-efdb-4d3e-a4f9-1ad098127460 lvol 150 00:08:05.858 09:08:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=e2b6efa7-c4b3-42d1-8e03-bb9d325e4959 00:08:05.858 09:08:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:05.858 09:08:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:06.116 [2024-11-17 09:08:10.914337] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:06.116 [2024-11-17 09:08:10.914507] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:06.116 true 00:08:06.116 09:08:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
8bf60ed9-efdb-4d3e-a4f9-1ad098127460 00:08:06.116 09:08:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:06.375 09:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:06.375 09:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:06.633 09:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e2b6efa7-c4b3-42d1-8e03-bb9d325e4959 00:08:06.891 09:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:07.149 [2024-11-17 09:08:12.054074] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:07.149 09:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:07.408 09:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2858807 00:08:07.408 09:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:07.408 09:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2858807 /var/tmp/bdevperf.sock 00:08:07.408 09:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 2858807 ']' 00:08:07.408 09:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:07.408 09:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:07.408 09:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:07.408 09:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:07.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:07.408 09:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:07.408 09:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:07.666 [2024-11-17 09:08:12.430494] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
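The lvs_grow_clean case traced above, together with the lines that follow, exercises growing an lvstore whose backing AIO file is enlarged underneath it. A condensed sketch of the flow is given here; rpc.py is again shorthand for scripts/rpc.py, $LVS and $LVOL stand for the UUIDs in the log, the aio_bdev file path and the bdevperf binary path are abbreviated, and the attach/grow/query steps appear in the log further below.

    truncate -s 200M /path/to/aio_bdev                      # backing file (full path abbreviated)
    rpc.py bdev_aio_create /path/to/aio_bdev aio_bdev 4096
    LVS=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
          --md-pages-per-cluster-ratio 300 aio_bdev lvs)     # starts with 49 data clusters
    LVOL=$(rpc.py bdev_lvol_create -u $LVS lvol 150)
    truncate -s 400M /path/to/aio_bdev                       # grow the file underneath the bdev
    rpc.py bdev_aio_rescan aio_bdev                          # pick up the new block count (51200 -> 102400)
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 $LVOL
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
          -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    rpc.py bdev_lvol_grow_lvstore -u $LVS                    # lvstore expands into the new space
    rpc.py bdev_lvol_get_lvstores -u $LVS | jq -r '.[0].total_data_clusters'   # expect 99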
00:08:07.666 [2024-11-17 09:08:12.430641] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2858807 ] 00:08:07.666 [2024-11-17 09:08:12.572628] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.924 [2024-11-17 09:08:12.709271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:08.490 09:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:08.490 09:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:08:08.490 09:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:09.055 Nvme0n1 00:08:09.056 09:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:09.314 [ 00:08:09.314 { 00:08:09.314 "name": "Nvme0n1", 00:08:09.314 "aliases": [ 00:08:09.314 "e2b6efa7-c4b3-42d1-8e03-bb9d325e4959" 00:08:09.314 ], 00:08:09.314 "product_name": "NVMe disk", 00:08:09.314 "block_size": 4096, 00:08:09.314 "num_blocks": 38912, 00:08:09.314 "uuid": "e2b6efa7-c4b3-42d1-8e03-bb9d325e4959", 00:08:09.314 "numa_id": 0, 00:08:09.314 "assigned_rate_limits": { 00:08:09.314 "rw_ios_per_sec": 0, 00:08:09.314 "rw_mbytes_per_sec": 0, 00:08:09.314 "r_mbytes_per_sec": 0, 00:08:09.314 "w_mbytes_per_sec": 0 00:08:09.314 }, 00:08:09.314 "claimed": false, 00:08:09.314 "zoned": false, 00:08:09.314 "supported_io_types": { 00:08:09.314 "read": true, 00:08:09.314 "write": true, 00:08:09.314 "unmap": true, 00:08:09.314 "flush": true, 00:08:09.314 "reset": true, 00:08:09.314 "nvme_admin": true, 00:08:09.314 "nvme_io": true, 00:08:09.314 "nvme_io_md": false, 00:08:09.314 "write_zeroes": true, 00:08:09.314 "zcopy": false, 00:08:09.314 "get_zone_info": false, 00:08:09.314 "zone_management": false, 00:08:09.314 "zone_append": false, 00:08:09.314 "compare": true, 00:08:09.314 "compare_and_write": true, 00:08:09.314 "abort": true, 00:08:09.314 "seek_hole": false, 00:08:09.314 "seek_data": false, 00:08:09.314 "copy": true, 00:08:09.314 "nvme_iov_md": false 00:08:09.314 }, 00:08:09.314 "memory_domains": [ 00:08:09.314 { 00:08:09.314 "dma_device_id": "system", 00:08:09.314 "dma_device_type": 1 00:08:09.314 } 00:08:09.314 ], 00:08:09.314 "driver_specific": { 00:08:09.314 "nvme": [ 00:08:09.314 { 00:08:09.314 "trid": { 00:08:09.314 "trtype": "TCP", 00:08:09.314 "adrfam": "IPv4", 00:08:09.314 "traddr": "10.0.0.2", 00:08:09.314 "trsvcid": "4420", 00:08:09.314 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:09.314 }, 00:08:09.314 "ctrlr_data": { 00:08:09.314 "cntlid": 1, 00:08:09.314 "vendor_id": "0x8086", 00:08:09.314 "model_number": "SPDK bdev Controller", 00:08:09.314 "serial_number": "SPDK0", 00:08:09.314 "firmware_revision": "25.01", 00:08:09.314 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:09.314 "oacs": { 00:08:09.314 "security": 0, 00:08:09.314 "format": 0, 00:08:09.314 "firmware": 0, 00:08:09.314 "ns_manage": 0 00:08:09.314 }, 00:08:09.314 "multi_ctrlr": true, 00:08:09.314 
"ana_reporting": false 00:08:09.314 }, 00:08:09.314 "vs": { 00:08:09.314 "nvme_version": "1.3" 00:08:09.314 }, 00:08:09.314 "ns_data": { 00:08:09.314 "id": 1, 00:08:09.314 "can_share": true 00:08:09.314 } 00:08:09.314 } 00:08:09.314 ], 00:08:09.314 "mp_policy": "active_passive" 00:08:09.314 } 00:08:09.314 } 00:08:09.314 ] 00:08:09.314 09:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2859072 00:08:09.314 09:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:09.314 09:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:09.314 Running I/O for 10 seconds... 00:08:10.688 Latency(us) 00:08:10.688 [2024-11-17T08:08:15.701Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:10.688 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:10.688 Nvme0n1 : 1.00 10669.00 41.68 0.00 0.00 0.00 0.00 0.00 00:08:10.688 [2024-11-17T08:08:15.701Z] =================================================================================================================== 00:08:10.688 [2024-11-17T08:08:15.701Z] Total : 10669.00 41.68 0.00 0.00 0.00 0.00 0.00 00:08:10.688 00:08:11.254 09:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 8bf60ed9-efdb-4d3e-a4f9-1ad098127460 00:08:11.512 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:11.512 Nvme0n1 : 2.00 10766.00 42.05 0.00 0.00 0.00 0.00 0.00 00:08:11.512 [2024-11-17T08:08:16.525Z] =================================================================================================================== 00:08:11.512 [2024-11-17T08:08:16.525Z] Total : 10766.00 42.05 0.00 0.00 0.00 0.00 0.00 00:08:11.512 00:08:11.512 true 00:08:11.512 09:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8bf60ed9-efdb-4d3e-a4f9-1ad098127460 00:08:11.512 09:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:11.771 09:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:11.771 09:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:11.771 09:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2859072 00:08:12.337 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:12.337 Nvme0n1 : 3.00 10775.67 42.09 0.00 0.00 0.00 0.00 0.00 00:08:12.337 [2024-11-17T08:08:17.350Z] =================================================================================================================== 00:08:12.337 [2024-11-17T08:08:17.350Z] Total : 10775.67 42.09 0.00 0.00 0.00 0.00 0.00 00:08:12.337 00:08:13.270 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:13.270 Nvme0n1 : 4.00 10780.50 42.11 0.00 0.00 0.00 0.00 0.00 00:08:13.270 [2024-11-17T08:08:18.283Z] 
=================================================================================================================== 00:08:13.270 [2024-11-17T08:08:18.283Z] Total : 10780.50 42.11 0.00 0.00 0.00 0.00 0.00 00:08:13.270 00:08:14.644 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:14.644 Nvme0n1 : 5.00 10796.40 42.17 0.00 0.00 0.00 0.00 0.00 00:08:14.644 [2024-11-17T08:08:19.657Z] =================================================================================================================== 00:08:14.644 [2024-11-17T08:08:19.657Z] Total : 10796.40 42.17 0.00 0.00 0.00 0.00 0.00 00:08:14.644 00:08:15.578 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:15.578 Nvme0n1 : 6.00 10796.17 42.17 0.00 0.00 0.00 0.00 0.00 00:08:15.578 [2024-11-17T08:08:20.591Z] =================================================================================================================== 00:08:15.578 [2024-11-17T08:08:20.591Z] Total : 10796.17 42.17 0.00 0.00 0.00 0.00 0.00 00:08:15.578 00:08:16.513 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:16.513 Nvme0n1 : 7.00 10805.29 42.21 0.00 0.00 0.00 0.00 0.00 00:08:16.513 [2024-11-17T08:08:21.526Z] =================================================================================================================== 00:08:16.513 [2024-11-17T08:08:21.526Z] Total : 10805.29 42.21 0.00 0.00 0.00 0.00 0.00 00:08:16.513 00:08:17.446 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:17.446 Nvme0n1 : 8.00 10812.50 42.24 0.00 0.00 0.00 0.00 0.00 00:08:17.446 [2024-11-17T08:08:22.459Z] =================================================================================================================== 00:08:17.446 [2024-11-17T08:08:22.459Z] Total : 10812.50 42.24 0.00 0.00 0.00 0.00 0.00 00:08:17.446 00:08:18.438 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:18.438 Nvme0n1 : 9.00 10824.67 42.28 0.00 0.00 0.00 0.00 0.00 00:08:18.438 [2024-11-17T08:08:23.451Z] =================================================================================================================== 00:08:18.438 [2024-11-17T08:08:23.451Z] Total : 10824.67 42.28 0.00 0.00 0.00 0.00 0.00 00:08:18.438 00:08:19.399 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:19.399 Nvme0n1 : 10.00 10847.10 42.37 0.00 0.00 0.00 0.00 0.00 00:08:19.399 [2024-11-17T08:08:24.413Z] =================================================================================================================== 00:08:19.400 [2024-11-17T08:08:24.413Z] Total : 10847.10 42.37 0.00 0.00 0.00 0.00 0.00 00:08:19.400 00:08:19.400 00:08:19.400 Latency(us) 00:08:19.400 [2024-11-17T08:08:24.413Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:19.400 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:19.400 Nvme0n1 : 10.01 10850.27 42.38 0.00 0.00 11790.27 5097.24 22719.15 00:08:19.400 [2024-11-17T08:08:24.413Z] =================================================================================================================== 00:08:19.400 [2024-11-17T08:08:24.413Z] Total : 10850.27 42.38 0.00 0.00 11790.27 5097.24 22719.15 00:08:19.400 { 00:08:19.400 "results": [ 00:08:19.400 { 00:08:19.400 "job": "Nvme0n1", 00:08:19.400 "core_mask": "0x2", 00:08:19.400 "workload": "randwrite", 00:08:19.400 "status": "finished", 00:08:19.400 "queue_depth": 128, 00:08:19.400 "io_size": 4096, 00:08:19.400 
"runtime": 10.008872, 00:08:19.400 "iops": 10850.273637229051, 00:08:19.400 "mibps": 42.38388139542598, 00:08:19.400 "io_failed": 0, 00:08:19.400 "io_timeout": 0, 00:08:19.400 "avg_latency_us": 11790.270428054551, 00:08:19.400 "min_latency_us": 5097.2444444444445, 00:08:19.400 "max_latency_us": 22719.146666666667 00:08:19.400 } 00:08:19.400 ], 00:08:19.400 "core_count": 1 00:08:19.400 } 00:08:19.400 09:08:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2858807 00:08:19.400 09:08:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 2858807 ']' 00:08:19.400 09:08:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 2858807 00:08:19.400 09:08:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:08:19.400 09:08:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:19.400 09:08:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2858807 00:08:19.400 09:08:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:19.400 09:08:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:19.400 09:08:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2858807' 00:08:19.400 killing process with pid 2858807 00:08:19.400 09:08:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 2858807 00:08:19.400 Received shutdown signal, test time was about 10.000000 seconds 00:08:19.400 00:08:19.400 Latency(us) 00:08:19.400 [2024-11-17T08:08:24.413Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:19.400 [2024-11-17T08:08:24.413Z] =================================================================================================================== 00:08:19.400 [2024-11-17T08:08:24.413Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:19.400 09:08:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 2858807 00:08:20.333 09:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:20.591 09:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:20.849 09:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8bf60ed9-efdb-4d3e-a4f9-1ad098127460 00:08:20.849 09:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:21.108 09:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:21.108 09:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:21.108 09:08:26 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:21.368 [2024-11-17 09:08:26.329564] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:21.368 09:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8bf60ed9-efdb-4d3e-a4f9-1ad098127460 00:08:21.368 09:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:08:21.368 09:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8bf60ed9-efdb-4d3e-a4f9-1ad098127460 00:08:21.368 09:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:21.626 09:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:21.626 09:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:21.627 09:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:21.627 09:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:21.627 09:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:21.627 09:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:21.627 09:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:21.627 09:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8bf60ed9-efdb-4d3e-a4f9-1ad098127460 00:08:21.885 request: 00:08:21.885 { 00:08:21.885 "uuid": "8bf60ed9-efdb-4d3e-a4f9-1ad098127460", 00:08:21.885 "method": "bdev_lvol_get_lvstores", 00:08:21.885 "req_id": 1 00:08:21.885 } 00:08:21.885 Got JSON-RPC error response 00:08:21.885 response: 00:08:21.885 { 00:08:21.885 "code": -19, 00:08:21.885 "message": "No such device" 00:08:21.885 } 00:08:21.885 09:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:08:21.885 09:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:21.885 09:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:21.885 09:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:21.885 09:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:22.143 aio_bdev 00:08:22.143 09:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev e2b6efa7-c4b3-42d1-8e03-bb9d325e4959 00:08:22.143 09:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=e2b6efa7-c4b3-42d1-8e03-bb9d325e4959 00:08:22.143 09:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:22.143 09:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:08:22.143 09:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:22.143 09:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:22.143 09:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:22.401 09:08:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e2b6efa7-c4b3-42d1-8e03-bb9d325e4959 -t 2000 00:08:22.660 [ 00:08:22.660 { 00:08:22.660 "name": "e2b6efa7-c4b3-42d1-8e03-bb9d325e4959", 00:08:22.660 "aliases": [ 00:08:22.660 "lvs/lvol" 00:08:22.660 ], 00:08:22.660 "product_name": "Logical Volume", 00:08:22.660 "block_size": 4096, 00:08:22.660 "num_blocks": 38912, 00:08:22.660 "uuid": "e2b6efa7-c4b3-42d1-8e03-bb9d325e4959", 00:08:22.660 "assigned_rate_limits": { 00:08:22.660 "rw_ios_per_sec": 0, 00:08:22.660 "rw_mbytes_per_sec": 0, 00:08:22.660 "r_mbytes_per_sec": 0, 00:08:22.660 "w_mbytes_per_sec": 0 00:08:22.660 }, 00:08:22.660 "claimed": false, 00:08:22.660 "zoned": false, 00:08:22.660 "supported_io_types": { 00:08:22.660 "read": true, 00:08:22.660 "write": true, 00:08:22.660 "unmap": true, 00:08:22.660 "flush": false, 00:08:22.660 "reset": true, 00:08:22.660 "nvme_admin": false, 00:08:22.660 "nvme_io": false, 00:08:22.660 "nvme_io_md": false, 00:08:22.660 "write_zeroes": true, 00:08:22.660 "zcopy": false, 00:08:22.660 "get_zone_info": false, 00:08:22.660 "zone_management": false, 00:08:22.660 "zone_append": false, 00:08:22.660 "compare": false, 00:08:22.660 "compare_and_write": false, 00:08:22.660 "abort": false, 00:08:22.660 "seek_hole": true, 00:08:22.660 "seek_data": true, 00:08:22.660 "copy": false, 00:08:22.660 "nvme_iov_md": false 00:08:22.660 }, 00:08:22.660 "driver_specific": { 00:08:22.660 "lvol": { 00:08:22.660 "lvol_store_uuid": "8bf60ed9-efdb-4d3e-a4f9-1ad098127460", 00:08:22.660 "base_bdev": "aio_bdev", 00:08:22.660 "thin_provision": false, 00:08:22.660 "num_allocated_clusters": 38, 00:08:22.660 "snapshot": false, 00:08:22.660 "clone": false, 00:08:22.660 "esnap_clone": false 00:08:22.660 } 00:08:22.660 } 00:08:22.660 } 00:08:22.660 ] 00:08:22.660 09:08:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:08:22.660 09:08:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8bf60ed9-efdb-4d3e-a4f9-1ad098127460 00:08:22.660 
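The rediscovery check running at this point reduces to the sequence below (commands as in the trace; $SPDK and the lvstore UUID as above). Deleting the AIO bdev closes the lvstore, so the first bdev_lvol_get_lvstores is expected to fail with -19; once the bdev is re-created and examined, the same lvstore and lvol reappear and the cluster counts are re-verified.

  $SPDK/scripts/rpc.py bdev_aio_delete aio_bdev                                        # closes lvstore "lvs"
  $SPDK/scripts/rpc.py bdev_lvol_get_lvstores -u 8bf60ed9-efdb-4d3e-a4f9-1ad098127460  # fails: -19, No such device
  $SPDK/scripts/rpc.py bdev_aio_create $SPDK/test/nvmf/target/aio_bdev aio_bdev 4096   # re-create the backing bdev
  $SPDK/scripts/rpc.py bdev_wait_for_examine                                           # lvstore/lvol are re-examined
  $SPDK/scripts/rpc.py bdev_lvol_get_lvstores -u 8bf60ed9-efdb-4d3e-a4f9-1ad098127460 | jq -r '.[0].free_clusters'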
09:08:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:22.918 09:08:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:22.918 09:08:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8bf60ed9-efdb-4d3e-a4f9-1ad098127460 00:08:22.918 09:08:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:23.176 09:08:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:23.176 09:08:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e2b6efa7-c4b3-42d1-8e03-bb9d325e4959 00:08:23.435 09:08:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8bf60ed9-efdb-4d3e-a4f9-1ad098127460 00:08:23.693 09:08:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:23.951 09:08:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:23.951 00:08:23.951 real 0m19.454s 00:08:23.951 user 0m19.188s 00:08:23.951 sys 0m1.878s 00:08:23.951 09:08:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:23.951 09:08:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:23.951 ************************************ 00:08:23.951 END TEST lvs_grow_clean 00:08:23.951 ************************************ 00:08:24.209 09:08:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:24.209 09:08:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:24.209 09:08:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:24.209 09:08:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:24.209 ************************************ 00:08:24.209 START TEST lvs_grow_dirty 00:08:24.209 ************************************ 00:08:24.209 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:08:24.209 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:24.209 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:24.209 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:24.209 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:24.209 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:24.209 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:24.209 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:24.209 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:24.209 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:24.467 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:24.467 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:24.724 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=8aa6ccdb-4b53-43bc-885b-154a0bc4acf6 00:08:24.724 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8aa6ccdb-4b53-43bc-885b-154a0bc4acf6 00:08:24.724 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:24.982 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:24.982 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:24.982 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8aa6ccdb-4b53-43bc-885b-154a0bc4acf6 lvol 150 00:08:25.240 09:08:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=fc38ee3d-194b-48c3-a83f-d9d880786c6e 00:08:25.240 09:08:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:25.240 09:08:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:25.498 [2024-11-17 09:08:30.479420] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:25.498 [2024-11-17 09:08:30.479558] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:25.498 true 00:08:25.498 09:08:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8aa6ccdb-4b53-43bc-885b-154a0bc4acf6 00:08:25.498 09:08:30 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:26.064 09:08:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:26.065 09:08:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:26.322 09:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 fc38ee3d-194b-48c3-a83f-d9d880786c6e 00:08:26.580 09:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:26.839 [2024-11-17 09:08:31.663283] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:26.839 09:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:27.097 09:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2861131 00:08:27.097 09:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:27.097 09:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:27.097 09:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2861131 /var/tmp/bdevperf.sock 00:08:27.097 09:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2861131 ']' 00:08:27.097 09:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:27.097 09:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:27.097 09:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:27.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:27.097 09:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:27.097 09:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:27.097 [2024-11-17 09:08:32.038508] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
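In the measurement phase that follows, the grow is deliberately issued while bdevperf I/O is in flight; condensed from the trace, the dirty run does roughly:

  $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &   # 10 s of randwrite against Nvme0n1
  run_test_pid=$!
  sleep 2
  $SPDK/scripts/rpc.py bdev_lvol_grow_lvstore -u 8aa6ccdb-4b53-43bc-885b-154a0bc4acf6  # grow while I/O runs
  $SPDK/scripts/rpc.py bdev_lvol_get_lvstores -u 8aa6ccdb-4b53-43bc-885b-154a0bc4acf6 | jq -r '.[0].total_data_clusters'  # expect 99
  wait $run_test_pid

$SPDK is the workspace shorthand introduced earlier; the pid bookkeeping is paraphrased from the script's run_test_pid handling rather than quoted verbatim from the trace.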
00:08:27.097 [2024-11-17 09:08:32.038642] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2861131 ] 00:08:27.355 [2024-11-17 09:08:32.182680] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.355 [2024-11-17 09:08:32.318457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:28.288 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:28.288 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:28.288 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:28.546 Nvme0n1 00:08:28.811 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:28.811 [ 00:08:28.811 { 00:08:28.811 "name": "Nvme0n1", 00:08:28.811 "aliases": [ 00:08:28.811 "fc38ee3d-194b-48c3-a83f-d9d880786c6e" 00:08:28.811 ], 00:08:28.811 "product_name": "NVMe disk", 00:08:28.811 "block_size": 4096, 00:08:28.811 "num_blocks": 38912, 00:08:28.811 "uuid": "fc38ee3d-194b-48c3-a83f-d9d880786c6e", 00:08:28.811 "numa_id": 0, 00:08:28.811 "assigned_rate_limits": { 00:08:28.811 "rw_ios_per_sec": 0, 00:08:28.811 "rw_mbytes_per_sec": 0, 00:08:28.811 "r_mbytes_per_sec": 0, 00:08:28.811 "w_mbytes_per_sec": 0 00:08:28.811 }, 00:08:28.811 "claimed": false, 00:08:28.811 "zoned": false, 00:08:28.811 "supported_io_types": { 00:08:28.811 "read": true, 00:08:28.811 "write": true, 00:08:28.811 "unmap": true, 00:08:28.811 "flush": true, 00:08:28.811 "reset": true, 00:08:28.811 "nvme_admin": true, 00:08:28.811 "nvme_io": true, 00:08:28.811 "nvme_io_md": false, 00:08:28.811 "write_zeroes": true, 00:08:28.811 "zcopy": false, 00:08:28.811 "get_zone_info": false, 00:08:28.811 "zone_management": false, 00:08:28.811 "zone_append": false, 00:08:28.811 "compare": true, 00:08:28.811 "compare_and_write": true, 00:08:28.811 "abort": true, 00:08:28.811 "seek_hole": false, 00:08:28.811 "seek_data": false, 00:08:28.811 "copy": true, 00:08:28.811 "nvme_iov_md": false 00:08:28.811 }, 00:08:28.811 "memory_domains": [ 00:08:28.811 { 00:08:28.811 "dma_device_id": "system", 00:08:28.811 "dma_device_type": 1 00:08:28.811 } 00:08:28.811 ], 00:08:28.811 "driver_specific": { 00:08:28.811 "nvme": [ 00:08:28.811 { 00:08:28.811 "trid": { 00:08:28.811 "trtype": "TCP", 00:08:28.811 "adrfam": "IPv4", 00:08:28.811 "traddr": "10.0.0.2", 00:08:28.811 "trsvcid": "4420", 00:08:28.811 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:28.811 }, 00:08:28.811 "ctrlr_data": { 00:08:28.811 "cntlid": 1, 00:08:28.811 "vendor_id": "0x8086", 00:08:28.811 "model_number": "SPDK bdev Controller", 00:08:28.811 "serial_number": "SPDK0", 00:08:28.811 "firmware_revision": "25.01", 00:08:28.811 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:28.811 "oacs": { 00:08:28.811 "security": 0, 00:08:28.811 "format": 0, 00:08:28.811 "firmware": 0, 00:08:28.811 "ns_manage": 0 00:08:28.811 }, 00:08:28.811 "multi_ctrlr": true, 00:08:28.811 
"ana_reporting": false 00:08:28.811 }, 00:08:28.811 "vs": { 00:08:28.811 "nvme_version": "1.3" 00:08:28.811 }, 00:08:28.811 "ns_data": { 00:08:28.811 "id": 1, 00:08:28.811 "can_share": true 00:08:28.811 } 00:08:28.811 } 00:08:28.811 ], 00:08:28.811 "mp_policy": "active_passive" 00:08:28.811 } 00:08:28.811 } 00:08:28.811 ] 00:08:29.069 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2861401 00:08:29.069 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:29.069 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:29.069 Running I/O for 10 seconds... 00:08:30.003 Latency(us) 00:08:30.003 [2024-11-17T08:08:35.016Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:30.003 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:30.003 Nvme0n1 : 1.00 10415.00 40.68 0.00 0.00 0.00 0.00 0.00 00:08:30.003 [2024-11-17T08:08:35.016Z] =================================================================================================================== 00:08:30.003 [2024-11-17T08:08:35.016Z] Total : 10415.00 40.68 0.00 0.00 0.00 0.00 0.00 00:08:30.003 00:08:30.938 09:08:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 8aa6ccdb-4b53-43bc-885b-154a0bc4acf6 00:08:31.196 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:31.196 Nvme0n1 : 2.00 10510.50 41.06 0.00 0.00 0.00 0.00 0.00 00:08:31.196 [2024-11-17T08:08:36.209Z] =================================================================================================================== 00:08:31.196 [2024-11-17T08:08:36.209Z] Total : 10510.50 41.06 0.00 0.00 0.00 0.00 0.00 00:08:31.196 00:08:31.196 true 00:08:31.196 09:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8aa6ccdb-4b53-43bc-885b-154a0bc4acf6 00:08:31.196 09:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:31.454 09:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:31.454 09:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:31.454 09:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2861401 00:08:32.019 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:32.019 Nvme0n1 : 3.00 10563.00 41.26 0.00 0.00 0.00 0.00 0.00 00:08:32.019 [2024-11-17T08:08:37.032Z] =================================================================================================================== 00:08:32.019 [2024-11-17T08:08:37.032Z] Total : 10563.00 41.26 0.00 0.00 0.00 0.00 0.00 00:08:32.019 00:08:32.953 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:32.953 Nvme0n1 : 4.00 10621.00 41.49 0.00 0.00 0.00 0.00 0.00 00:08:32.953 [2024-11-17T08:08:37.966Z] 
=================================================================================================================== 00:08:32.953 [2024-11-17T08:08:37.966Z] Total : 10621.00 41.49 0.00 0.00 0.00 0.00 0.00 00:08:32.953 00:08:34.328 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:34.328 Nvme0n1 : 5.00 10655.80 41.62 0.00 0.00 0.00 0.00 0.00 00:08:34.328 [2024-11-17T08:08:39.341Z] =================================================================================================================== 00:08:34.328 [2024-11-17T08:08:39.341Z] Total : 10655.80 41.62 0.00 0.00 0.00 0.00 0.00 00:08:34.328 00:08:35.261 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:35.261 Nvme0n1 : 6.00 10700.17 41.80 0.00 0.00 0.00 0.00 0.00 00:08:35.261 [2024-11-17T08:08:40.274Z] =================================================================================================================== 00:08:35.261 [2024-11-17T08:08:40.274Z] Total : 10700.17 41.80 0.00 0.00 0.00 0.00 0.00 00:08:35.261 00:08:36.195 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:36.195 Nvme0n1 : 7.00 10731.86 41.92 0.00 0.00 0.00 0.00 0.00 00:08:36.195 [2024-11-17T08:08:41.208Z] =================================================================================================================== 00:08:36.195 [2024-11-17T08:08:41.208Z] Total : 10731.86 41.92 0.00 0.00 0.00 0.00 0.00 00:08:36.195 00:08:37.129 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:37.129 Nvme0n1 : 8.00 10755.62 42.01 0.00 0.00 0.00 0.00 0.00 00:08:37.129 [2024-11-17T08:08:42.142Z] =================================================================================================================== 00:08:37.129 [2024-11-17T08:08:42.142Z] Total : 10755.62 42.01 0.00 0.00 0.00 0.00 0.00 00:08:37.129 00:08:38.064 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:38.064 Nvme0n1 : 9.00 10788.22 42.14 0.00 0.00 0.00 0.00 0.00 00:08:38.064 [2024-11-17T08:08:43.077Z] =================================================================================================================== 00:08:38.064 [2024-11-17T08:08:43.077Z] Total : 10788.22 42.14 0.00 0.00 0.00 0.00 0.00 00:08:38.064 00:08:38.998 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:38.998 Nvme0n1 : 10.00 10801.60 42.19 0.00 0.00 0.00 0.00 0.00 00:08:38.998 [2024-11-17T08:08:44.011Z] =================================================================================================================== 00:08:38.998 [2024-11-17T08:08:44.011Z] Total : 10801.60 42.19 0.00 0.00 0.00 0.00 0.00 00:08:38.998 00:08:38.998 00:08:38.998 Latency(us) 00:08:38.998 [2024-11-17T08:08:44.011Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:38.998 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:38.998 Nvme0n1 : 10.00 10809.26 42.22 0.00 0.00 11835.21 7330.32 22719.15 00:08:38.998 [2024-11-17T08:08:44.011Z] =================================================================================================================== 00:08:38.998 [2024-11-17T08:08:44.011Z] Total : 10809.26 42.22 0.00 0.00 11835.21 7330.32 22719.15 00:08:38.998 { 00:08:38.998 "results": [ 00:08:38.998 { 00:08:38.998 "job": "Nvme0n1", 00:08:38.998 "core_mask": "0x2", 00:08:38.998 "workload": "randwrite", 00:08:38.998 "status": "finished", 00:08:38.998 "queue_depth": 128, 00:08:38.998 "io_size": 4096, 00:08:38.998 
"runtime": 10.004756, 00:08:38.998 "iops": 10809.259116364257, 00:08:38.998 "mibps": 42.22366842329788, 00:08:38.998 "io_failed": 0, 00:08:38.998 "io_timeout": 0, 00:08:38.998 "avg_latency_us": 11835.208099379155, 00:08:38.998 "min_latency_us": 7330.322962962963, 00:08:38.998 "max_latency_us": 22719.146666666667 00:08:38.998 } 00:08:38.998 ], 00:08:38.998 "core_count": 1 00:08:38.998 } 00:08:38.998 09:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2861131 00:08:38.998 09:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 2861131 ']' 00:08:38.998 09:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 2861131 00:08:38.998 09:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:08:38.998 09:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:38.998 09:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2861131 00:08:39.256 09:08:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:39.256 09:08:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:39.256 09:08:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2861131' 00:08:39.256 killing process with pid 2861131 00:08:39.256 09:08:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 2861131 00:08:39.256 Received shutdown signal, test time was about 10.000000 seconds 00:08:39.256 00:08:39.256 Latency(us) 00:08:39.256 [2024-11-17T08:08:44.269Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:39.256 [2024-11-17T08:08:44.269Z] =================================================================================================================== 00:08:39.256 [2024-11-17T08:08:44.269Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:39.256 09:08:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 2861131 00:08:40.190 09:08:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:40.448 09:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:40.706 09:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8aa6ccdb-4b53-43bc-885b-154a0bc4acf6 00:08:40.706 09:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:40.964 09:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:40.964 09:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:40.964 09:08:45 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2858236 00:08:40.964 09:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2858236 00:08:40.964 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2858236 Killed "${NVMF_APP[@]}" "$@" 00:08:40.964 09:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:40.964 09:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:40.964 09:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:40.964 09:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:40.964 09:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:40.964 09:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2862876 00:08:40.964 09:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:40.964 09:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2862876 00:08:40.964 09:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2862876 ']' 00:08:40.964 09:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:40.964 09:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:40.964 09:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:40.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:40.964 09:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:40.964 09:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:40.964 [2024-11-17 09:08:45.900531] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:08:40.964 [2024-11-17 09:08:45.900708] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:41.222 [2024-11-17 09:08:46.054617] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.222 [2024-11-17 09:08:46.188543] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:41.222 [2024-11-17 09:08:46.188642] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:41.222 [2024-11-17 09:08:46.188669] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:41.222 [2024-11-17 09:08:46.188693] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:08:41.222 [2024-11-17 09:08:46.188713] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:41.222 [2024-11-17 09:08:46.190342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.155 09:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:42.155 09:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:42.155 09:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:42.155 09:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:42.155 09:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:42.155 09:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:42.155 09:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:42.413 [2024-11-17 09:08:47.227502] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:42.413 [2024-11-17 09:08:47.227729] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:42.413 [2024-11-17 09:08:47.227810] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:42.413 09:08:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:42.413 09:08:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev fc38ee3d-194b-48c3-a83f-d9d880786c6e 00:08:42.413 09:08:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=fc38ee3d-194b-48c3-a83f-d9d880786c6e 00:08:42.413 09:08:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:42.413 09:08:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:42.413 09:08:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:42.413 09:08:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:42.413 09:08:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:42.670 09:08:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b fc38ee3d-194b-48c3-a83f-d9d880786c6e -t 2000 00:08:42.928 [ 00:08:42.928 { 00:08:42.928 "name": "fc38ee3d-194b-48c3-a83f-d9d880786c6e", 00:08:42.928 "aliases": [ 00:08:42.928 "lvs/lvol" 00:08:42.928 ], 00:08:42.928 "product_name": "Logical Volume", 00:08:42.928 "block_size": 4096, 00:08:42.928 "num_blocks": 38912, 00:08:42.928 "uuid": "fc38ee3d-194b-48c3-a83f-d9d880786c6e", 00:08:42.928 "assigned_rate_limits": { 00:08:42.928 "rw_ios_per_sec": 0, 00:08:42.928 "rw_mbytes_per_sec": 0, 
00:08:42.928 "r_mbytes_per_sec": 0, 00:08:42.928 "w_mbytes_per_sec": 0 00:08:42.928 }, 00:08:42.928 "claimed": false, 00:08:42.928 "zoned": false, 00:08:42.928 "supported_io_types": { 00:08:42.928 "read": true, 00:08:42.928 "write": true, 00:08:42.928 "unmap": true, 00:08:42.928 "flush": false, 00:08:42.928 "reset": true, 00:08:42.928 "nvme_admin": false, 00:08:42.928 "nvme_io": false, 00:08:42.928 "nvme_io_md": false, 00:08:42.928 "write_zeroes": true, 00:08:42.928 "zcopy": false, 00:08:42.928 "get_zone_info": false, 00:08:42.928 "zone_management": false, 00:08:42.928 "zone_append": false, 00:08:42.928 "compare": false, 00:08:42.928 "compare_and_write": false, 00:08:42.928 "abort": false, 00:08:42.928 "seek_hole": true, 00:08:42.928 "seek_data": true, 00:08:42.928 "copy": false, 00:08:42.928 "nvme_iov_md": false 00:08:42.928 }, 00:08:42.928 "driver_specific": { 00:08:42.928 "lvol": { 00:08:42.928 "lvol_store_uuid": "8aa6ccdb-4b53-43bc-885b-154a0bc4acf6", 00:08:42.928 "base_bdev": "aio_bdev", 00:08:42.928 "thin_provision": false, 00:08:42.928 "num_allocated_clusters": 38, 00:08:42.928 "snapshot": false, 00:08:42.928 "clone": false, 00:08:42.928 "esnap_clone": false 00:08:42.928 } 00:08:42.928 } 00:08:42.928 } 00:08:42.928 ] 00:08:42.928 09:08:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:42.928 09:08:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8aa6ccdb-4b53-43bc-885b-154a0bc4acf6 00:08:42.928 09:08:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:43.186 09:08:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:43.186 09:08:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8aa6ccdb-4b53-43bc-885b-154a0bc4acf6 00:08:43.186 09:08:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:43.444 09:08:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:43.444 09:08:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:43.703 [2024-11-17 09:08:48.624280] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:43.703 09:08:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8aa6ccdb-4b53-43bc-885b-154a0bc4acf6 00:08:43.703 09:08:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:08:43.703 09:08:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8aa6ccdb-4b53-43bc-885b-154a0bc4acf6 00:08:43.703 09:08:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:43.703 09:08:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:43.703 09:08:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:43.703 09:08:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:43.703 09:08:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:43.703 09:08:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:43.703 09:08:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:43.703 09:08:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:43.703 09:08:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8aa6ccdb-4b53-43bc-885b-154a0bc4acf6 00:08:43.993 request: 00:08:43.993 { 00:08:43.993 "uuid": "8aa6ccdb-4b53-43bc-885b-154a0bc4acf6", 00:08:43.993 "method": "bdev_lvol_get_lvstores", 00:08:43.993 "req_id": 1 00:08:43.993 } 00:08:43.993 Got JSON-RPC error response 00:08:43.993 response: 00:08:43.993 { 00:08:43.993 "code": -19, 00:08:43.993 "message": "No such device" 00:08:43.993 } 00:08:43.993 09:08:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:08:43.993 09:08:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:43.993 09:08:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:43.993 09:08:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:43.993 09:08:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:44.280 aio_bdev 00:08:44.280 09:08:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev fc38ee3d-194b-48c3-a83f-d9d880786c6e 00:08:44.280 09:08:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=fc38ee3d-194b-48c3-a83f-d9d880786c6e 00:08:44.280 09:08:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:44.280 09:08:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:44.280 09:08:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:44.280 09:08:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:44.280 09:08:49 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:44.851 09:08:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b fc38ee3d-194b-48c3-a83f-d9d880786c6e -t 2000 00:08:44.851 [ 00:08:44.851 { 00:08:44.851 "name": "fc38ee3d-194b-48c3-a83f-d9d880786c6e", 00:08:44.851 "aliases": [ 00:08:44.851 "lvs/lvol" 00:08:44.851 ], 00:08:44.851 "product_name": "Logical Volume", 00:08:44.851 "block_size": 4096, 00:08:44.851 "num_blocks": 38912, 00:08:44.851 "uuid": "fc38ee3d-194b-48c3-a83f-d9d880786c6e", 00:08:44.851 "assigned_rate_limits": { 00:08:44.851 "rw_ios_per_sec": 0, 00:08:44.851 "rw_mbytes_per_sec": 0, 00:08:44.851 "r_mbytes_per_sec": 0, 00:08:44.851 "w_mbytes_per_sec": 0 00:08:44.851 }, 00:08:44.851 "claimed": false, 00:08:44.851 "zoned": false, 00:08:44.851 "supported_io_types": { 00:08:44.851 "read": true, 00:08:44.851 "write": true, 00:08:44.851 "unmap": true, 00:08:44.851 "flush": false, 00:08:44.851 "reset": true, 00:08:44.851 "nvme_admin": false, 00:08:44.851 "nvme_io": false, 00:08:44.851 "nvme_io_md": false, 00:08:44.851 "write_zeroes": true, 00:08:44.851 "zcopy": false, 00:08:44.851 "get_zone_info": false, 00:08:44.851 "zone_management": false, 00:08:44.851 "zone_append": false, 00:08:44.851 "compare": false, 00:08:44.851 "compare_and_write": false, 00:08:44.851 "abort": false, 00:08:44.851 "seek_hole": true, 00:08:44.851 "seek_data": true, 00:08:44.851 "copy": false, 00:08:44.851 "nvme_iov_md": false 00:08:44.851 }, 00:08:44.851 "driver_specific": { 00:08:44.851 "lvol": { 00:08:44.851 "lvol_store_uuid": "8aa6ccdb-4b53-43bc-885b-154a0bc4acf6", 00:08:44.851 "base_bdev": "aio_bdev", 00:08:44.851 "thin_provision": false, 00:08:44.851 "num_allocated_clusters": 38, 00:08:44.851 "snapshot": false, 00:08:44.851 "clone": false, 00:08:44.851 "esnap_clone": false 00:08:44.851 } 00:08:44.851 } 00:08:44.851 } 00:08:44.851 ] 00:08:44.851 09:08:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:44.851 09:08:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8aa6ccdb-4b53-43bc-885b-154a0bc4acf6 00:08:44.851 09:08:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:45.110 09:08:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:45.368 09:08:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8aa6ccdb-4b53-43bc-885b-154a0bc4acf6 00:08:45.368 09:08:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:45.626 09:08:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:45.626 09:08:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete fc38ee3d-194b-48c3-a83f-d9d880786c6e 00:08:45.884 09:08:50 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8aa6ccdb-4b53-43bc-885b-154a0bc4acf6 00:08:46.143 09:08:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:46.402 09:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:46.402 00:08:46.402 real 0m22.253s 00:08:46.402 user 0m56.073s 00:08:46.402 sys 0m4.669s 00:08:46.402 09:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:46.402 09:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:46.402 ************************************ 00:08:46.402 END TEST lvs_grow_dirty 00:08:46.402 ************************************ 00:08:46.402 09:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:46.402 09:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:08:46.402 09:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:08:46.402 09:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:08:46.402 09:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:46.402 09:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:08:46.402 09:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:08:46.402 09:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:08:46.402 09:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:46.402 nvmf_trace.0 00:08:46.402 09:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:08:46.402 09:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:46.402 09:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:46.402 09:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:08:46.402 09:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:46.402 09:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:08:46.402 09:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:46.402 09:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:46.402 rmmod nvme_tcp 00:08:46.402 rmmod nvme_fabrics 00:08:46.402 rmmod nvme_keyring 00:08:46.402 09:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:46.402 09:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:08:46.402 09:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:08:46.402 
09:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2862876 ']' 00:08:46.402 09:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2862876 00:08:46.402 09:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 2862876 ']' 00:08:46.402 09:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 2862876 00:08:46.402 09:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:08:46.402 09:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:46.402 09:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2862876 00:08:46.660 09:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:46.660 09:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:46.660 09:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2862876' 00:08:46.660 killing process with pid 2862876 00:08:46.660 09:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 2862876 00:08:46.660 09:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 2862876 00:08:47.594 09:08:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:47.594 09:08:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:47.594 09:08:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:47.594 09:08:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:08:47.594 09:08:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:08:47.594 09:08:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:47.594 09:08:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:08:47.594 09:08:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:47.594 09:08:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:47.594 09:08:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:47.594 09:08:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:47.594 09:08:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:50.128 09:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:50.129 00:08:50.129 real 0m48.904s 00:08:50.129 user 1m23.354s 00:08:50.129 sys 0m8.686s 00:08:50.129 09:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:50.129 09:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:50.129 ************************************ 00:08:50.129 END TEST nvmf_lvs_grow 00:08:50.129 ************************************ 00:08:50.129 09:08:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:50.129 09:08:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:50.129 09:08:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:50.129 09:08:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:50.129 ************************************ 00:08:50.129 START TEST nvmf_bdev_io_wait 00:08:50.129 ************************************ 00:08:50.129 09:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:50.129 * Looking for test storage... 00:08:50.129 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:50.129 09:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:50.129 09:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:08:50.129 09:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:50.129 09:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:50.129 09:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:50.129 09:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:50.129 09:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:50.129 09:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:50.129 09:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:50.129 09:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:50.129 09:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:50.129 09:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:50.129 09:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:50.129 09:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:50.129 09:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:50.129 09:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:50.129 09:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:08:50.129 09:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:50.129 09:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:50.129 09:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:50.129 09:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:50.129 09:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:50.129 09:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:50.129 09:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:50.129 09:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:50.129 09:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:50.129 09:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:50.129 09:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:50.129 09:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:50.129 09:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:50.129 09:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:50.129 09:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:50.129 09:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:50.129 09:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:50.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.129 --rc genhtml_branch_coverage=1 00:08:50.129 --rc genhtml_function_coverage=1 00:08:50.129 --rc genhtml_legend=1 00:08:50.129 --rc geninfo_all_blocks=1 00:08:50.129 --rc geninfo_unexecuted_blocks=1 00:08:50.129 00:08:50.129 ' 00:08:50.129 09:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:50.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.129 --rc genhtml_branch_coverage=1 00:08:50.129 --rc genhtml_function_coverage=1 00:08:50.129 --rc genhtml_legend=1 00:08:50.129 --rc geninfo_all_blocks=1 00:08:50.129 --rc geninfo_unexecuted_blocks=1 00:08:50.129 00:08:50.129 ' 00:08:50.129 09:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:50.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.129 --rc genhtml_branch_coverage=1 00:08:50.129 --rc genhtml_function_coverage=1 00:08:50.129 --rc genhtml_legend=1 00:08:50.129 --rc geninfo_all_blocks=1 00:08:50.129 --rc geninfo_unexecuted_blocks=1 00:08:50.129 00:08:50.129 ' 00:08:50.129 09:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:50.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.129 --rc genhtml_branch_coverage=1 00:08:50.129 --rc genhtml_function_coverage=1 00:08:50.129 --rc genhtml_legend=1 00:08:50.129 --rc geninfo_all_blocks=1 00:08:50.129 --rc geninfo_unexecuted_blocks=1 00:08:50.129 00:08:50.129 ' 00:08:50.129 09:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:50.129 09:08:54 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:50.129 09:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:50.129 09:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:50.129 09:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:50.129 09:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:50.129 09:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:50.129 09:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:50.129 09:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:50.129 09:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:50.129 09:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:50.129 09:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:50.129 09:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:50.129 09:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:50.129 09:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:50.129 09:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:50.129 09:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:50.129 09:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:50.129 09:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:50.129 09:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:50.129 09:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:50.129 09:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:50.129 09:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:50.130 09:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.130 09:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.130 09:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.130 09:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:50.130 09:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.130 09:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:08:50.130 09:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:50.130 09:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:50.130 09:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:50.130 09:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:50.130 09:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:50.130 09:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:50.130 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:50.130 09:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:50.130 09:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:50.130 09:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:50.130 09:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:50.130 09:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:08:50.130 09:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:50.130 09:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:50.130 09:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:50.130 09:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:50.130 09:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:50.130 09:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:50.130 09:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:50.130 09:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:50.130 09:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:50.130 09:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:50.130 09:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:50.130 09:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:08:50.130 09:08:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:52.035 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:52.035 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:08:52.035 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:52.035 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:52.035 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:52.035 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:52.035 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:52.035 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:08:52.035 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:52.035 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:08:52.035 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:08:52.035 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:08:52.035 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:08:52.035 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:08:52.035 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:08:52.035 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:52.035 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:52.035 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:52.035 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:52.035 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:52.035 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:52.035 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:52.035 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:52.035 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:52.035 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:52.035 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:52.035 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:52.035 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:52.035 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:52.035 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:52.035 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:52.035 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:52.035 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:52.035 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:52.035 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:52.035 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:52.036 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:52.036 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:52.036 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:52.036 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:52.036 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:52.036 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:52.036 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:52.036 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:52.036 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:52.036 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:52.036 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:52.036 09:08:56 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:52.036 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:52.036 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:52.036 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:52.036 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:52.036 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:52.036 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:52.036 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:52.036 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:52.036 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:52.036 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:52.036 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:52.036 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:52.036 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:52.036 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:52.036 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:52.036 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:52.036 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:52.036 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:52.036 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:52.036 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:52.036 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:52.036 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:52.036 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:52.036 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:52.036 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:52.036 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:08:52.036 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:52.036 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:52.036 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:52.036 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:52.036 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:52.036 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:52.036 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:52.036 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:52.036 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:52.036 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:52.036 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:52.036 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:52.036 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:52.036 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:52.036 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:52.036 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:52.036 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:52.036 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:52.036 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:52.036 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:52.036 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:52.036 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:52.036 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:52.036 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:52.036 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:52.036 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:52.036 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:52.036 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:08:52.036 00:08:52.036 --- 10.0.0.2 ping statistics --- 00:08:52.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:52.036 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:08:52.036 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:52.036 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:52.036 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.074 ms 00:08:52.036 00:08:52.036 --- 10.0.0.1 ping statistics --- 00:08:52.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:52.036 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:08:52.036 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:52.036 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:08:52.036 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:52.036 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:52.036 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:52.036 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:52.036 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:52.036 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:52.036 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:52.036 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:52.036 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:52.036 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:52.036 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:52.036 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=2865603 00:08:52.036 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:52.036 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 2865603 00:08:52.036 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 2865603 ']' 00:08:52.036 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:52.036 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:52.036 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:52.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:52.036 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:52.036 09:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:52.295 [2024-11-17 09:08:57.048059] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
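For reference, a minimal by-hand sketch of the topology the nvmf_tcp_init and nvmfappstart steps traced above establish: the first e810 port (cvl_0_0) is moved into a network namespace and carries the target address 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and nvmf_tgt is launched inside the namespace. Interface names, addresses and flags are the ones shown in the trace; the SPDK path is assumed to match this workspace, and the commands need root.

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    NS=cvl_0_0_ns_spdk

    # target-side namespace, with the first NIC port moved into it
    ip netns add $NS
    ip link set cvl_0_0 netns $NS

    # addressing: initiator (root namespace) <-> target (namespace)
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec $NS ip link set cvl_0_0 up
    ip netns exec $NS ip link set lo up

    # open the NVMe/TCP port and check reachability in both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec $NS ping -c 1 10.0.0.1

    # start the target inside the namespace, as nvmfappstart does above
    ip netns exec $NS $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &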
00:08:52.295 [2024-11-17 09:08:57.048217] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:52.295 [2024-11-17 09:08:57.203770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:52.553 [2024-11-17 09:08:57.348187] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:52.553 [2024-11-17 09:08:57.348277] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:52.553 [2024-11-17 09:08:57.348304] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:52.553 [2024-11-17 09:08:57.348329] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:52.553 [2024-11-17 09:08:57.348349] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:52.553 [2024-11-17 09:08:57.351218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:52.553 [2024-11-17 09:08:57.351290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:52.553 [2024-11-17 09:08:57.351419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.553 [2024-11-17 09:08:57.351428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:53.119 09:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:53.119 09:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:08:53.119 09:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:53.119 09:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:53.119 09:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:53.119 09:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:53.119 09:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:53.119 09:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.119 09:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:53.119 09:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.119 09:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:53.119 09:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.119 09:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:53.378 09:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.378 09:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:53.378 09:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.378 09:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:08:53.378 [2024-11-17 09:08:58.296125] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:53.378 09:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.378 09:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:53.378 09:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.378 09:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:53.378 Malloc0 00:08:53.378 09:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.378 09:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:53.378 09:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.378 09:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:53.637 09:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.637 09:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:53.637 09:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.637 09:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:53.637 09:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.637 09:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:53.637 09:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.637 09:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:53.637 [2024-11-17 09:08:58.402596] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:53.637 09:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.637 09:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2865838 00:08:53.637 09:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:53.637 09:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:53.637 09:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:53.637 09:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:53.637 09:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:53.637 09:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:53.637 { 00:08:53.637 "params": { 00:08:53.637 "name": "Nvme$subsystem", 00:08:53.637 "trtype": "$TEST_TRANSPORT", 00:08:53.637 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:08:53.637 "adrfam": "ipv4", 00:08:53.637 "trsvcid": "$NVMF_PORT", 00:08:53.637 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:53.637 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:53.637 "hdgst": ${hdgst:-false}, 00:08:53.637 "ddgst": ${ddgst:-false} 00:08:53.637 }, 00:08:53.637 "method": "bdev_nvme_attach_controller" 00:08:53.637 } 00:08:53.637 EOF 00:08:53.637 )") 00:08:53.637 09:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2865840 00:08:53.637 09:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:53.637 09:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:53.637 09:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:53.637 09:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:53.637 09:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:53.637 09:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:53.637 09:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2865843 00:08:53.637 09:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:53.637 { 00:08:53.637 "params": { 00:08:53.637 "name": "Nvme$subsystem", 00:08:53.637 "trtype": "$TEST_TRANSPORT", 00:08:53.637 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:53.637 "adrfam": "ipv4", 00:08:53.637 "trsvcid": "$NVMF_PORT", 00:08:53.637 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:53.637 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:53.637 "hdgst": ${hdgst:-false}, 00:08:53.637 "ddgst": ${ddgst:-false} 00:08:53.637 }, 00:08:53.637 "method": "bdev_nvme_attach_controller" 00:08:53.637 } 00:08:53.637 EOF 00:08:53.637 )") 00:08:53.637 09:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:53.637 09:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:53.637 09:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:53.637 09:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:53.637 09:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:53.637 09:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2865846 00:08:53.637 09:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:53.637 { 00:08:53.637 "params": { 00:08:53.637 "name": "Nvme$subsystem", 00:08:53.637 "trtype": "$TEST_TRANSPORT", 00:08:53.637 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:53.637 "adrfam": "ipv4", 00:08:53.637 "trsvcid": "$NVMF_PORT", 00:08:53.637 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:53.637 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:53.637 "hdgst": ${hdgst:-false}, 00:08:53.637 "ddgst": ${ddgst:-false} 00:08:53.637 }, 00:08:53.637 "method": 
"bdev_nvme_attach_controller" 00:08:53.637 } 00:08:53.637 EOF 00:08:53.637 )") 00:08:53.637 09:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:53.637 09:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:53.637 09:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:53.637 09:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:53.637 09:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:53.637 09:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:53.637 09:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:53.637 09:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:53.637 09:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:53.637 { 00:08:53.637 "params": { 00:08:53.637 "name": "Nvme$subsystem", 00:08:53.637 "trtype": "$TEST_TRANSPORT", 00:08:53.637 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:53.637 "adrfam": "ipv4", 00:08:53.637 "trsvcid": "$NVMF_PORT", 00:08:53.637 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:53.637 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:53.637 "hdgst": ${hdgst:-false}, 00:08:53.637 "ddgst": ${ddgst:-false} 00:08:53.637 }, 00:08:53.637 "method": "bdev_nvme_attach_controller" 00:08:53.637 } 00:08:53.637 EOF 00:08:53.637 )") 00:08:53.637 09:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:53.637 09:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:53.637 09:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2865838 00:08:53.637 09:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:53.637 09:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:53.637 09:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:53.637 "params": { 00:08:53.637 "name": "Nvme1", 00:08:53.637 "trtype": "tcp", 00:08:53.637 "traddr": "10.0.0.2", 00:08:53.637 "adrfam": "ipv4", 00:08:53.637 "trsvcid": "4420", 00:08:53.637 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:53.637 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:53.637 "hdgst": false, 00:08:53.637 "ddgst": false 00:08:53.637 }, 00:08:53.637 "method": "bdev_nvme_attach_controller" 00:08:53.637 }' 00:08:53.637 09:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:08:53.637 09:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:53.637 09:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:53.637 "params": { 00:08:53.637 "name": "Nvme1", 00:08:53.637 "trtype": "tcp", 00:08:53.637 "traddr": "10.0.0.2", 00:08:53.637 "adrfam": "ipv4", 00:08:53.637 "trsvcid": "4420", 00:08:53.637 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:53.637 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:53.637 "hdgst": false, 00:08:53.637 "ddgst": false 00:08:53.637 }, 00:08:53.637 "method": "bdev_nvme_attach_controller" 00:08:53.637 }' 00:08:53.637 09:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:53.637 09:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:53.637 09:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:53.637 "params": { 00:08:53.638 "name": "Nvme1", 00:08:53.638 "trtype": "tcp", 00:08:53.638 "traddr": "10.0.0.2", 00:08:53.638 "adrfam": "ipv4", 00:08:53.638 "trsvcid": "4420", 00:08:53.638 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:53.638 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:53.638 "hdgst": false, 00:08:53.638 "ddgst": false 00:08:53.638 }, 00:08:53.638 "method": "bdev_nvme_attach_controller" 00:08:53.638 }' 00:08:53.638 09:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:53.638 09:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:53.638 "params": { 00:08:53.638 "name": "Nvme1", 00:08:53.638 "trtype": "tcp", 00:08:53.638 "traddr": "10.0.0.2", 00:08:53.638 "adrfam": "ipv4", 00:08:53.638 "trsvcid": "4420", 00:08:53.638 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:53.638 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:53.638 "hdgst": false, 00:08:53.638 "ddgst": false 00:08:53.638 }, 00:08:53.638 "method": "bdev_nvme_attach_controller" 00:08:53.638 }' 00:08:53.638 [2024-11-17 09:08:58.492003] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:08:53.638 [2024-11-17 09:08:58.492001] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:08:53.638 [2024-11-17 09:08:58.492154] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-11-17 09:08:58.492156] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:53.638 --proc-type=auto ] 00:08:53.638 [2024-11-17 09:08:58.494191] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:08:53.638 [2024-11-17 09:08:58.494331] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:53.638 [2024-11-17 09:08:58.522672] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
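The four bdevperf instances launched above run write, read, flush and unmap against the same remote namespace concurrently, each on its own core mask and shm id, with the bdev layer configured through the JSON fed in over /dev/fd/63; the resolved config is the bdev_nvme_attach_controller block printf'ed in the trace. A condensed sketch of that fan-out, assuming test/nvmf/common.sh has been sourced (it provides gen_nvmf_target_json) and that nvmftestinit has already populated the target address variables that helper reads:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    BDEVPERF=$SPDK/build/examples/bdevperf

    # one bdevperf per I/O type, all with -q 128 -o 4096 -t 1 -s 256 as in the trace
    $BDEVPERF -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
    $BDEVPERF -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read  -t 1 -s 256 & READ_PID=$!
    $BDEVPERF -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
    $BDEVPERF -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!

    # wait for all four jobs, as bdev_io_wait.sh does
    wait $WRITE_PID $READ_PID $FLUSH_PID $UNMAP_PID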
00:08:53.638 [2024-11-17 09:08:58.522872] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:53.896 [2024-11-17 09:08:58.724099] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.896 [2024-11-17 09:08:58.823937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.896 [2024-11-17 09:08:58.841527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:08:54.154 [2024-11-17 09:08:58.926489] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.154 [2024-11-17 09:08:58.948668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:54.154 [2024-11-17 09:08:59.003535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.154 [2024-11-17 09:08:59.050325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:08:54.154 [2024-11-17 09:08:59.120112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:08:54.411 Running I/O for 1 seconds... 00:08:54.411 Running I/O for 1 seconds... 00:08:54.669 Running I/O for 1 seconds... 00:08:54.669 Running I/O for 1 seconds... 00:08:55.603 4979.00 IOPS, 19.45 MiB/s 00:08:55.603 Latency(us) 00:08:55.603 [2024-11-17T08:09:00.616Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:55.603 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:55.603 Nvme1n1 : 1.03 4995.35 19.51 0.00 0.00 25386.86 4563.25 50098.63 00:08:55.603 [2024-11-17T08:09:00.616Z] =================================================================================================================== 00:08:55.603 [2024-11-17T08:09:00.616Z] Total : 4995.35 19.51 0.00 0.00 25386.86 4563.25 50098.63 00:08:55.603 4528.00 IOPS, 17.69 MiB/s 00:08:55.603 Latency(us) 00:08:55.603 [2024-11-17T08:09:00.616Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:55.603 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:55.603 Nvme1n1 : 1.01 4627.79 18.08 0.00 0.00 27521.11 8252.68 50875.35 00:08:55.603 [2024-11-17T08:09:00.616Z] =================================================================================================================== 00:08:55.603 [2024-11-17T08:09:00.616Z] Total : 4627.79 18.08 0.00 0.00 27521.11 8252.68 50875.35 00:08:55.603 7209.00 IOPS, 28.16 MiB/s 00:08:55.603 Latency(us) 00:08:55.603 [2024-11-17T08:09:00.616Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:55.603 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:55.603 Nvme1n1 : 1.01 7253.50 28.33 0.00 0.00 17538.40 8932.31 27573.67 00:08:55.603 [2024-11-17T08:09:00.616Z] =================================================================================================================== 00:08:55.603 [2024-11-17T08:09:00.616Z] Total : 7253.50 28.33 0.00 0.00 17538.40 8932.31 27573.67 00:08:55.861 153376.00 IOPS, 599.12 MiB/s 00:08:55.861 Latency(us) 00:08:55.861 [2024-11-17T08:09:00.874Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:55.861 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:55.861 Nvme1n1 : 1.00 153060.58 597.89 0.00 0.00 832.01 367.12 2014.63 00:08:55.861 [2024-11-17T08:09:00.874Z] 
=================================================================================================================== 00:08:55.861 [2024-11-17T08:09:00.874Z] Total : 153060.58 597.89 0.00 0.00 832.01 367.12 2014.63 00:08:56.119 09:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2865840 00:08:56.377 09:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2865843 00:08:56.377 09:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2865846 00:08:56.377 09:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:56.377 09:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.377 09:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:56.377 09:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.377 09:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:56.377 09:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:56.377 09:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:56.377 09:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:08:56.377 09:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:56.377 09:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:08:56.377 09:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:56.377 09:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:56.377 rmmod nvme_tcp 00:08:56.377 rmmod nvme_fabrics 00:08:56.377 rmmod nvme_keyring 00:08:56.377 09:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:56.377 09:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:08:56.377 09:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:08:56.377 09:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 2865603 ']' 00:08:56.377 09:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 2865603 00:08:56.377 09:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 2865603 ']' 00:08:56.377 09:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 2865603 00:08:56.377 09:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:08:56.377 09:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:56.377 09:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2865603 00:08:56.635 09:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:56.635 09:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:56.635 09:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # 
echo 'killing process with pid 2865603' 00:08:56.635 killing process with pid 2865603 00:08:56.635 09:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 2865603 00:08:56.635 09:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 2865603 00:08:57.570 09:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:57.570 09:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:57.570 09:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:57.570 09:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:08:57.570 09:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:08:57.570 09:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:57.570 09:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:08:57.570 09:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:57.570 09:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:57.570 09:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:57.570 09:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:57.570 09:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:00.103 09:09:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:00.103 00:09:00.103 real 0m9.829s 00:09:00.103 user 0m28.426s 00:09:00.103 sys 0m4.026s 00:09:00.103 09:09:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:00.103 09:09:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:00.103 ************************************ 00:09:00.103 END TEST nvmf_bdev_io_wait 00:09:00.103 ************************************ 00:09:00.103 09:09:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:00.103 09:09:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:00.103 09:09:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:00.103 09:09:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:00.103 ************************************ 00:09:00.103 START TEST nvmf_queue_depth 00:09:00.103 ************************************ 00:09:00.103 09:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:00.103 * Looking for test storage... 
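The nvmftestfini sequence that closed the previous test tears the transport back down: unload the nvme-tcp/fabrics/keyring modules, kill the target process, strip the SPDK-tagged iptables rules, and remove the per-test network namespace. A rough equivalent of that cleanup is sketched below, assuming the interface and namespace names used on this node (cvl_0_1, cvl_0_0_ns_spdk); the namespace removal is written as a plain ip netns delete, which is an approximation of what _remove_spdk_ns does rather than a copy of it.

# Remove only the firewall rules the test added (they carry an SPDK_NVMF comment).
iptables-save | grep -v SPDK_NVMF | iptables-restore

# Drop the target-side namespace and clear the initiator-side address.
ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true
ip -4 addr flush cvl_0_1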
00:09:00.103 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:00.103 09:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:00.103 09:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:09:00.103 09:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:00.103 09:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:00.103 09:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:00.103 09:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:00.103 09:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:00.103 09:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:09:00.103 09:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:09:00.103 09:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:09:00.103 09:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:09:00.103 09:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:09:00.103 09:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:09:00.103 09:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:09:00.103 09:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:00.103 09:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:09:00.103 09:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:09:00.104 09:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:00.104 09:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:00.104 09:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:09:00.104 09:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:09:00.104 09:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:00.104 09:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:09:00.104 09:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:09:00.104 09:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:09:00.104 09:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:09:00.104 09:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:00.104 09:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:09:00.104 09:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:09:00.104 09:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:00.104 09:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:00.104 09:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:09:00.104 09:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:00.104 09:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:00.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.104 --rc genhtml_branch_coverage=1 00:09:00.104 --rc genhtml_function_coverage=1 00:09:00.104 --rc genhtml_legend=1 00:09:00.104 --rc geninfo_all_blocks=1 00:09:00.104 --rc geninfo_unexecuted_blocks=1 00:09:00.104 00:09:00.104 ' 00:09:00.104 09:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:00.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.104 --rc genhtml_branch_coverage=1 00:09:00.104 --rc genhtml_function_coverage=1 00:09:00.104 --rc genhtml_legend=1 00:09:00.104 --rc geninfo_all_blocks=1 00:09:00.104 --rc geninfo_unexecuted_blocks=1 00:09:00.104 00:09:00.104 ' 00:09:00.104 09:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:00.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.104 --rc genhtml_branch_coverage=1 00:09:00.104 --rc genhtml_function_coverage=1 00:09:00.104 --rc genhtml_legend=1 00:09:00.104 --rc geninfo_all_blocks=1 00:09:00.104 --rc geninfo_unexecuted_blocks=1 00:09:00.104 00:09:00.104 ' 00:09:00.104 09:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:00.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.104 --rc genhtml_branch_coverage=1 00:09:00.104 --rc genhtml_function_coverage=1 00:09:00.104 --rc genhtml_legend=1 00:09:00.104 --rc geninfo_all_blocks=1 00:09:00.104 --rc geninfo_unexecuted_blocks=1 00:09:00.104 00:09:00.104 ' 00:09:00.104 09:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:00.104 09:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:09:00.104 09:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:00.104 09:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:00.104 09:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:00.104 09:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:00.104 09:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:00.104 09:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:00.104 09:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:00.104 09:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:00.104 09:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:00.104 09:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:00.104 09:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:00.104 09:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:00.104 09:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:00.104 09:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:00.104 09:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:00.104 09:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:00.104 09:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:00.104 09:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:09:00.104 09:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:00.104 09:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:00.104 09:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:00.104 09:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.104 09:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.104 09:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.104 09:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:00.104 09:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.104 09:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:09:00.104 09:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:00.104 09:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:00.104 09:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:00.104 09:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:00.104 09:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:00.104 09:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:00.104 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:00.104 09:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:00.104 09:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:00.104 09:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:00.104 09:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:00.104 09:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:09:00.104 09:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:00.104 09:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:00.104 09:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:00.104 09:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:00.104 09:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:00.104 09:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:00.104 09:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:00.104 09:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:00.104 09:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:00.104 09:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:00.104 09:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:00.104 09:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:00.104 09:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:09:00.104 09:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:02.009 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:02.009 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:09:02.009 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:02.009 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:02.009 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:02.009 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:02.009 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:02.009 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:09:02.009 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:02.009 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:09:02.009 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:09:02.009 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:09:02.009 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:09:02.009 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:09:02.009 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:09:02.009 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:02.009 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:02.009 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:02.009 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:02.009 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:02.009 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:02.009 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:02.009 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:02.009 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:02.009 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:02.009 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:02.009 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:02.009 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:02.009 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:02.009 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:02.009 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:02.009 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:02.009 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:02.009 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:02.009 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:02.009 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:02.009 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:02.009 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:02.009 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:02.009 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:02.009 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:02.009 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:02.009 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:02.009 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:02.009 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:02.009 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:02.009 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:02.009 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:02.009 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:02.009 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:02.010 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:02.010 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:02.010 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:02.010 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:02.010 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:02.010 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:02.010 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:02.010 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:02.010 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:02.010 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:02.010 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:02.010 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:02.010 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:02.010 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:02.010 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:02.010 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:02.010 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:02.010 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:02.010 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:02.010 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:02.010 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:02.010 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:02.010 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:02.010 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:09:02.010 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:02.010 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:02.010 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:02.010 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:02.010 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:02.010 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:02.010 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:02.010 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:02.010 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:02.010 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:02.010 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:02.010 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:02.010 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:02.010 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:02.010 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:02.010 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:02.010 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:02.010 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:02.010 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:02.010 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:02.010 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:02.010 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:02.010 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:02.010 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:02.010 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:02.010 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:02.010 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:02.010 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:09:02.010 00:09:02.010 --- 10.0.0.2 ping statistics --- 00:09:02.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:02.010 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:09:02.010 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:02.010 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:02.010 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:09:02.010 00:09:02.010 --- 10.0.0.1 ping statistics --- 00:09:02.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:02.010 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:09:02.010 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:02.010 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:09:02.010 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:02.010 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:02.010 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:02.010 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:02.010 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:02.010 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:02.010 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:02.010 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:02.010 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:02.010 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:02.010 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:02.010 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=2868330 00:09:02.010 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 2868330 00:09:02.010 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:02.010 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2868330 ']' 00:09:02.010 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:02.010 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:02.010 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:02.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:02.010 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:02.010 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:02.010 [2024-11-17 09:09:07.004926] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
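nvmfappstart then brings up the target itself: nvmf_tgt runs inside the cvl_0_0_ns_spdk namespace on core 1 (-m 0x2) with every trace group enabled (-e 0xFFFF), and the test blocks until the RPC socket answers. A condensed sketch of that start-and-wait step follows, using the paths from this workspace; the polling loop is only a stand-in for waitforlisten, not a copy of it.

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Launch the target in the test namespace; shm id 0 gives the spdk0 file prefix seen below.
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!

# Block until the default RPC socket (/var/tmp/spdk.sock) accepts a trivial call.
until "$SPDK/scripts/rpc.py" spdk_get_version >/dev/null 2>&1; do
    sleep 0.1
done
echo "nvmf_tgt is up as pid $nvmfpid"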
00:09:02.010 [2024-11-17 09:09:07.005075] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:02.268 [2024-11-17 09:09:07.161835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.527 [2024-11-17 09:09:07.298846] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:02.527 [2024-11-17 09:09:07.298941] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:02.527 [2024-11-17 09:09:07.298967] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:02.527 [2024-11-17 09:09:07.298993] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:02.527 [2024-11-17 09:09:07.299014] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:02.527 [2024-11-17 09:09:07.300675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:03.093 09:09:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:03.093 09:09:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:03.093 09:09:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:03.093 09:09:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:03.093 09:09:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:03.093 09:09:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:03.093 09:09:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:03.093 09:09:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.093 09:09:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:03.093 [2024-11-17 09:09:07.996843] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:03.093 09:09:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.093 09:09:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:03.093 09:09:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.093 09:09:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:03.093 Malloc0 00:09:03.093 09:09:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.093 09:09:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:03.093 09:09:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.093 09:09:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:03.351 09:09:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.351 09:09:08 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:03.351 09:09:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.351 09:09:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:03.351 09:09:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.351 09:09:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:03.351 09:09:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.351 09:09:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:03.351 [2024-11-17 09:09:08.118502] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:03.351 09:09:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.351 09:09:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2868827 00:09:03.351 09:09:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:03.352 09:09:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:03.352 09:09:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2868827 /var/tmp/bdevperf.sock 00:09:03.352 09:09:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2868827 ']' 00:09:03.352 09:09:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:03.352 09:09:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:03.352 09:09:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:03.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:03.352 09:09:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:03.352 09:09:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:03.352 [2024-11-17 09:09:08.208702] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
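Once the subsystem is listening on 10.0.0.2:4420, the queue-depth test starts bdevperf in -z (wait-for-RPC) mode on its own socket, attaches the remote controller through that socket, and only then fires the 10-second verify run at queue depth 1024. The sketch below condenses those steps with the same paths, socket and NQN as this run; error handling and the trap-based cleanup of the real script are omitted.

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
SOCK=/var/tmp/bdevperf.sock

# 1. bdevperf idles in -z mode until an RPC tells it which bdevs to exercise.
"$SPDK/build/examples/bdevperf" -z -r "$SOCK" -q 1024 -o 4096 -w verify -t 10 &
bdevperf_pid=$!
until [ -S "$SOCK" ]; do sleep 0.1; done   # wait for the RPC socket to appear

# 2. Attach the NVMe-oF/TCP controller over the bdevperf RPC socket.
"$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller \
    -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

# 3. Kick off the actual I/O run and wait for bdevperf to exit.
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests
wait "$bdevperf_pid"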
00:09:03.352 [2024-11-17 09:09:08.208848] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2868827 ] 00:09:03.352 [2024-11-17 09:09:08.358653] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.610 [2024-11-17 09:09:08.488592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.542 09:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:04.542 09:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:04.542 09:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:04.542 09:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.542 09:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:04.542 NVMe0n1 00:09:04.542 09:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.542 09:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:04.542 Running I/O for 10 seconds... 00:09:06.846 5664.00 IOPS, 22.12 MiB/s [2024-11-17T08:09:12.793Z] 5782.00 IOPS, 22.59 MiB/s [2024-11-17T08:09:13.727Z] 5888.67 IOPS, 23.00 MiB/s [2024-11-17T08:09:14.662Z] 5938.00 IOPS, 23.20 MiB/s [2024-11-17T08:09:15.596Z] 5950.40 IOPS, 23.24 MiB/s [2024-11-17T08:09:16.976Z] 5973.83 IOPS, 23.34 MiB/s [2024-11-17T08:09:17.611Z] 5997.71 IOPS, 23.43 MiB/s [2024-11-17T08:09:18.546Z] 6013.75 IOPS, 23.49 MiB/s [2024-11-17T08:09:19.920Z] 6020.11 IOPS, 23.52 MiB/s [2024-11-17T08:09:19.920Z] 6021.70 IOPS, 23.52 MiB/s 00:09:14.907 Latency(us) 00:09:14.907 [2024-11-17T08:09:19.920Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:14.907 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:14.907 Verification LBA range: start 0x0 length 0x4000 00:09:14.907 NVMe0n1 : 10.14 6031.98 23.56 0.00 0.00 168671.79 24466.77 101750.71 00:09:14.907 [2024-11-17T08:09:19.920Z] =================================================================================================================== 00:09:14.907 [2024-11-17T08:09:19.920Z] Total : 6031.98 23.56 0.00 0.00 168671.79 24466.77 101750.71 00:09:14.907 { 00:09:14.907 "results": [ 00:09:14.907 { 00:09:14.907 "job": "NVMe0n1", 00:09:14.907 "core_mask": "0x1", 00:09:14.907 "workload": "verify", 00:09:14.907 "status": "finished", 00:09:14.907 "verify_range": { 00:09:14.907 "start": 0, 00:09:14.907 "length": 16384 00:09:14.907 }, 00:09:14.907 "queue_depth": 1024, 00:09:14.907 "io_size": 4096, 00:09:14.907 "runtime": 10.138961, 00:09:14.907 "iops": 6031.979016390338, 00:09:14.907 "mibps": 23.562418032774758, 00:09:14.907 "io_failed": 0, 00:09:14.907 "io_timeout": 0, 00:09:14.907 "avg_latency_us": 168671.79377103387, 00:09:14.907 "min_latency_us": 24466.773333333334, 00:09:14.907 "max_latency_us": 101750.70814814814 00:09:14.907 } 00:09:14.907 ], 00:09:14.907 "core_count": 1 00:09:14.907 } 00:09:14.907 09:09:19 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2868827 00:09:14.907 09:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2868827 ']' 00:09:14.907 09:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2868827 00:09:14.907 09:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:14.907 09:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:14.907 09:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2868827 00:09:14.907 09:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:14.907 09:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:14.907 09:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2868827' 00:09:14.907 killing process with pid 2868827 00:09:14.907 09:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2868827 00:09:14.907 Received shutdown signal, test time was about 10.000000 seconds 00:09:14.907 00:09:14.907 Latency(us) 00:09:14.907 [2024-11-17T08:09:19.920Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:14.907 [2024-11-17T08:09:19.920Z] =================================================================================================================== 00:09:14.907 [2024-11-17T08:09:19.920Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:14.907 09:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2868827 00:09:15.842 09:09:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:15.842 09:09:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:15.842 09:09:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:15.842 09:09:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:09:15.842 09:09:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:15.842 09:09:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:09:15.842 09:09:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:15.842 09:09:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:15.842 rmmod nvme_tcp 00:09:15.842 rmmod nvme_fabrics 00:09:15.842 rmmod nvme_keyring 00:09:15.842 09:09:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:15.842 09:09:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:09:15.842 09:09:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:09:15.842 09:09:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 2868330 ']' 00:09:15.842 09:09:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 2868330 00:09:15.842 09:09:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2868330 ']' 00:09:15.842 09:09:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@958 -- # kill -0 2868330 00:09:15.842 09:09:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:15.842 09:09:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:15.842 09:09:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2868330 00:09:15.842 09:09:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:15.842 09:09:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:15.842 09:09:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2868330' 00:09:15.842 killing process with pid 2868330 00:09:15.842 09:09:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2868330 00:09:15.842 09:09:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2868330 00:09:17.218 09:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:17.218 09:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:17.218 09:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:17.218 09:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:09:17.218 09:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:09:17.218 09:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:17.218 09:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:09:17.218 09:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:17.218 09:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:17.218 09:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:17.218 09:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:17.218 09:09:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:19.123 09:09:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:19.123 00:09:19.123 real 0m19.548s 00:09:19.123 user 0m27.925s 00:09:19.123 sys 0m3.279s 00:09:19.123 09:09:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:19.123 09:09:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:19.123 ************************************ 00:09:19.123 END TEST nvmf_queue_depth 00:09:19.123 ************************************ 00:09:19.123 09:09:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:19.123 09:09:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:19.123 09:09:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:19.123 09:09:24 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:09:19.382 ************************************ 00:09:19.382 START TEST nvmf_target_multipath 00:09:19.382 ************************************ 00:09:19.382 09:09:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:19.382 * Looking for test storage... 00:09:19.382 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:19.382 09:09:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:19.382 09:09:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:09:19.382 09:09:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:19.382 09:09:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:19.382 09:09:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:19.382 09:09:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:19.382 09:09:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:19.382 09:09:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:09:19.382 09:09:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:09:19.382 09:09:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:09:19.382 09:09:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:09:19.382 09:09:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:09:19.382 09:09:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:09:19.382 09:09:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:09:19.382 09:09:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:19.382 09:09:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:09:19.382 09:09:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:09:19.382 09:09:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:19.382 09:09:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:19.382 09:09:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:09:19.382 09:09:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:09:19.382 09:09:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:19.382 09:09:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:09:19.382 09:09:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:09:19.382 09:09:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:09:19.382 09:09:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:09:19.382 09:09:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:19.382 09:09:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:09:19.382 09:09:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:09:19.382 09:09:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:19.382 09:09:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:19.382 09:09:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:09:19.382 09:09:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:19.382 09:09:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:19.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.382 --rc genhtml_branch_coverage=1 00:09:19.382 --rc genhtml_function_coverage=1 00:09:19.382 --rc genhtml_legend=1 00:09:19.382 --rc geninfo_all_blocks=1 00:09:19.382 --rc geninfo_unexecuted_blocks=1 00:09:19.382 00:09:19.382 ' 00:09:19.382 09:09:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:19.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.382 --rc genhtml_branch_coverage=1 00:09:19.382 --rc genhtml_function_coverage=1 00:09:19.382 --rc genhtml_legend=1 00:09:19.382 --rc geninfo_all_blocks=1 00:09:19.382 --rc geninfo_unexecuted_blocks=1 00:09:19.382 00:09:19.382 ' 00:09:19.382 09:09:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:19.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.382 --rc genhtml_branch_coverage=1 00:09:19.382 --rc genhtml_function_coverage=1 00:09:19.382 --rc genhtml_legend=1 00:09:19.382 --rc geninfo_all_blocks=1 00:09:19.382 --rc geninfo_unexecuted_blocks=1 00:09:19.382 00:09:19.382 ' 00:09:19.382 09:09:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:19.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.382 --rc genhtml_branch_coverage=1 00:09:19.382 --rc genhtml_function_coverage=1 00:09:19.382 --rc genhtml_legend=1 00:09:19.382 --rc geninfo_all_blocks=1 00:09:19.382 --rc geninfo_unexecuted_blocks=1 00:09:19.382 00:09:19.382 ' 00:09:19.382 09:09:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:19.382 09:09:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:19.382 09:09:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:19.382 09:09:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:19.382 09:09:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:19.382 09:09:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:19.382 09:09:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:19.382 09:09:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:19.382 09:09:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:19.382 09:09:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:19.382 09:09:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:19.382 09:09:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:19.382 09:09:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:19.382 09:09:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:19.382 09:09:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:19.382 09:09:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:19.382 09:09:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:19.382 09:09:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:19.382 09:09:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:19.382 09:09:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:09:19.382 09:09:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:19.382 09:09:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:19.382 09:09:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:19.382 09:09:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.382 09:09:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.382 09:09:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.382 09:09:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:19.382 09:09:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.382 09:09:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:09:19.382 09:09:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:19.382 09:09:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:19.383 09:09:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:19.383 09:09:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:19.383 09:09:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:19.383 09:09:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:19.383 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:19.383 09:09:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:19.383 09:09:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:19.383 09:09:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:19.383 09:09:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:19.383 09:09:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:19.383 09:09:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:19.383 09:09:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:19.383 09:09:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:19.383 09:09:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:19.383 09:09:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:19.383 09:09:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:19.383 09:09:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:19.383 09:09:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:19.383 09:09:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:19.383 09:09:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:19.383 09:09:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:19.383 09:09:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:19.383 09:09:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:19.383 09:09:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:09:19.383 09:09:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:21.915 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:21.915 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:09:21.915 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:21.915 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:21.915 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:21.915 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:21.915 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:21.915 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:09:21.915 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:21.915 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:09:21.915 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:09:21.915 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:09:21.915 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:09:21.915 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:09:21.915 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:09:21.915 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:21.915 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:21.915 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:21.915 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:21.915 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:21.915 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:21.915 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:21.915 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:21.915 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:21.915 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:21.915 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:21.915 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:21.915 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:21.915 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:21.915 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:21.915 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:21.915 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:21.915 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:21.915 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:21.915 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:21.915 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:21.915 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:21.915 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:21.915 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:21.915 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:21.915 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:21.915 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:21.915 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:21.915 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:21.915 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:21.915 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:21.915 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:21.915 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:21.915 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:21.915 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:21.915 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:21.915 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:21.915 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:21.915 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:21.915 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:21.915 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:21.915 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:21.915 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:21.915 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:21.916 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:21.916 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:21.916 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:21.916 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:21.916 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:21.916 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:21.916 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:21.916 09:09:26 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:21.916 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:21.916 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:21.916 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:21.916 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:21.916 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:21.916 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:21.916 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:09:21.916 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:21.916 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:21.916 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:21.916 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:21.916 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:21.916 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:21.916 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:21.916 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:21.916 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:21.916 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:21.916 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:21.916 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:21.916 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:21.916 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:21.916 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:21.916 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:21.916 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:21.916 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:21.916 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:21.916 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:21.916 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:09:21.916 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:21.916 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:21.916 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:21.916 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:21.916 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:21.916 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:21.916 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.314 ms 00:09:21.916 00:09:21.916 --- 10.0.0.2 ping statistics --- 00:09:21.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:21.916 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:09:21.916 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:21.916 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:21.916 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:09:21.916 00:09:21.916 --- 10.0.0.1 ping statistics --- 00:09:21.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:21.916 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:09:21.916 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:21.916 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:09:21.916 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:21.916 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:21.916 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:21.916 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:21.916 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:21.916 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:21.916 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:21.916 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:09:21.916 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:09:21.916 only one NIC for nvmf test 00:09:21.916 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:09:21.916 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:21.916 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:21.916 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:21.916 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
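For reference, the per-test network bring-up traced above (nvmf_tcp_init) boils down to a short sequence of iproute2 and iptables commands. The sketch below is a simplified reconstruction using the namespace, interface names and addresses that appear in this log (cvl_0_0_ns_spdk, cvl_0_0/cvl_0_1, 10.0.0.1/10.0.0.2); it is a hedged approximation, not the harness's exact nvmf_tcp_init code.

    # move the target-side port into a private network namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # initiator keeps 10.0.0.1 in the root namespace, target gets 10.0.0.2
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

    # bring both ends (and loopback inside the namespace) up
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # open the NVMe/TCP listener port; the harness tags the rule with an SPDK_NVMF
    # comment so teardown can strip it later via iptables-save | grep -v SPDK_NVMF
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment SPDK_NVMF

    # verify connectivity in both directions before the test proper starts
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1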
00:09:21.916 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:21.916 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:21.916 rmmod nvme_tcp 00:09:21.916 rmmod nvme_fabrics 00:09:21.916 rmmod nvme_keyring 00:09:21.916 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:21.916 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:21.916 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:21.916 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:21.916 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:21.916 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:21.916 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:21.916 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:21.916 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:21.916 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:21.916 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:21.916 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:21.916 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:21.916 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:21.916 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:21.916 09:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:23.822 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:23.822 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:09:23.822 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:09:23.822 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:23.822 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:23.822 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:23.822 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:23.822 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:23.822 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:23.822 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:23.822 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:23.822 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:09:23.822 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:23.822 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:23.822 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:23.822 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:23.822 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:23.822 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:23.822 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:23.822 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:23.822 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:23.822 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:23.822 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:23.822 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:23.822 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:23.822 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:23.822 00:09:23.822 real 0m4.447s 00:09:23.822 user 0m0.907s 00:09:23.822 sys 0m1.557s 00:09:23.822 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:23.822 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:23.822 ************************************ 00:09:23.822 END TEST nvmf_target_multipath 00:09:23.822 ************************************ 00:09:23.822 09:09:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:23.822 09:09:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:23.822 09:09:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:23.822 09:09:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:23.822 ************************************ 00:09:23.822 START TEST nvmf_zcopy 00:09:23.823 ************************************ 00:09:23.823 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:23.823 * Looking for test storage... 
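Both the multipath test that just exited (only one usable NIC, so it bails out with exit 0) and the zcopy test starting here run the same E810 discovery seen earlier in the trace: ports are matched by PCI vendor/device ID (0x8086:0x159b, ice driver, at 0000:0a:00.0 and 0000:0a:00.1) and each port's kernel interface name is read back from sysfs. A rough, hedged equivalent of that lookup (not the harness's actual gather_supported_nvmf_pci_devs loop):

    # list E810 ports by PCI ID; 0x159b is the Intel E810 device handled by the ice driver
    lspci -d 8086:159b

    # the net/ directory under each PCI device exposes the bound interface name
    for pci in 0000:0a:00.0 0000:0a:00.1; do
        dev=$(ls "/sys/bus/pci/devices/$pci/net/" 2>/dev/null)
        echo "Found net devices under $pci: $dev"   # cvl_0_0 / cvl_0_1 in this run
    done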
00:09:23.823 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:23.823 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:23.823 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:09:23.823 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:23.823 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:23.823 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:23.823 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:23.823 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:23.823 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:09:23.823 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:09:23.823 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:09:23.823 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:09:23.823 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:09:23.823 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:09:23.823 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:09:23.823 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:23.823 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:09:23.823 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:09:23.823 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:23.823 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:23.823 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:09:23.823 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:09:23.823 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:23.823 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:09:23.823 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:09:23.823 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:09:23.823 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:09:23.823 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:23.823 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:09:23.823 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:09:23.823 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:23.823 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:23.823 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:09:23.823 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:23.823 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:23.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.823 --rc genhtml_branch_coverage=1 00:09:23.823 --rc genhtml_function_coverage=1 00:09:23.823 --rc genhtml_legend=1 00:09:23.823 --rc geninfo_all_blocks=1 00:09:23.823 --rc geninfo_unexecuted_blocks=1 00:09:23.823 00:09:23.823 ' 00:09:23.823 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:23.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.823 --rc genhtml_branch_coverage=1 00:09:23.823 --rc genhtml_function_coverage=1 00:09:23.823 --rc genhtml_legend=1 00:09:23.823 --rc geninfo_all_blocks=1 00:09:23.823 --rc geninfo_unexecuted_blocks=1 00:09:23.823 00:09:23.823 ' 00:09:23.823 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:23.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.823 --rc genhtml_branch_coverage=1 00:09:23.823 --rc genhtml_function_coverage=1 00:09:23.823 --rc genhtml_legend=1 00:09:23.823 --rc geninfo_all_blocks=1 00:09:23.823 --rc geninfo_unexecuted_blocks=1 00:09:23.823 00:09:23.823 ' 00:09:23.823 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:23.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.823 --rc genhtml_branch_coverage=1 00:09:23.823 --rc genhtml_function_coverage=1 00:09:23.823 --rc genhtml_legend=1 00:09:23.823 --rc geninfo_all_blocks=1 00:09:23.823 --rc geninfo_unexecuted_blocks=1 00:09:23.823 00:09:23.823 ' 00:09:23.823 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:23.823 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:23.823 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:09:23.823 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:23.823 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:23.823 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:23.823 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:23.823 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:23.823 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:23.823 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:23.823 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:23.823 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:23.823 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:23.823 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:23.823 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:23.823 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:23.823 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:23.823 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:23.823 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:23.823 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:09:23.823 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:23.823 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:23.823 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:23.823 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.823 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.823 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.823 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:23.823 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.824 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:09:23.824 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:23.824 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:23.824 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:23.824 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:23.824 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:23.824 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:23.824 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:23.824 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:23.824 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:23.824 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:23.824 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:23.824 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:23.824 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:09:23.824 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:23.824 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:23.824 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:23.824 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:23.824 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:23.824 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:23.824 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:24.082 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:24.082 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:09:24.082 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:25.983 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:25.983 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:09:25.983 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:25.983 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:25.983 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:25.983 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:25.983 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:25.983 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:09:25.983 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:25.983 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:09:25.983 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:09:25.983 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:09:25.983 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:09:25.983 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:09:25.983 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:09:25.983 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:25.983 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:25.983 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:25.983 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:25.983 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:25.983 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:25.983 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:25.983 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:25.983 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:25.983 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:25.983 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:25.983 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:25.983 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:25.983 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:25.983 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:25.983 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:25.983 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:25.983 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:25.983 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:25.983 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:25.983 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:25.983 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:25.983 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:25.983 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:25.983 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:25.983 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:25.983 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:25.983 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:25.983 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:25.983 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:25.983 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:25.983 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:25.983 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:25.983 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:25.983 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:25.983 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:25.983 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:25.983 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:25.983 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:25.983 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:25.983 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:25.983 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:25.983 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:25.983 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:25.983 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:25.983 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:25.983 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:25.983 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:25.983 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:25.983 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:25.983 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:25.983 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:25.983 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:25.983 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:25.983 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:25.983 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:25.983 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:25.983 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:25.983 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:09:25.983 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:25.983 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:25.983 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:25.983 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:25.983 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:25.983 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:25.983 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:25.983 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:25.983 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:25.983 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:25.983 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:25.983 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:09:25.983 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:25.983 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:25.983 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:25.984 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:25.984 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:25.984 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:25.984 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:25.984 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:25.984 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:25.984 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:25.984 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:25.984 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:25.984 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:25.984 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:25.984 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:25.984 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.211 ms 00:09:25.984 00:09:25.984 --- 10.0.0.2 ping statistics --- 00:09:25.984 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:25.984 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:09:25.984 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:26.242 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:26.242 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.060 ms 00:09:26.242 00:09:26.242 --- 10.0.0.1 ping statistics --- 00:09:26.242 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:26.242 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:09:26.242 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:26.242 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:09:26.242 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:26.242 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:26.242 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:26.242 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:26.242 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:26.242 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:26.242 09:09:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:26.242 09:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:26.242 09:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:26.242 09:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:26.242 09:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:26.242 09:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=2874584 00:09:26.242 09:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:26.242 09:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 2874584 00:09:26.242 09:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 2874584 ']' 00:09:26.242 09:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:26.242 09:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:26.242 09:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:26.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:26.242 09:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:26.242 09:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:26.242 [2024-11-17 09:09:31.131236] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
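[Editor's note] For orientation, the nvmf_tcp_init trace above builds a two-port loopback topology on the e810 ports it found: cvl_0_0 is moved into a dedicated network namespace and becomes the target side at 10.0.0.2, while cvl_0_1 stays in the default namespace as the initiator at 10.0.0.1. A condensed sketch of those steps, reconstructed from the trace (the cvl_0_0/cvl_0_1 names are the ones detected in this particular run):

  NS=cvl_0_0_ns_spdk
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"                            # target port into its own netns
  ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                         # initiator -> target
  ip netns exec "$NS" ping -c 1 10.0.0.1                     # target -> initiator

The nvmf_tgt application is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0x2), so NVMe/TCP traffic leaves the default namespace through cvl_0_1 and reaches the target through cvl_0_0.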
00:09:26.242 [2024-11-17 09:09:31.131419] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:26.501 [2024-11-17 09:09:31.299057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:26.501 [2024-11-17 09:09:31.438842] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:26.501 [2024-11-17 09:09:31.438940] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:26.501 [2024-11-17 09:09:31.438972] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:26.501 [2024-11-17 09:09:31.438998] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:26.501 [2024-11-17 09:09:31.439019] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:26.501 [2024-11-17 09:09:31.440714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:27.435 09:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:27.435 09:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:09:27.435 09:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:27.435 09:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:27.435 09:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:27.435 09:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:27.435 09:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:27.435 09:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:27.435 09:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.435 09:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:27.435 [2024-11-17 09:09:32.183415] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:27.435 09:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.435 09:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:27.435 09:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.435 09:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:27.435 09:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.435 09:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:27.435 09:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.435 09:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:27.435 [2024-11-17 09:09:32.199660] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:09:27.435 09:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.435 09:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:27.435 09:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.435 09:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:27.435 09:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.435 09:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:27.435 09:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.435 09:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:27.435 malloc0 00:09:27.435 09:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.435 09:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:27.435 09:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.435 09:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:27.435 09:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.435 09:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:27.435 09:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:27.435 09:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:27.435 09:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:27.435 09:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:27.435 09:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:27.435 { 00:09:27.435 "params": { 00:09:27.435 "name": "Nvme$subsystem", 00:09:27.435 "trtype": "$TEST_TRANSPORT", 00:09:27.435 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:27.435 "adrfam": "ipv4", 00:09:27.435 "trsvcid": "$NVMF_PORT", 00:09:27.435 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:27.435 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:27.435 "hdgst": ${hdgst:-false}, 00:09:27.435 "ddgst": ${ddgst:-false} 00:09:27.435 }, 00:09:27.435 "method": "bdev_nvme_attach_controller" 00:09:27.435 } 00:09:27.435 EOF 00:09:27.435 )") 00:09:27.435 09:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:27.435 09:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
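[Editor's note] Collected in one place, the RPC sequence traced above is what stands up the zero-copy target for this test. rpc_cmd is the test harness's wrapper around scripts/rpc.py; the options are copied from the trace, and the comments are editorial:

  rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy             # TCP transport, zero-copy requested
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd bdev_malloc_create 32 4096 -b malloc0                    # 32 MiB malloc bdev, 4 KiB blocks
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # exposed as NSID 1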
00:09:27.435 09:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:27.435 09:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:27.435 "params": { 00:09:27.435 "name": "Nvme1", 00:09:27.435 "trtype": "tcp", 00:09:27.435 "traddr": "10.0.0.2", 00:09:27.435 "adrfam": "ipv4", 00:09:27.435 "trsvcid": "4420", 00:09:27.435 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:27.435 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:27.435 "hdgst": false, 00:09:27.435 "ddgst": false 00:09:27.435 }, 00:09:27.435 "method": "bdev_nvme_attach_controller" 00:09:27.435 }' 00:09:27.435 [2024-11-17 09:09:32.348874] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:09:27.435 [2024-11-17 09:09:32.349014] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2874737 ] 00:09:27.694 [2024-11-17 09:09:32.489268] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:27.694 [2024-11-17 09:09:32.627397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.260 Running I/O for 10 seconds... 00:09:30.129 4134.00 IOPS, 32.30 MiB/s [2024-11-17T08:09:36.518Z] 4150.50 IOPS, 32.43 MiB/s [2024-11-17T08:09:37.453Z] 4167.00 IOPS, 32.55 MiB/s [2024-11-17T08:09:38.388Z] 4177.00 IOPS, 32.63 MiB/s [2024-11-17T08:09:39.323Z] 4182.80 IOPS, 32.68 MiB/s [2024-11-17T08:09:40.257Z] 4176.00 IOPS, 32.62 MiB/s [2024-11-17T08:09:41.192Z] 4162.14 IOPS, 32.52 MiB/s [2024-11-17T08:09:42.567Z] 4167.75 IOPS, 32.56 MiB/s [2024-11-17T08:09:43.501Z] 4165.44 IOPS, 32.54 MiB/s [2024-11-17T08:09:43.501Z] 4163.00 IOPS, 32.52 MiB/s 00:09:38.488 Latency(us) 00:09:38.488 [2024-11-17T08:09:43.501Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:38.488 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:38.488 Verification LBA range: start 0x0 length 0x1000 00:09:38.488 Nvme1n1 : 10.02 4168.33 32.57 0.00 0.00 30623.91 728.18 37088.52 00:09:38.488 [2024-11-17T08:09:43.501Z] =================================================================================================================== 00:09:38.488 [2024-11-17T08:09:43.501Z] Total : 4168.33 32.57 0.00 0.00 30623.91 728.18 37088.52 00:09:39.424 09:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2876067 00:09:39.424 09:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:39.424 09:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:39.424 09:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:39.424 09:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:39.424 09:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:39.424 09:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:39.424 09:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:39.424 09:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:39.424 { 00:09:39.424 "params": { 00:09:39.424 "name": 
"Nvme$subsystem", 00:09:39.424 "trtype": "$TEST_TRANSPORT", 00:09:39.424 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:39.424 "adrfam": "ipv4", 00:09:39.424 "trsvcid": "$NVMF_PORT", 00:09:39.424 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:39.424 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:39.424 "hdgst": ${hdgst:-false}, 00:09:39.424 "ddgst": ${ddgst:-false} 00:09:39.424 }, 00:09:39.424 "method": "bdev_nvme_attach_controller" 00:09:39.424 } 00:09:39.424 EOF 00:09:39.424 )") 00:09:39.424 09:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:39.424 [2024-11-17 09:09:44.073155] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.424 [2024-11-17 09:09:44.073209] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.424 09:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:09:39.424 09:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:39.424 09:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:39.424 "params": { 00:09:39.424 "name": "Nvme1", 00:09:39.424 "trtype": "tcp", 00:09:39.424 "traddr": "10.0.0.2", 00:09:39.424 "adrfam": "ipv4", 00:09:39.424 "trsvcid": "4420", 00:09:39.424 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:39.424 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:39.424 "hdgst": false, 00:09:39.424 "ddgst": false 00:09:39.424 }, 00:09:39.424 "method": "bdev_nvme_attach_controller" 00:09:39.424 }' 00:09:39.424 [2024-11-17 09:09:44.081113] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.424 [2024-11-17 09:09:44.081144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.424 [2024-11-17 09:09:44.089101] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.424 [2024-11-17 09:09:44.089130] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.424 [2024-11-17 09:09:44.097140] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.424 [2024-11-17 09:09:44.097168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.424 [2024-11-17 09:09:44.105174] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.424 [2024-11-17 09:09:44.105202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.424 [2024-11-17 09:09:44.113174] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.424 [2024-11-17 09:09:44.113201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.424 [2024-11-17 09:09:44.121203] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.424 [2024-11-17 09:09:44.121231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.424 [2024-11-17 09:09:44.129223] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.424 [2024-11-17 09:09:44.129251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.424 [2024-11-17 09:09:44.137229] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.424 [2024-11-17 09:09:44.137255] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.424 [2024-11-17 09:09:44.145270] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.424 [2024-11-17 09:09:44.145297] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.424 [2024-11-17 09:09:44.153203] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:09:39.424 [2024-11-17 09:09:44.153307] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.424 [2024-11-17 09:09:44.153339] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.424 [2024-11-17 09:09:44.153339] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2876067 ] 00:09:39.424 [2024-11-17 09:09:44.161360] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.424 [2024-11-17 09:09:44.161405] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.424 [2024-11-17 09:09:44.169394] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.425 [2024-11-17 09:09:44.169429] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.425 [2024-11-17 09:09:44.177364] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.425 [2024-11-17 09:09:44.177406] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.425 [2024-11-17 09:09:44.185437] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.425 [2024-11-17 09:09:44.185473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.425 [2024-11-17 09:09:44.193454] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.425 [2024-11-17 09:09:44.193488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.425 [2024-11-17 09:09:44.201466] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.425 [2024-11-17 09:09:44.201500] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.425 [2024-11-17 09:09:44.209491] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.425 [2024-11-17 09:09:44.209526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.425 [2024-11-17 09:09:44.217486] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.425 [2024-11-17 09:09:44.217519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.425 [2024-11-17 09:09:44.225532] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.425 [2024-11-17 09:09:44.225566] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.425 [2024-11-17 09:09:44.233581] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.425 [2024-11-17 09:09:44.233610] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.425 [2024-11-17 09:09:44.241539] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.425 [2024-11-17 09:09:44.241569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.425 
[2024-11-17 09:09:44.249577] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.425 [2024-11-17 09:09:44.249606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.425 [2024-11-17 09:09:44.257615] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.425 [2024-11-17 09:09:44.257661] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.425 [2024-11-17 09:09:44.265610] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.425 [2024-11-17 09:09:44.265639] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.425 [2024-11-17 09:09:44.273666] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.425 [2024-11-17 09:09:44.273695] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.425 [2024-11-17 09:09:44.281675] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.425 [2024-11-17 09:09:44.281704] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.425 [2024-11-17 09:09:44.289716] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.425 [2024-11-17 09:09:44.289751] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.425 [2024-11-17 09:09:44.296550] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.425 [2024-11-17 09:09:44.297731] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.425 [2024-11-17 09:09:44.297760] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.425 [2024-11-17 09:09:44.305757] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.425 [2024-11-17 09:09:44.305792] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.425 [2024-11-17 09:09:44.313838] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.425 [2024-11-17 09:09:44.313896] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.425 [2024-11-17 09:09:44.321893] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.425 [2024-11-17 09:09:44.321953] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.425 [2024-11-17 09:09:44.329822] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.425 [2024-11-17 09:09:44.329856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.425 [2024-11-17 09:09:44.337868] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.425 [2024-11-17 09:09:44.337905] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.425 [2024-11-17 09:09:44.345878] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.425 [2024-11-17 09:09:44.345913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.425 [2024-11-17 09:09:44.353936] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.425 [2024-11-17 09:09:44.353970] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.425 [2024-11-17 09:09:44.361945] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.425 [2024-11-17 09:09:44.361980] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.425 [2024-11-17 09:09:44.369953] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.425 [2024-11-17 09:09:44.369987] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.425 [2024-11-17 09:09:44.378003] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.425 [2024-11-17 09:09:44.378038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.425 [2024-11-17 09:09:44.386023] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.425 [2024-11-17 09:09:44.386057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.425 [2024-11-17 09:09:44.394043] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.425 [2024-11-17 09:09:44.394078] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.425 [2024-11-17 09:09:44.402066] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.425 [2024-11-17 09:09:44.402100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.425 [2024-11-17 09:09:44.410069] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.425 [2024-11-17 09:09:44.410103] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.425 [2024-11-17 09:09:44.418113] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.425 [2024-11-17 09:09:44.418148] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.425 [2024-11-17 09:09:44.426152] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.425 [2024-11-17 09:09:44.426187] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.425 [2024-11-17 09:09:44.434135] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.425 [2024-11-17 09:09:44.434169] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.425 [2024-11-17 09:09:44.435048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.683 [2024-11-17 09:09:44.442185] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.684 [2024-11-17 09:09:44.442220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.684 [2024-11-17 09:09:44.450265] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.684 [2024-11-17 09:09:44.450310] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.684 [2024-11-17 09:09:44.458263] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.684 [2024-11-17 09:09:44.458317] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.684 [2024-11-17 09:09:44.466295] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.684 [2024-11-17 09:09:44.466338] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.684 [2024-11-17 09:09:44.474259] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.684 [2024-11-17 09:09:44.474292] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.684 [2024-11-17 09:09:44.482297] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.684 [2024-11-17 09:09:44.482332] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.684 [2024-11-17 09:09:44.490335] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.684 [2024-11-17 09:09:44.490378] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.684 [2024-11-17 09:09:44.498320] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.684 [2024-11-17 09:09:44.498354] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.684 [2024-11-17 09:09:44.506363] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.684 [2024-11-17 09:09:44.506420] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.684 [2024-11-17 09:09:44.514392] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.684 [2024-11-17 09:09:44.514438] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.684 [2024-11-17 09:09:44.522420] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.684 [2024-11-17 09:09:44.522453] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.684 [2024-11-17 09:09:44.530506] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.684 [2024-11-17 09:09:44.530557] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.684 [2024-11-17 09:09:44.538504] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.684 [2024-11-17 09:09:44.538555] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.684 [2024-11-17 09:09:44.546554] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.684 [2024-11-17 09:09:44.546606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.684 [2024-11-17 09:09:44.554569] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.684 [2024-11-17 09:09:44.554623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.684 [2024-11-17 09:09:44.562508] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.684 [2024-11-17 09:09:44.562537] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.684 [2024-11-17 09:09:44.570551] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.684 [2024-11-17 09:09:44.570581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.684 [2024-11-17 09:09:44.578561] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.684 [2024-11-17 09:09:44.578590] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.684 [2024-11-17 09:09:44.586593] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.684 [2024-11-17 09:09:44.586633] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.684 [2024-11-17 09:09:44.594613] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.684 [2024-11-17 09:09:44.594660] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.684 [2024-11-17 09:09:44.602609] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.684 [2024-11-17 09:09:44.602665] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.684 [2024-11-17 09:09:44.610670] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.684 [2024-11-17 09:09:44.610713] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.684 [2024-11-17 09:09:44.618730] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.684 [2024-11-17 09:09:44.618765] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.684 [2024-11-17 09:09:44.626701] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.684 [2024-11-17 09:09:44.626735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.684 [2024-11-17 09:09:44.634757] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.684 [2024-11-17 09:09:44.634792] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.684 [2024-11-17 09:09:44.642770] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.684 [2024-11-17 09:09:44.642804] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.684 [2024-11-17 09:09:44.650781] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.684 [2024-11-17 09:09:44.650815] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.684 [2024-11-17 09:09:44.658820] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.684 [2024-11-17 09:09:44.658854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.684 [2024-11-17 09:09:44.666830] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.684 [2024-11-17 09:09:44.666862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.684 [2024-11-17 09:09:44.674871] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.684 [2024-11-17 09:09:44.674906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.684 [2024-11-17 09:09:44.682970] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.684 [2024-11-17 09:09:44.683024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.684 [2024-11-17 09:09:44.690956] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.684 [2024-11-17 09:09:44.691009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.943 [2024-11-17 09:09:44.699024] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.943 [2024-11-17 09:09:44.699080] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.943 [2024-11-17 09:09:44.706968] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.943 [2024-11-17 09:09:44.707003] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.943 [2024-11-17 09:09:44.714965] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.943 [2024-11-17 09:09:44.715003] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.943 [2024-11-17 09:09:44.723002] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.943 [2024-11-17 09:09:44.723037] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.943 [2024-11-17 09:09:44.731006] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.943 [2024-11-17 09:09:44.731040] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.943 [2024-11-17 09:09:44.739049] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.943 [2024-11-17 09:09:44.739084] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.943 [2024-11-17 09:09:44.747075] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.943 [2024-11-17 09:09:44.747109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.943 [2024-11-17 09:09:44.755077] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.943 [2024-11-17 09:09:44.755111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.943 [2024-11-17 09:09:44.763115] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.943 [2024-11-17 09:09:44.763157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.943 [2024-11-17 09:09:44.771137] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.943 [2024-11-17 09:09:44.771171] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.943 [2024-11-17 09:09:44.779165] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.943 [2024-11-17 09:09:44.779199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.943 [2024-11-17 09:09:44.787186] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.943 [2024-11-17 09:09:44.787221] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.943 [2024-11-17 09:09:44.795184] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.943 [2024-11-17 09:09:44.795219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.943 [2024-11-17 09:09:44.803246] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.943 [2024-11-17 09:09:44.803285] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.943 [2024-11-17 09:09:44.811668] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.943 [2024-11-17 09:09:44.811718] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.943 [2024-11-17 09:09:44.819666] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.943 [2024-11-17 09:09:44.819705] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.943 [2024-11-17 09:09:44.827683] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.943 [2024-11-17 09:09:44.827721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.943 [2024-11-17 09:09:44.835710] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.943 [2024-11-17 09:09:44.835747] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.943 [2024-11-17 09:09:44.843737] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.943 [2024-11-17 09:09:44.843775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.943 [2024-11-17 09:09:44.851763] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.943 [2024-11-17 09:09:44.851802] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.943 [2024-11-17 09:09:44.859783] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.943 [2024-11-17 09:09:44.859821] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.943 [2024-11-17 09:09:44.867834] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.943 [2024-11-17 09:09:44.867870] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.943 [2024-11-17 09:09:44.875893] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.943 [2024-11-17 09:09:44.875932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.943 [2024-11-17 09:09:44.883874] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.943 [2024-11-17 09:09:44.883909] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.943 Running I/O for 5 seconds... 
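[Editor's note] The wall of 'Requested NSID 1 already in use' / 'Unable to add namespace' pairs around this point is produced deliberately: while the 5-second randrw bdevperf job runs in the background (its pid is saved as perfpid above), the test keeps re-issuing nvmf_subsystem_add_ns for a namespace ID that malloc0 already occupies. Judging by the nvmf_rpc_ns_paused callback named in the messages, each failed attempt still drives the subsystem through a pause/resume cycle underneath the outstanding zero-copy I/O. A minimal sketch of that pattern, with the loop condition and error handling assumed rather than taken from zcopy.sh:

  bdevperf --json <(gen_nvmf_target_json) -t 5 -q 128 -w randrw -M 50 -o 8192 &
  perfpid=$!
  while kill -0 "$perfpid" 2>/dev/null; do
      # Expected to fail: NSID 1 is already claimed by malloc0. The point is the
      # repeated pause/resume of cnode1 while zero-copy requests are in flight.
      rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
  done
  wait "$perfpid"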
00:09:39.943 [2024-11-17 09:09:44.891980] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.943 [2024-11-17 09:09:44.892020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.943 [2024-11-17 09:09:44.909555] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.943 [2024-11-17 09:09:44.909595] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.943 [2024-11-17 09:09:44.923836] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.943 [2024-11-17 09:09:44.923878] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.943 [2024-11-17 09:09:44.938694] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.943 [2024-11-17 09:09:44.938743] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.943 [2024-11-17 09:09:44.954054] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.943 [2024-11-17 09:09:44.954095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.201 [2024-11-17 09:09:44.966544] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.202 [2024-11-17 09:09:44.966582] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.202 [2024-11-17 09:09:44.981970] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.202 [2024-11-17 09:09:44.982011] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.202 [2024-11-17 09:09:44.996966] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.202 [2024-11-17 09:09:44.997007] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.202 [2024-11-17 09:09:45.012895] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.202 [2024-11-17 09:09:45.012936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.202 [2024-11-17 09:09:45.028448] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.202 [2024-11-17 09:09:45.028484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.202 [2024-11-17 09:09:45.044404] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.202 [2024-11-17 09:09:45.044458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.202 [2024-11-17 09:09:45.060064] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.202 [2024-11-17 09:09:45.060105] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.202 [2024-11-17 09:09:45.075552] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.202 [2024-11-17 09:09:45.075589] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.202 [2024-11-17 09:09:45.088634] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.202 [2024-11-17 09:09:45.088687] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.202 [2024-11-17 09:09:45.103751] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.202 
[2024-11-17 09:09:45.103793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.202 [2024-11-17 09:09:45.119350] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.202 [2024-11-17 09:09:45.119403] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.202 [2024-11-17 09:09:45.135480] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.202 [2024-11-17 09:09:45.135518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.202 [2024-11-17 09:09:45.148061] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.202 [2024-11-17 09:09:45.148102] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.202 [2024-11-17 09:09:45.163023] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.202 [2024-11-17 09:09:45.163064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.202 [2024-11-17 09:09:45.177354] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.202 [2024-11-17 09:09:45.177401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.202 [2024-11-17 09:09:45.191377] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.202 [2024-11-17 09:09:45.191426] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.202 [2024-11-17 09:09:45.206272] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.202 [2024-11-17 09:09:45.206324] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.460 [2024-11-17 09:09:45.220633] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.460 [2024-11-17 09:09:45.220672] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.460 [2024-11-17 09:09:45.234894] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.460 [2024-11-17 09:09:45.234931] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.460 [2024-11-17 09:09:45.249338] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.460 [2024-11-17 09:09:45.249385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.460 [2024-11-17 09:09:45.264219] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.460 [2024-11-17 09:09:45.264268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.460 [2024-11-17 09:09:45.278086] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.460 [2024-11-17 09:09:45.278123] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.460 [2024-11-17 09:09:45.292745] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.460 [2024-11-17 09:09:45.292797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.460 [2024-11-17 09:09:45.306789] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.460 [2024-11-17 09:09:45.306827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.460 [2024-11-17 09:09:45.320398] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.460 [2024-11-17 09:09:45.320435] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.460 [2024-11-17 09:09:45.334381] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.460 [2024-11-17 09:09:45.334433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.460 [2024-11-17 09:09:45.349245] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.460 [2024-11-17 09:09:45.349298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.460 [2024-11-17 09:09:45.363233] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.460 [2024-11-17 09:09:45.363270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.460 [2024-11-17 09:09:45.377774] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.460 [2024-11-17 09:09:45.377811] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.460 [2024-11-17 09:09:45.391544] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.460 [2024-11-17 09:09:45.391581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.460 [2024-11-17 09:09:45.405911] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.460 [2024-11-17 09:09:45.405948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.460 [2024-11-17 09:09:45.420119] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.460 [2024-11-17 09:09:45.420156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.460 [2024-11-17 09:09:45.434179] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.460 [2024-11-17 09:09:45.434215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.460 [2024-11-17 09:09:45.448685] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.460 [2024-11-17 09:09:45.448721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.460 [2024-11-17 09:09:45.462981] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.460 [2024-11-17 09:09:45.463018] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.718 [2024-11-17 09:09:45.477274] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.718 [2024-11-17 09:09:45.477313] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.718 [2024-11-17 09:09:45.492498] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.718 [2024-11-17 09:09:45.492535] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.718 [2024-11-17 09:09:45.508272] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.718 [2024-11-17 09:09:45.508314] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.718 [2024-11-17 09:09:45.523897] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.718 [2024-11-17 09:09:45.523939] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.718 [2024-11-17 09:09:45.537492] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.718 [2024-11-17 09:09:45.537529] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.718 [2024-11-17 09:09:45.552939] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.718 [2024-11-17 09:09:45.552980] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.718 [2024-11-17 09:09:45.568199] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.718 [2024-11-17 09:09:45.568240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.718 [2024-11-17 09:09:45.583521] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.718 [2024-11-17 09:09:45.583559] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.718 [2024-11-17 09:09:45.598459] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.718 [2024-11-17 09:09:45.598496] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.718 [2024-11-17 09:09:45.613972] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.718 [2024-11-17 09:09:45.614013] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.718 [2024-11-17 09:09:45.629418] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.718 [2024-11-17 09:09:45.629455] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.718 [2024-11-17 09:09:45.644619] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.718 [2024-11-17 09:09:45.644674] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.718 [2024-11-17 09:09:45.659359] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.718 [2024-11-17 09:09:45.659409] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.718 [2024-11-17 09:09:45.674388] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.718 [2024-11-17 09:09:45.674441] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.718 [2024-11-17 09:09:45.690430] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.718 [2024-11-17 09:09:45.690466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.718 [2024-11-17 09:09:45.705711] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.718 [2024-11-17 09:09:45.705752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.718 [2024-11-17 09:09:45.721136] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.718 [2024-11-17 09:09:45.721177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.976 [2024-11-17 09:09:45.734222] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.976 [2024-11-17 09:09:45.734264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.976 [2024-11-17 09:09:45.749058] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.976 [2024-11-17 09:09:45.749098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.976 [2024-11-17 09:09:45.763936] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.976 [2024-11-17 09:09:45.763977] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.976 [2024-11-17 09:09:45.778781] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.976 [2024-11-17 09:09:45.778822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.976 [2024-11-17 09:09:45.793949] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.976 [2024-11-17 09:09:45.793989] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.976 [2024-11-17 09:09:45.809849] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.976 [2024-11-17 09:09:45.809890] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.976 [2024-11-17 09:09:45.825152] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.976 [2024-11-17 09:09:45.825193] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.976 [2024-11-17 09:09:45.840465] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.976 [2024-11-17 09:09:45.840518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.976 [2024-11-17 09:09:45.855740] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.976 [2024-11-17 09:09:45.855784] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.976 [2024-11-17 09:09:45.871152] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.976 [2024-11-17 09:09:45.871191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.976 [2024-11-17 09:09:45.883317] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.976 [2024-11-17 09:09:45.883357] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.976 8444.00 IOPS, 65.97 MiB/s [2024-11-17T08:09:45.989Z] [2024-11-17 09:09:45.898051] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.976 [2024-11-17 09:09:45.898090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.976 [2024-11-17 09:09:45.913575] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.976 [2024-11-17 09:09:45.913611] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.976 [2024-11-17 09:09:45.926767] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.976 [2024-11-17 09:09:45.926807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.976 [2024-11-17 09:09:45.941828] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.976 [2024-11-17 09:09:45.941867] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.976 [2024-11-17 09:09:45.957634] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
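[Editor's note] As a quick sanity check on the throughput figures, bdevperf's MiB/s column is simply IOPS multiplied by the 8 KiB I/O size (-o 8192); for the per-second sample just above and the verify-run average reported earlier:

  awk 'BEGIN {
      printf "%.2f MiB/s\n", 8444.00 * 8192 / (1024 * 1024)    # 65.97 MiB/s, randrw sample
      printf "%.2f MiB/s\n", 4168.33 * 8192 / (1024 * 1024)    # 32.57 MiB/s, verify-run average
  }'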
00:09:40.976 [2024-11-17 09:09:45.957689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.976 [2024-11-17 09:09:45.973239] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.976 [2024-11-17 09:09:45.973279] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.236 [2024-11-17 09:09:45.988530] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.236 [2024-11-17 09:09:45.988567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.236 [2024-11-17 09:09:46.003188] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.236 [2024-11-17 09:09:46.003228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.236 [2024-11-17 09:09:46.018061] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.236 [2024-11-17 09:09:46.018100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.236 [2024-11-17 09:09:46.033388] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.236 [2024-11-17 09:09:46.033440] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.236 [2024-11-17 09:09:46.048990] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.236 [2024-11-17 09:09:46.049040] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.236 [2024-11-17 09:09:46.064560] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.236 [2024-11-17 09:09:46.064596] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.236 [2024-11-17 09:09:46.080072] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.236 [2024-11-17 09:09:46.080113] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.236 [2024-11-17 09:09:46.095969] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.236 [2024-11-17 09:09:46.096009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.236 [2024-11-17 09:09:46.111510] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.236 [2024-11-17 09:09:46.111548] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.236 [2024-11-17 09:09:46.127287] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.236 [2024-11-17 09:09:46.127327] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.236 [2024-11-17 09:09:46.142264] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.236 [2024-11-17 09:09:46.142304] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.236 [2024-11-17 09:09:46.157572] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.236 [2024-11-17 09:09:46.157609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.236 [2024-11-17 09:09:46.172730] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.236 [2024-11-17 09:09:46.172770] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.236 [2024-11-17 09:09:46.187881] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.236 [2024-11-17 09:09:46.187921] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.236 [2024-11-17 09:09:46.202557] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.236 [2024-11-17 09:09:46.202592] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.236 [2024-11-17 09:09:46.217293] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.236 [2024-11-17 09:09:46.217345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.236 [2024-11-17 09:09:46.233182] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.236 [2024-11-17 09:09:46.233223] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.530 [2024-11-17 09:09:46.247719] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.530 [2024-11-17 09:09:46.247756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.530 [2024-11-17 09:09:46.261765] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.530 [2024-11-17 09:09:46.261807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.530 [2024-11-17 09:09:46.277809] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.530 [2024-11-17 09:09:46.277850] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.530 [2024-11-17 09:09:46.292812] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.530 [2024-11-17 09:09:46.292852] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.530 [2024-11-17 09:09:46.308695] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.530 [2024-11-17 09:09:46.308737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.530 [2024-11-17 09:09:46.324507] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.530 [2024-11-17 09:09:46.324550] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.530 [2024-11-17 09:09:46.339597] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.530 [2024-11-17 09:09:46.339657] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.530 [2024-11-17 09:09:46.354650] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.530 [2024-11-17 09:09:46.354702] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.530 [2024-11-17 09:09:46.369359] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.530 [2024-11-17 09:09:46.369430] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.530 [2024-11-17 09:09:46.384798] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.530 [2024-11-17 09:09:46.384838] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.530 [2024-11-17 09:09:46.400089] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.530 [2024-11-17 09:09:46.400129] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.530 [2024-11-17 09:09:46.414787] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.530 [2024-11-17 09:09:46.414827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.530 [2024-11-17 09:09:46.430420] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.530 [2024-11-17 09:09:46.430456] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.530 [2024-11-17 09:09:46.445128] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.530 [2024-11-17 09:09:46.445168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.530 [2024-11-17 09:09:46.461269] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.530 [2024-11-17 09:09:46.461309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.530 [2024-11-17 09:09:46.476440] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.530 [2024-11-17 09:09:46.476476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.530 [2024-11-17 09:09:46.492075] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.530 [2024-11-17 09:09:46.492114] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.530 [2024-11-17 09:09:46.507190] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.530 [2024-11-17 09:09:46.507231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.810 [2024-11-17 09:09:46.520947] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.810 [2024-11-17 09:09:46.520985] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.810 [2024-11-17 09:09:46.535100] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.810 [2024-11-17 09:09:46.535141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.810 [2024-11-17 09:09:46.551228] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.810 [2024-11-17 09:09:46.551268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.810 [2024-11-17 09:09:46.566378] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.810 [2024-11-17 09:09:46.566431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.811 [2024-11-17 09:09:46.582001] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.811 [2024-11-17 09:09:46.582041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.811 [2024-11-17 09:09:46.597050] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.811 [2024-11-17 09:09:46.597089] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.811 [2024-11-17 09:09:46.612078] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.811 [2024-11-17 09:09:46.612118] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.811 [2024-11-17 09:09:46.627521] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.811 [2024-11-17 09:09:46.627566] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.811 [2024-11-17 09:09:46.642898] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.811 [2024-11-17 09:09:46.642938] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.811 [2024-11-17 09:09:46.657979] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.811 [2024-11-17 09:09:46.658019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.811 [2024-11-17 09:09:46.673937] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.811 [2024-11-17 09:09:46.673978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.811 [2024-11-17 09:09:46.689151] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.811 [2024-11-17 09:09:46.689190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.811 [2024-11-17 09:09:46.704639] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.811 [2024-11-17 09:09:46.704675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.811 [2024-11-17 09:09:46.719891] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.811 [2024-11-17 09:09:46.719932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.811 [2024-11-17 09:09:46.735392] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.811 [2024-11-17 09:09:46.735444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.811 [2024-11-17 09:09:46.748859] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.811 [2024-11-17 09:09:46.748896] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.811 [2024-11-17 09:09:46.763585] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.811 [2024-11-17 09:09:46.763622] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.811 [2024-11-17 09:09:46.778837] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.811 [2024-11-17 09:09:46.778877] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.811 [2024-11-17 09:09:46.794004] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.811 [2024-11-17 09:09:46.794044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.811 [2024-11-17 09:09:46.809501] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.811 [2024-11-17 09:09:46.809537] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.069 [2024-11-17 09:09:46.824477] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.069 [2024-11-17 09:09:46.824522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.069 [2024-11-17 09:09:46.839838] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.069 [2024-11-17 09:09:46.839879] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.069 [2024-11-17 09:09:46.854849] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.069 [2024-11-17 09:09:46.854889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.069 [2024-11-17 09:09:46.869546] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.069 [2024-11-17 09:09:46.869581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.069 [2024-11-17 09:09:46.884587] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.069 [2024-11-17 09:09:46.884623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.069 8397.00 IOPS, 65.60 MiB/s [2024-11-17T08:09:47.082Z] [2024-11-17 09:09:46.899853] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.069 [2024-11-17 09:09:46.899894] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.069 [2024-11-17 09:09:46.915508] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.069 [2024-11-17 09:09:46.915545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.069 [2024-11-17 09:09:46.929170] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.069 [2024-11-17 09:09:46.929210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.069 [2024-11-17 09:09:46.944934] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.069 [2024-11-17 09:09:46.944975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.069 [2024-11-17 09:09:46.957799] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.069 [2024-11-17 09:09:46.957839] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.069 [2024-11-17 09:09:46.972953] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.069 [2024-11-17 09:09:46.972994] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.069 [2024-11-17 09:09:46.988383] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.069 [2024-11-17 09:09:46.988436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.069 [2024-11-17 09:09:47.001177] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.069 [2024-11-17 09:09:47.001214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.069 [2024-11-17 09:09:47.016362] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.069 [2024-11-17 09:09:47.016429] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.069 [2024-11-17 09:09:47.031602] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.069 [2024-11-17 09:09:47.031638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.069 [2024-11-17 09:09:47.047262] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.070 [2024-11-17 09:09:47.047302] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.070 [2024-11-17 
09:09:47.063034] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.070 [2024-11-17 09:09:47.063073] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.070 [2024-11-17 09:09:47.077962] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.070 [2024-11-17 09:09:47.078002] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.328 [2024-11-17 09:09:47.093175] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.328 [2024-11-17 09:09:47.093215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.328 [2024-11-17 09:09:47.106762] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.328 [2024-11-17 09:09:47.106802] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.328 [2024-11-17 09:09:47.121973] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.328 [2024-11-17 09:09:47.122013] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.328 [2024-11-17 09:09:47.137330] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.328 [2024-11-17 09:09:47.137380] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.328 [2024-11-17 09:09:47.153381] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.328 [2024-11-17 09:09:47.153442] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.328 [2024-11-17 09:09:47.168585] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.328 [2024-11-17 09:09:47.168621] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.328 [2024-11-17 09:09:47.184427] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.328 [2024-11-17 09:09:47.184474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.328 [2024-11-17 09:09:47.200160] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.328 [2024-11-17 09:09:47.200200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.328 [2024-11-17 09:09:47.215331] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.328 [2024-11-17 09:09:47.215378] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.328 [2024-11-17 09:09:47.229959] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.328 [2024-11-17 09:09:47.229999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.328 [2024-11-17 09:09:47.245291] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.328 [2024-11-17 09:09:47.245331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.328 [2024-11-17 09:09:47.261054] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.328 [2024-11-17 09:09:47.261094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.328 [2024-11-17 09:09:47.277203] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.328 [2024-11-17 09:09:47.277243] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.328 [2024-11-17 09:09:47.292515] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.328 [2024-11-17 09:09:47.292551] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.328 [2024-11-17 09:09:47.305623] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.328 [2024-11-17 09:09:47.305659] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.328 [2024-11-17 09:09:47.321055] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.328 [2024-11-17 09:09:47.321095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.328 [2024-11-17 09:09:47.335790] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.328 [2024-11-17 09:09:47.335830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.586 [2024-11-17 09:09:47.350894] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.586 [2024-11-17 09:09:47.350934] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.586 [2024-11-17 09:09:47.365929] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.586 [2024-11-17 09:09:47.365969] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.586 [2024-11-17 09:09:47.380941] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.586 [2024-11-17 09:09:47.380980] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.586 [2024-11-17 09:09:47.396459] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.586 [2024-11-17 09:09:47.396502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.586 [2024-11-17 09:09:47.410742] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.586 [2024-11-17 09:09:47.410793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.586 [2024-11-17 09:09:47.425425] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.586 [2024-11-17 09:09:47.425463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.586 [2024-11-17 09:09:47.440459] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.586 [2024-11-17 09:09:47.440511] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.586 [2024-11-17 09:09:47.455600] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.586 [2024-11-17 09:09:47.455637] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.586 [2024-11-17 09:09:47.470688] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.586 [2024-11-17 09:09:47.470740] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.586 [2024-11-17 09:09:47.485133] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.586 [2024-11-17 09:09:47.485173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.586 [2024-11-17 09:09:47.500211] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.586 [2024-11-17 09:09:47.500251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.586 [2024-11-17 09:09:47.515348] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.586 [2024-11-17 09:09:47.515414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.587 [2024-11-17 09:09:47.530496] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.587 [2024-11-17 09:09:47.530532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.587 [2024-11-17 09:09:47.545496] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.587 [2024-11-17 09:09:47.545532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.587 [2024-11-17 09:09:47.559901] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.587 [2024-11-17 09:09:47.559941] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.587 [2024-11-17 09:09:47.574754] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.587 [2024-11-17 09:09:47.574793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.587 [2024-11-17 09:09:47.589560] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.587 [2024-11-17 09:09:47.589596] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.845 [2024-11-17 09:09:47.604837] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.845 [2024-11-17 09:09:47.604877] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.845 [2024-11-17 09:09:47.617733] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.845 [2024-11-17 09:09:47.617773] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.845 [2024-11-17 09:09:47.633080] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.845 [2024-11-17 09:09:47.633120] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.845 [2024-11-17 09:09:47.648622] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.845 [2024-11-17 09:09:47.648688] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.845 [2024-11-17 09:09:47.661864] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.845 [2024-11-17 09:09:47.661903] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.845 [2024-11-17 09:09:47.677311] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.845 [2024-11-17 09:09:47.677350] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.845 [2024-11-17 09:09:47.692507] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.845 [2024-11-17 09:09:47.692544] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.845 [2024-11-17 09:09:47.706925] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.845 [2024-11-17 09:09:47.706965] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.845 [2024-11-17 09:09:47.721957] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.845 [2024-11-17 09:09:47.721997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.845 [2024-11-17 09:09:47.736874] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.845 [2024-11-17 09:09:47.736914] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.845 [2024-11-17 09:09:47.752133] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.845 [2024-11-17 09:09:47.752182] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.845 [2024-11-17 09:09:47.767031] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.845 [2024-11-17 09:09:47.767071] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.845 [2024-11-17 09:09:47.782844] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.845 [2024-11-17 09:09:47.782884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.845 [2024-11-17 09:09:47.798257] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.845 [2024-11-17 09:09:47.798296] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.845 [2024-11-17 09:09:47.813864] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.845 [2024-11-17 09:09:47.813904] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.845 [2024-11-17 09:09:47.829139] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.845 [2024-11-17 09:09:47.829179] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.845 [2024-11-17 09:09:47.844551] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.845 [2024-11-17 09:09:47.844589] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.103 [2024-11-17 09:09:47.860578] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.103 [2024-11-17 09:09:47.860615] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.103 [2024-11-17 09:09:47.876201] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.103 [2024-11-17 09:09:47.876242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.103 [2024-11-17 09:09:47.891542] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.103 [2024-11-17 09:09:47.891579] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.103 8371.33 IOPS, 65.40 MiB/s [2024-11-17T08:09:48.116Z] [2024-11-17 09:09:47.905228] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.103 [2024-11-17 09:09:47.905263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.103 [2024-11-17 09:09:47.919511] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.103 [2024-11-17 09:09:47.919548] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.103 [2024-11-17 
09:09:47.934115] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.103 [2024-11-17 09:09:47.934152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.103 [2024-11-17 09:09:47.948130] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.103 [2024-11-17 09:09:47.948167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.103 [2024-11-17 09:09:47.962657] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.103 [2024-11-17 09:09:47.962694] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.103 [2024-11-17 09:09:47.977308] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.103 [2024-11-17 09:09:47.977345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.103 [2024-11-17 09:09:47.992002] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.103 [2024-11-17 09:09:47.992053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.103 [2024-11-17 09:09:48.006520] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.103 [2024-11-17 09:09:48.006556] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.103 [2024-11-17 09:09:48.020776] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.103 [2024-11-17 09:09:48.020827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.103 [2024-11-17 09:09:48.035250] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.103 [2024-11-17 09:09:48.035311] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.103 [2024-11-17 09:09:48.049500] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.103 [2024-11-17 09:09:48.049537] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.103 [2024-11-17 09:09:48.063699] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.103 [2024-11-17 09:09:48.063750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.103 [2024-11-17 09:09:48.078012] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.103 [2024-11-17 09:09:48.078059] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.103 [2024-11-17 09:09:48.092300] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.103 [2024-11-17 09:09:48.092351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.103 [2024-11-17 09:09:48.106816] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.103 [2024-11-17 09:09:48.106867] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.360 [2024-11-17 09:09:48.121649] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.360 [2024-11-17 09:09:48.121686] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.360 [2024-11-17 09:09:48.136355] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.360 [2024-11-17 09:09:48.136413] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.360 [2024-11-17 09:09:48.148716] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.360 [2024-11-17 09:09:48.148769] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.360 [2024-11-17 09:09:48.163264] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.360 [2024-11-17 09:09:48.163302] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.360 [2024-11-17 09:09:48.178265] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.361 [2024-11-17 09:09:48.178306] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.361 [2024-11-17 09:09:48.194137] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.361 [2024-11-17 09:09:48.194176] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.361 [2024-11-17 09:09:48.210294] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.361 [2024-11-17 09:09:48.210334] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.361 [2024-11-17 09:09:48.225804] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.361 [2024-11-17 09:09:48.225844] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.361 [2024-11-17 09:09:48.241392] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.361 [2024-11-17 09:09:48.241444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.361 [2024-11-17 09:09:48.255913] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.361 [2024-11-17 09:09:48.255965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.361 [2024-11-17 09:09:48.270714] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.361 [2024-11-17 09:09:48.270754] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.361 [2024-11-17 09:09:48.285615] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.361 [2024-11-17 09:09:48.285667] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.361 [2024-11-17 09:09:48.301276] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.361 [2024-11-17 09:09:48.301316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.361 [2024-11-17 09:09:48.316541] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.361 [2024-11-17 09:09:48.316585] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.361 [2024-11-17 09:09:48.331904] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.361 [2024-11-17 09:09:48.331943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.361 [2024-11-17 09:09:48.345005] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.361 [2024-11-17 09:09:48.345044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.361 [2024-11-17 09:09:48.359969] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.361 [2024-11-17 09:09:48.360009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.619 [2024-11-17 09:09:48.375160] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.619 [2024-11-17 09:09:48.375200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.619 [2024-11-17 09:09:48.390647] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.619 [2024-11-17 09:09:48.390699] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.619 [2024-11-17 09:09:48.406397] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.619 [2024-11-17 09:09:48.406450] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.619 [2024-11-17 09:09:48.420317] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.619 [2024-11-17 09:09:48.420353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.619 [2024-11-17 09:09:48.435617] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.619 [2024-11-17 09:09:48.435671] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.619 [2024-11-17 09:09:48.448905] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.619 [2024-11-17 09:09:48.448945] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.619 [2024-11-17 09:09:48.462552] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.619 [2024-11-17 09:09:48.462587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.619 [2024-11-17 09:09:48.477383] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.619 [2024-11-17 09:09:48.477444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.619 [2024-11-17 09:09:48.492218] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.619 [2024-11-17 09:09:48.492257] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.619 [2024-11-17 09:09:48.507076] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.619 [2024-11-17 09:09:48.507115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.619 [2024-11-17 09:09:48.521779] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.619 [2024-11-17 09:09:48.521820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.619 [2024-11-17 09:09:48.537701] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.619 [2024-11-17 09:09:48.537740] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.619 [2024-11-17 09:09:48.553740] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.619 [2024-11-17 09:09:48.553780] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.619 [2024-11-17 09:09:48.569218] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.619 [2024-11-17 09:09:48.569258] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.619 [2024-11-17 09:09:48.584948] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.619 [2024-11-17 09:09:48.584988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.619 [2024-11-17 09:09:48.600550] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.619 [2024-11-17 09:09:48.600586] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.619 [2024-11-17 09:09:48.616776] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.619 [2024-11-17 09:09:48.616817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.878 [2024-11-17 09:09:48.632290] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.878 [2024-11-17 09:09:48.632330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.878 [2024-11-17 09:09:48.648063] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.878 [2024-11-17 09:09:48.648103] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.878 [2024-11-17 09:09:48.663168] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.878 [2024-11-17 09:09:48.663207] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.878 [2024-11-17 09:09:48.678812] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.878 [2024-11-17 09:09:48.678851] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.878 [2024-11-17 09:09:48.694574] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.878 [2024-11-17 09:09:48.694610] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.878 [2024-11-17 09:09:48.710218] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.878 [2024-11-17 09:09:48.710258] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.878 [2024-11-17 09:09:48.725514] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.878 [2024-11-17 09:09:48.725550] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.878 [2024-11-17 09:09:48.740924] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.878 [2024-11-17 09:09:48.740964] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.878 [2024-11-17 09:09:48.756154] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.878 [2024-11-17 09:09:48.756194] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.878 [2024-11-17 09:09:48.770758] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.878 [2024-11-17 09:09:48.770798] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.878 [2024-11-17 09:09:48.786457] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.878 [2024-11-17 09:09:48.786493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.878 [2024-11-17 09:09:48.801635] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.878 [2024-11-17 09:09:48.801686] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.878 [2024-11-17 09:09:48.816268] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.878 [2024-11-17 09:09:48.816308] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.878 [2024-11-17 09:09:48.831508] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.878 [2024-11-17 09:09:48.831545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.878 [2024-11-17 09:09:48.846556] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.878 [2024-11-17 09:09:48.846592] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.878 [2024-11-17 09:09:48.862022] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.878 [2024-11-17 09:09:48.862061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.878 [2024-11-17 09:09:48.877268] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.878 [2024-11-17 09:09:48.877307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.136 [2024-11-17 09:09:48.892312] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.136 [2024-11-17 09:09:48.892353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.136 8394.50 IOPS, 65.58 MiB/s [2024-11-17T08:09:49.149Z] [2024-11-17 09:09:48.907683] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.136 [2024-11-17 09:09:48.907736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.136 [2024-11-17 09:09:48.923308] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.136 [2024-11-17 09:09:48.923347] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.136 [2024-11-17 09:09:48.938740] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.136 [2024-11-17 09:09:48.938779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.137 [2024-11-17 09:09:48.951504] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.137 [2024-11-17 09:09:48.951539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.137 [2024-11-17 09:09:48.965911] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.137 [2024-11-17 09:09:48.965952] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.137 [2024-11-17 09:09:48.981197] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.137 [2024-11-17 09:09:48.981236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.137 [2024-11-17 09:09:48.996485] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.137 [2024-11-17 09:09:48.996521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.137 [2024-11-17 09:09:49.011505] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:09:44.137 [2024-11-17 09:09:49.011542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.137 [2024-11-17 09:09:49.026723] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.137 [2024-11-17 09:09:49.026763] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.137 [2024-11-17 09:09:49.040525] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.137 [2024-11-17 09:09:49.040560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.137 [2024-11-17 09:09:49.055928] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.137 [2024-11-17 09:09:49.055968] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.137 [2024-11-17 09:09:49.071203] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.137 [2024-11-17 09:09:49.071243] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.137 [2024-11-17 09:09:49.086520] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.137 [2024-11-17 09:09:49.086556] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.137 [2024-11-17 09:09:49.099553] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.137 [2024-11-17 09:09:49.099601] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.137 [2024-11-17 09:09:49.114692] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.137 [2024-11-17 09:09:49.114731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.137 [2024-11-17 09:09:49.130019] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.137 [2024-11-17 09:09:49.130059] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.137 [2024-11-17 09:09:49.142720] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.137 [2024-11-17 09:09:49.142760] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.395 [2024-11-17 09:09:49.157559] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.395 [2024-11-17 09:09:49.157595] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.395 [2024-11-17 09:09:49.173379] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.395 [2024-11-17 09:09:49.173443] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.395 [2024-11-17 09:09:49.188684] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.395 [2024-11-17 09:09:49.188739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.395 [2024-11-17 09:09:49.204240] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.395 [2024-11-17 09:09:49.204281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.395 [2024-11-17 09:09:49.220272] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.395 [2024-11-17 09:09:49.220312] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.395 [2024-11-17 09:09:49.235103] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.395 [2024-11-17 09:09:49.235143] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.395 [2024-11-17 09:09:49.250560] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.395 [2024-11-17 09:09:49.250597] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.395 [2024-11-17 09:09:49.266913] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.395 [2024-11-17 09:09:49.266954] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.395 [2024-11-17 09:09:49.282705] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.395 [2024-11-17 09:09:49.282745] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.395 [2024-11-17 09:09:49.296305] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.395 [2024-11-17 09:09:49.296344] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.395 [2024-11-17 09:09:49.311797] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.395 [2024-11-17 09:09:49.311845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.395 [2024-11-17 09:09:49.326483] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.395 [2024-11-17 09:09:49.326520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.395 [2024-11-17 09:09:49.342092] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.395 [2024-11-17 09:09:49.342131] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.395 [2024-11-17 09:09:49.358075] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.395 [2024-11-17 09:09:49.358114] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.395 [2024-11-17 09:09:49.373565] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.395 [2024-11-17 09:09:49.373602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.395 [2024-11-17 09:09:49.388971] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.395 [2024-11-17 09:09:49.389010] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.395 [2024-11-17 09:09:49.404764] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.395 [2024-11-17 09:09:49.404800] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.656 [2024-11-17 09:09:49.419570] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.656 [2024-11-17 09:09:49.419606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.657 [2024-11-17 09:09:49.434798] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.657 [2024-11-17 09:09:49.434838] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.657 [2024-11-17 09:09:49.450254] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.657 [2024-11-17 09:09:49.450303] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.657 [2024-11-17 09:09:49.463293] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.657 [2024-11-17 09:09:49.463332] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.657 [2024-11-17 09:09:49.478233] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.657 [2024-11-17 09:09:49.478274] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.657 [2024-11-17 09:09:49.493043] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.657 [2024-11-17 09:09:49.493083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.657 [2024-11-17 09:09:49.508891] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.657 [2024-11-17 09:09:49.508931] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.657 [2024-11-17 09:09:49.524576] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.657 [2024-11-17 09:09:49.524613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.657 [2024-11-17 09:09:49.540076] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.657 [2024-11-17 09:09:49.540115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.657 [2024-11-17 09:09:49.555231] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.657 [2024-11-17 09:09:49.555270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.657 [2024-11-17 09:09:49.571151] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.657 [2024-11-17 09:09:49.571191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.657 [2024-11-17 09:09:49.586090] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.657 [2024-11-17 09:09:49.586129] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.657 [2024-11-17 09:09:49.600956] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.657 [2024-11-17 09:09:49.600996] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.657 [2024-11-17 09:09:49.616598] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.657 [2024-11-17 09:09:49.616633] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.657 [2024-11-17 09:09:49.632017] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.657 [2024-11-17 09:09:49.632057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.657 [2024-11-17 09:09:49.646904] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.657 [2024-11-17 09:09:49.646944] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.657 [2024-11-17 09:09:49.661933] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.657 [2024-11-17 09:09:49.661974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.915 [2024-11-17 09:09:49.677000] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.915 [2024-11-17 09:09:49.677041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.915 [2024-11-17 09:09:49.692323] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.915 [2024-11-17 09:09:49.692363] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.915 [2024-11-17 09:09:49.707578] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.915 [2024-11-17 09:09:49.707614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.915 [2024-11-17 09:09:49.722634] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.915 [2024-11-17 09:09:49.722690] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.915 [2024-11-17 09:09:49.737943] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.916 [2024-11-17 09:09:49.737993] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.916 [2024-11-17 09:09:49.753099] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.916 [2024-11-17 09:09:49.753139] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.916 [2024-11-17 09:09:49.768884] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.916 [2024-11-17 09:09:49.768924] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.916 [2024-11-17 09:09:49.785295] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.916 [2024-11-17 09:09:49.785335] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.916 [2024-11-17 09:09:49.800611] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.916 [2024-11-17 09:09:49.800662] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.916 [2024-11-17 09:09:49.816139] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.916 [2024-11-17 09:09:49.816178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.916 [2024-11-17 09:09:49.829242] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.916 [2024-11-17 09:09:49.829283] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.916 [2024-11-17 09:09:49.844445] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.916 [2024-11-17 09:09:49.844481] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.916 [2024-11-17 09:09:49.859974] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.916 [2024-11-17 09:09:49.860014] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.916 [2024-11-17 09:09:49.875574] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.916 [2024-11-17 09:09:49.875611] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.916 [2024-11-17 09:09:49.891608] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.916 [2024-11-17 09:09:49.891645] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.916 8371.60 IOPS, 65.40 MiB/s [2024-11-17T08:09:49.929Z] [2024-11-17 09:09:49.906983] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.916 [2024-11-17 09:09:49.907023] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.916 [2024-11-17 09:09:49.918653] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.916 [2024-11-17 09:09:49.918708] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.916 00:09:44.916 Latency(us) 00:09:44.916 [2024-11-17T08:09:49.929Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:44.916 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:09:44.916 Nvme1n1 : 5.02 8371.77 65.40 0.00 0.00 15262.34 5242.88 23981.32 00:09:44.916 [2024-11-17T08:09:49.929Z] =================================================================================================================== 00:09:44.916 [2024-11-17T08:09:49.929Z] Total : 8371.77 65.40 0.00 0.00 15262.34 5242.88 23981.32 00:09:44.916 [2024-11-17 09:09:49.926395] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.916 [2024-11-17 09:09:49.926444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.174 [2024-11-17 09:09:49.934416] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.174 [2024-11-17 09:09:49.934448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.174 [2024-11-17 09:09:49.942396] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.175 [2024-11-17 09:09:49.942440] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.175 [2024-11-17 09:09:49.950450] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.175 [2024-11-17 09:09:49.950481] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.175 [2024-11-17 09:09:49.958447] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.175 [2024-11-17 09:09:49.958476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.175 [2024-11-17 09:09:49.966490] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.175 [2024-11-17 09:09:49.966521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.175 [2024-11-17 09:09:49.974630] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.175 [2024-11-17 09:09:49.974699] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.175 [2024-11-17 09:09:49.982652] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.175 [2024-11-17 09:09:49.982722] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.175 [2024-11-17 09:09:49.990551] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.175 [2024-11-17 09:09:49.990580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.175 [2024-11-17 09:09:49.998555] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.175 [2024-11-17 09:09:49.998593] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.175 [2024-11-17 09:09:50.006562] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.175 [2024-11-17 09:09:50.006598] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.175 [2024-11-17 09:09:50.014607] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.175 [2024-11-17 09:09:50.014640] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.175 [2024-11-17 09:09:50.022599] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.175 [2024-11-17 09:09:50.022630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.175 [2024-11-17 09:09:50.030649] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.175 [2024-11-17 09:09:50.030693] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.175 [2024-11-17 09:09:50.038677] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.175 [2024-11-17 09:09:50.038708] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.175 [2024-11-17 09:09:50.046684] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.175 [2024-11-17 09:09:50.046730] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.175 [2024-11-17 09:09:50.054739] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.175 [2024-11-17 09:09:50.054773] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.175 [2024-11-17 09:09:50.062752] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.175 [2024-11-17 09:09:50.062788] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.175 [2024-11-17 09:09:50.070834] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.175 [2024-11-17 09:09:50.070900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.175 [2024-11-17 09:09:50.078911] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.175 [2024-11-17 09:09:50.078979] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.175 [2024-11-17 09:09:50.086857] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.175 [2024-11-17 09:09:50.086906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.175 [2024-11-17 09:09:50.094903] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.175 [2024-11-17 09:09:50.094937] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.175 [2024-11-17 09:09:50.102867] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.175 [2024-11-17 09:09:50.102901] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.175 [2024-11-17 09:09:50.110870] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.175 [2024-11-17 09:09:50.110902] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.175 [2024-11-17 09:09:50.118917] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.175 [2024-11-17 09:09:50.118950] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.175 [2024-11-17 09:09:50.126938] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.175 [2024-11-17 09:09:50.126971] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.175 [2024-11-17 09:09:50.134936] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.175 [2024-11-17 09:09:50.134968] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.175 [2024-11-17 09:09:50.142984] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.175 [2024-11-17 09:09:50.143018] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.175 [2024-11-17 09:09:50.150984] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.175 [2024-11-17 09:09:50.151016] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.175 [2024-11-17 09:09:50.159029] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.175 [2024-11-17 09:09:50.159062] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.175 [2024-11-17 09:09:50.167063] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.175 [2024-11-17 09:09:50.167098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.175 [2024-11-17 09:09:50.175083] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.175 [2024-11-17 09:09:50.175117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.175 [2024-11-17 09:09:50.183100] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.175 [2024-11-17 09:09:50.183135] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.433 [2024-11-17 09:09:50.191123] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.433 [2024-11-17 09:09:50.191158] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.433 [2024-11-17 09:09:50.199125] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.433 [2024-11-17 09:09:50.199159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.433 [2024-11-17 09:09:50.207174] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.433 [2024-11-17 09:09:50.207209] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.433 [2024-11-17 09:09:50.215174] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.433 [2024-11-17 09:09:50.215209] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.433 [2024-11-17 09:09:50.223218] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.433 [2024-11-17 09:09:50.223253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.433 [2024-11-17 09:09:50.231245] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.433 [2024-11-17 09:09:50.231280] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.433 [2024-11-17 09:09:50.239386] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.433 [2024-11-17 09:09:50.239444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.433 [2024-11-17 09:09:50.247428] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.433 [2024-11-17 09:09:50.247497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.434 [2024-11-17 09:09:50.255312] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.434 [2024-11-17 09:09:50.255347] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.434 [2024-11-17 09:09:50.263341] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.434 [2024-11-17 09:09:50.263388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.434 [2024-11-17 09:09:50.271384] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.434 [2024-11-17 09:09:50.271433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.434 [2024-11-17 09:09:50.279352] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.434 [2024-11-17 09:09:50.279411] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.434 [2024-11-17 09:09:50.287426] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.434 [2024-11-17 09:09:50.287455] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.434 [2024-11-17 09:09:50.295444] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.434 [2024-11-17 09:09:50.295475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.434 [2024-11-17 09:09:50.303545] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.434 [2024-11-17 09:09:50.303603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.434 [2024-11-17 09:09:50.311597] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.434 [2024-11-17 09:09:50.311658] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.434 [2024-11-17 09:09:50.319657] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.434 [2024-11-17 09:09:50.319722] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.434 [2024-11-17 09:09:50.327538] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.434 [2024-11-17 09:09:50.327583] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.434 [2024-11-17 09:09:50.335562] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.434 [2024-11-17 09:09:50.335592] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.434 [2024-11-17 09:09:50.347573] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.434 [2024-11-17 09:09:50.347602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.434 [2024-11-17 09:09:50.355589] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.434 [2024-11-17 09:09:50.355618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.434 [2024-11-17 09:09:50.363610] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.434 [2024-11-17 09:09:50.363638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.434 [2024-11-17 09:09:50.371639] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.434 [2024-11-17 09:09:50.371685] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.434 [2024-11-17 09:09:50.379677] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.434 [2024-11-17 09:09:50.379705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.434 [2024-11-17 09:09:50.387703] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.434 [2024-11-17 09:09:50.387737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.434 [2024-11-17 09:09:50.395693] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.434 [2024-11-17 09:09:50.395736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.434 [2024-11-17 09:09:50.403762] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.434 [2024-11-17 09:09:50.403796] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.434 [2024-11-17 09:09:50.411749] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.434 [2024-11-17 09:09:50.411778] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.434 [2024-11-17 09:09:50.419809] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.434 [2024-11-17 09:09:50.419843] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.434 [2024-11-17 09:09:50.427867] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.434 [2024-11-17 09:09:50.427901] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.434 [2024-11-17 09:09:50.435835] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.434 [2024-11-17 09:09:50.435868] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.434 [2024-11-17 09:09:50.443880] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.434 [2024-11-17 09:09:50.443913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.693 [2024-11-17 09:09:50.451894] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.693 [2024-11-17 09:09:50.451927] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.693 [2024-11-17 09:09:50.459918] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.693 [2024-11-17 09:09:50.459951] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.693 [2024-11-17 09:09:50.467947] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.693 [2024-11-17 09:09:50.467980] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.693 [2024-11-17 09:09:50.475943] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.693 [2024-11-17 09:09:50.475975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.693 [2024-11-17 09:09:50.484085] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.693 [2024-11-17 09:09:50.484145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.693 [2024-11-17 09:09:50.492099] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.693 [2024-11-17 09:09:50.492156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.693 [2024-11-17 09:09:50.500020] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.693 [2024-11-17 09:09:50.500052] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.693 [2024-11-17 09:09:50.508066] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.693 [2024-11-17 09:09:50.508100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.693 [2024-11-17 09:09:50.516084] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.693 [2024-11-17 09:09:50.516127] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.693 [2024-11-17 09:09:50.524085] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.693 [2024-11-17 09:09:50.524117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.693 [2024-11-17 09:09:50.532135] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.693 [2024-11-17 09:09:50.532169] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.693 [2024-11-17 09:09:50.540130] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.693 [2024-11-17 09:09:50.540162] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.693 [2024-11-17 09:09:50.548178] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.693 [2024-11-17 09:09:50.548212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.693 [2024-11-17 09:09:50.556202] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.693 [2024-11-17 09:09:50.556244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.693 [2024-11-17 09:09:50.564223] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.693 [2024-11-17 09:09:50.564256] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.693 [2024-11-17 09:09:50.572242] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.693 [2024-11-17 09:09:50.572275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.693 [2024-11-17 09:09:50.580268] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.693 [2024-11-17 09:09:50.580303] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.693 [2024-11-17 09:09:50.588269] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.693 [2024-11-17 09:09:50.588302] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.693 [2024-11-17 09:09:50.596322] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.693 [2024-11-17 09:09:50.596356] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.693 [2024-11-17 09:09:50.604381] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.693 [2024-11-17 09:09:50.604434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.693 [2024-11-17 09:09:50.612465] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.693 [2024-11-17 09:09:50.612508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.693 [2024-11-17 09:09:50.620412] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.693 [2024-11-17 09:09:50.620440] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.693 [2024-11-17 09:09:50.628401] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.693 [2024-11-17 09:09:50.628446] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.693 [2024-11-17 09:09:50.636451] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.693 [2024-11-17 09:09:50.636479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.693 [2024-11-17 09:09:50.644476] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.693 [2024-11-17 09:09:50.644505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.693 [2024-11-17 09:09:50.652470] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.693 [2024-11-17 09:09:50.652498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.693 [2024-11-17 09:09:50.660536] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.693 [2024-11-17 09:09:50.660567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.693 [2024-11-17 09:09:50.668511] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.693 [2024-11-17 09:09:50.668539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.693 [2024-11-17 09:09:50.676543] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.693 [2024-11-17 09:09:50.676571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.693 [2024-11-17 09:09:50.684555] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.693 [2024-11-17 09:09:50.684583] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.693 [2024-11-17 09:09:50.692565] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.693 [2024-11-17 09:09:50.692595] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.693 [2024-11-17 09:09:50.700713] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.693 [2024-11-17 09:09:50.700775] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.952 [2024-11-17 09:09:50.708616] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.952 [2024-11-17 09:09:50.708669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.952 [2024-11-17 09:09:50.716620] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.952 [2024-11-17 09:09:50.716665] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.952 [2024-11-17 09:09:50.724683] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.952 [2024-11-17 09:09:50.724718] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.952 [2024-11-17 09:09:50.732678] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.952 [2024-11-17 09:09:50.732721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.952 [2024-11-17 09:09:50.740737] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.952 [2024-11-17 09:09:50.740771] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.952 [2024-11-17 09:09:50.748732] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.952 [2024-11-17 09:09:50.748762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.952 [2024-11-17 09:09:50.756793] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.952 [2024-11-17 09:09:50.756828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.952 [2024-11-17 09:09:50.764798] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.952 [2024-11-17 09:09:50.764831] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.952 [2024-11-17 09:09:50.772812] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.952 [2024-11-17 09:09:50.772841] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.952 [2024-11-17 09:09:50.780850] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.952 [2024-11-17 09:09:50.780883] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.952 [2024-11-17 09:09:50.788891] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.952 [2024-11-17 09:09:50.788925] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.952 [2024-11-17 09:09:50.796897] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.952 [2024-11-17 09:09:50.796935] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.952 [2024-11-17 09:09:50.804936] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.952 [2024-11-17 09:09:50.804969] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.952 [2024-11-17 09:09:50.812958] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.952 [2024-11-17 09:09:50.812992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.952 [2024-11-17 09:09:50.820958] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.952 [2024-11-17 09:09:50.820991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.952 [2024-11-17 09:09:50.829051] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.952 [2024-11-17 09:09:50.829085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.952 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2876067) - No such process 00:09:45.952 09:09:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2876067 00:09:45.952 09:09:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:45.952 09:09:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.952 09:09:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:45.952 09:09:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.952 09:09:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:45.952 09:09:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.952 09:09:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:45.952 delay0 00:09:45.952 09:09:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.952 09:09:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:45.952 09:09:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.952 09:09:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:45.952 09:09:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.952 09:09:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:09:46.211 [2024-11-17 09:09:51.007542] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:52.768 Initializing NVMe Controllers 00:09:52.768 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:52.768 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:52.768 Initialization complete. Launching workers. 
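[Editorial aside, not part of the captured console output] The zcopy abort stage traced just above reduces to a short RPC sequence: drop the namespace the add-loop was colliding with, stand up a deliberately slow delay bdev, expose it again as NSID 1, then drive queued random I/O at it with the abort example so the aborts race the in-flight zero-copy requests. A minimal manual sketch of that sequence, assuming a built SPDK tree and the standalone scripts/rpc.py client in place of the test helper rpc_cmd (all NQNs, bdev names, and abort flags are taken from the trace above):

  # remove the namespace that the repeated add_ns calls were failing against
  scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  # create an artificially slow delay bdev on top of malloc0, using the
  # latency arguments exactly as traced above
  scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000
  # re-attach it as NSID 1 of the same subsystem
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  # run the abort example: 64-deep random r/w for 5 seconds over TCP,
  # submitting aborts against the slow namespace
  build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

Because delay0 holds requests for a long time, most submitted I/O is still outstanding when the aborts arrive, which is what produces the mix of success/unsuccessful abort counts reported in the summary that follows.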
00:09:52.768 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 85 00:09:52.768 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 372, failed to submit 33 00:09:52.768 success 187, unsuccessful 185, failed 0 00:09:52.768 09:09:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:52.768 09:09:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:52.768 09:09:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:52.768 09:09:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:09:52.768 09:09:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:52.768 09:09:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:09:52.768 09:09:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:52.768 09:09:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:52.768 rmmod nvme_tcp 00:09:52.768 rmmod nvme_fabrics 00:09:52.768 rmmod nvme_keyring 00:09:52.768 09:09:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:52.768 09:09:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:09:52.768 09:09:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:09:52.768 09:09:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 2874584 ']' 00:09:52.768 09:09:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 2874584 00:09:52.768 09:09:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 2874584 ']' 00:09:52.768 09:09:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 2874584 00:09:52.768 09:09:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:09:52.768 09:09:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:52.768 09:09:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2874584 00:09:52.768 09:09:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:52.768 09:09:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:52.768 09:09:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2874584' 00:09:52.768 killing process with pid 2874584 00:09:52.768 09:09:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 2874584 00:09:52.768 09:09:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 2874584 00:09:53.704 09:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:53.704 09:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:53.704 09:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:53.704 09:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:09:53.704 09:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:09:53.704 09:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:53.704 09:09:58 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:09:53.704 09:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:53.704 09:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:53.704 09:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:53.704 09:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:53.704 09:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:55.609 09:10:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:55.609 00:09:55.609 real 0m31.902s 00:09:55.609 user 0m47.794s 00:09:55.609 sys 0m8.113s 00:09:55.610 09:10:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:55.610 09:10:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:55.610 ************************************ 00:09:55.610 END TEST nvmf_zcopy 00:09:55.610 ************************************ 00:09:55.610 09:10:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:55.610 09:10:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:55.610 09:10:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:55.610 09:10:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:55.610 ************************************ 00:09:55.610 START TEST nvmf_nmic 00:09:55.610 ************************************ 00:09:55.610 09:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:55.870 * Looking for test storage... 
00:09:55.870 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:55.870 09:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:55.870 09:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:09:55.870 09:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:55.870 09:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:55.870 09:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:55.870 09:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:55.870 09:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:55.870 09:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:55.870 09:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:55.870 09:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:55.870 09:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:55.870 09:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:55.870 09:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:09:55.870 09:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:55.870 09:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:55.870 09:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:55.870 09:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:55.870 09:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:55.870 09:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:55.870 09:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:55.870 09:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:55.870 09:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:55.870 09:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:55.870 09:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:55.870 09:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:55.870 09:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:55.870 09:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:55.870 09:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:55.870 09:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:55.870 09:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:55.870 09:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:55.870 09:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:55.870 09:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:55.870 09:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:55.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.870 --rc genhtml_branch_coverage=1 00:09:55.870 --rc genhtml_function_coverage=1 00:09:55.870 --rc genhtml_legend=1 00:09:55.870 --rc geninfo_all_blocks=1 00:09:55.870 --rc geninfo_unexecuted_blocks=1 00:09:55.870 00:09:55.870 ' 00:09:55.870 09:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:55.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.870 --rc genhtml_branch_coverage=1 00:09:55.870 --rc genhtml_function_coverage=1 00:09:55.870 --rc genhtml_legend=1 00:09:55.870 --rc geninfo_all_blocks=1 00:09:55.870 --rc geninfo_unexecuted_blocks=1 00:09:55.870 00:09:55.870 ' 00:09:55.870 09:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:55.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.870 --rc genhtml_branch_coverage=1 00:09:55.870 --rc genhtml_function_coverage=1 00:09:55.870 --rc genhtml_legend=1 00:09:55.870 --rc geninfo_all_blocks=1 00:09:55.870 --rc geninfo_unexecuted_blocks=1 00:09:55.870 00:09:55.870 ' 00:09:55.870 09:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:55.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.870 --rc genhtml_branch_coverage=1 00:09:55.870 --rc genhtml_function_coverage=1 00:09:55.870 --rc genhtml_legend=1 00:09:55.870 --rc geninfo_all_blocks=1 00:09:55.870 --rc geninfo_unexecuted_blocks=1 00:09:55.870 00:09:55.870 ' 00:09:55.870 09:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:55.870 09:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:55.870 09:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
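[Editorial aside, not part of the captured console output] The dense xtrace block above (scripts/common.sh lt/cmp_versions, triggered by the lcov version probe) is just a dotted-version comparison: split both version strings on '.', '-' and ':' and compare the fields numerically from the left, padding the shorter version with zeros. A stand-alone sketch of that idea follows; it is illustrative only (the name version_lt is made up, and unlike the real helper it assumes purely numeric fields):

  version_lt() {    # usage: version_lt 1.15 2  -> returns 0 (true) if $1 < $2
      local -a v1 v2
      local i a b
      IFS='.-:' read -ra v1 <<< "$1"
      IFS='.-:' read -ra v2 <<< "$2"
      for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
          a=${v1[i]:-0} b=${v2[i]:-0}          # missing fields compare as 0
          if ((a < b)); then return 0; fi      # first differing field decides
          if ((a > b)); then return 1; fi
      done
      return 1                                 # equal is not "less than"
  }

  version_lt 1.15 2 && echo "lcov 1.15 predates 2"   # same branch the trace takes above

That comparison succeeding is why the trace goes on to export the legacy --rc lcov_branch_coverage/lcov_function_coverage options seen in the LCOV_OPTS lines above.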
00:09:55.870 09:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:55.870 09:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:55.870 09:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:55.870 09:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:55.870 09:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:55.870 09:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:55.870 09:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:55.870 09:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:55.870 09:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:55.870 09:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:55.870 09:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:55.870 09:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:55.870 09:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:55.870 09:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:55.871 09:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:55.871 09:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:55.871 09:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:55.871 09:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:55.871 09:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:55.871 09:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:55.871 09:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.871 09:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.871 09:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.871 09:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:55.871 09:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.871 09:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:09:55.871 09:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:55.871 09:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:55.871 09:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:55.871 09:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:55.871 09:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:55.871 09:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:55.871 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:55.871 09:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:55.871 09:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:55.871 09:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:55.871 09:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:55.871 09:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:55.871 09:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:55.871 
09:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:55.871 09:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:55.871 09:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:55.871 09:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:55.871 09:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:55.871 09:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:55.871 09:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:55.871 09:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:55.871 09:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:55.871 09:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:55.871 09:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:09:55.871 09:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:58.403 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:58.403 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:09:58.403 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:58.403 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:58.403 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:58.403 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:58.403 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:58.403 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:09:58.404 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:58.404 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:09:58.404 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:09:58.404 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:09:58.404 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:09:58.404 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:09:58.404 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:09:58.404 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:58.404 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:58.404 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:58.404 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:58.404 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:58.404 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:58.404 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:58.404 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:58.404 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:58.404 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:58.404 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:58.404 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:58.404 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:58.404 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:58.404 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:58.404 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:58.404 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:58.404 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:58.404 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:58.404 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:58.404 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:58.404 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:58.404 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:58.404 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:58.404 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:58.404 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:58.404 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:58.404 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:58.404 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:58.404 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:58.404 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:58.404 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:58.404 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:58.404 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:58.404 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:58.404 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:58.404 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:58.404 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:58.404 09:10:02 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:58.404 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:58.404 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:58.404 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:58.404 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:58.404 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:58.404 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:58.404 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:58.404 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:58.404 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:58.404 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:58.404 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:58.404 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:58.404 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:58.404 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:58.404 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:58.404 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:58.404 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:58.404 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:58.404 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:58.404 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:09:58.404 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:58.404 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:58.404 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:58.404 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:58.404 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:58.404 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:58.404 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:58.404 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:58.404 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:58.404 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:58.404 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:58.404 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:58.404 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:58.404 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:58.404 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:58.404 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:58.404 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:58.404 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:58.404 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:58.404 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:58.404 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:58.404 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:58.404 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:58.404 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:58.404 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:58.404 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:58.404 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:58.404 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.290 ms 00:09:58.404 00:09:58.404 --- 10.0.0.2 ping statistics --- 00:09:58.404 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:58.404 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:09:58.404 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:58.404 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:58.404 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:09:58.404 00:09:58.404 --- 10.0.0.1 ping statistics --- 00:09:58.404 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:58.404 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:09:58.404 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:58.404 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:09:58.404 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:58.404 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:58.404 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:58.404 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:58.404 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:58.404 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:58.404 09:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:58.404 09:10:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:58.404 09:10:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:58.405 09:10:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:58.405 09:10:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:58.405 09:10:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=2879728 00:09:58.405 09:10:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:58.405 09:10:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 2879728 00:09:58.405 09:10:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 2879728 ']' 00:09:58.405 09:10:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:58.405 09:10:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:58.405 09:10:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:58.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:58.405 09:10:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:58.405 09:10:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:58.405 [2024-11-17 09:10:03.102915] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
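The nvmf_tcp_init trace above is what wires up the test network: one E810 port (cvl_0_0) is moved into a private namespace for the SPDK target while its peer port (cvl_0_1) stays in the root namespace as the initiator side, so target and host exchange NVMe/TCP over a real NIC-to-NIC link. A minimal standalone sketch of that setup, assuming the same interface names, addresses and namespace as in the log (iptables comment abbreviated here):

ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address, inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF   # open the NVMe/TCP port; tagged so teardown can strip it
ping -c 1 10.0.0.2                                                  # host -> target sanity check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> host sanity check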
00:09:58.405 [2024-11-17 09:10:03.103074] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:58.405 [2024-11-17 09:10:03.249612] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:58.405 [2024-11-17 09:10:03.390677] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:58.405 [2024-11-17 09:10:03.390747] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:58.405 [2024-11-17 09:10:03.390773] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:58.405 [2024-11-17 09:10:03.390797] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:58.405 [2024-11-17 09:10:03.390816] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:58.405 [2024-11-17 09:10:03.393567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:58.405 [2024-11-17 09:10:03.393636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:58.405 [2024-11-17 09:10:03.393670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:58.405 [2024-11-17 09:10:03.393691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:59.338 09:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:59.338 09:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:09:59.338 09:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:59.338 09:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:59.338 09:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:59.338 09:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:59.338 09:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:59.338 09:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.338 09:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:59.338 [2024-11-17 09:10:04.110716] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:59.338 09:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.338 09:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:59.338 09:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.338 09:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:59.338 Malloc0 00:09:59.338 09:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.338 09:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:59.338 09:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.338 09:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:09:59.338 09:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.338 09:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:59.338 09:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.338 09:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:59.338 09:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.338 09:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:59.338 09:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.338 09:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:59.338 [2024-11-17 09:10:04.230572] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:59.338 09:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.338 09:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:59.338 test case1: single bdev can't be used in multiple subsystems 00:09:59.338 09:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:59.338 09:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.338 09:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:59.338 09:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.338 09:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:59.338 09:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.338 09:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:59.338 09:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.338 09:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:59.338 09:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:59.338 09:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.338 09:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:59.338 [2024-11-17 09:10:04.254280] bdev.c:8198:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:59.338 [2024-11-17 09:10:04.254319] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:59.338 [2024-11-17 09:10:04.254362] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.338 request: 00:09:59.338 { 00:09:59.338 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:59.338 "namespace": { 00:09:59.338 "bdev_name": "Malloc0", 00:09:59.338 "no_auto_visible": false 
00:09:59.338 }, 00:09:59.338 "method": "nvmf_subsystem_add_ns", 00:09:59.338 "req_id": 1 00:09:59.338 } 00:09:59.338 Got JSON-RPC error response 00:09:59.338 response: 00:09:59.338 { 00:09:59.338 "code": -32602, 00:09:59.338 "message": "Invalid parameters" 00:09:59.338 } 00:09:59.338 09:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:59.338 09:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:59.338 09:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:59.338 09:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:59.338 Adding namespace failed - expected result. 00:09:59.338 09:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:59.338 test case2: host connect to nvmf target in multiple paths 00:09:59.338 09:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:59.338 09:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.338 09:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:59.338 [2024-11-17 09:10:04.262449] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:59.338 09:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.338 09:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:59.904 09:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:00.838 09:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:00.838 09:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:10:00.838 09:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:00.838 09:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:00.838 09:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:10:02.736 09:10:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:02.736 09:10:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:02.736 09:10:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:02.736 09:10:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:02.737 09:10:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:02.737 09:10:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:10:02.737 09:10:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
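The nmic test traced above first exports Malloc0 through nqn.2016-06.io.spdk:cnode1, then deliberately tries to attach the same bdev to a second subsystem; the JSON-RPC -32602 error is the expected outcome, because the bdev is already claimed exclusive_write by the first subsystem. Test case 2 then adds a second listener on port 4421 and connects the host over both paths. A condensed sketch of the RPC/CLI sequence, assuming rpc.py talks to the target's default /var/tmp/spdk.sock and with <hostnqn>/<hostid> standing in for the values generated by nvme gen-hostnqn in this run:

rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0      # expected to fail: Malloc0 already claimed
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 --hostnqn=<hostnqn> --hostid=<hostid>
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 --hostnqn=<hostnqn> --hostid=<hostid>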
target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:02.737 [global] 00:10:02.737 thread=1 00:10:02.737 invalidate=1 00:10:02.737 rw=write 00:10:02.737 time_based=1 00:10:02.737 runtime=1 00:10:02.737 ioengine=libaio 00:10:02.737 direct=1 00:10:02.737 bs=4096 00:10:02.737 iodepth=1 00:10:02.737 norandommap=0 00:10:02.737 numjobs=1 00:10:02.737 00:10:02.737 verify_dump=1 00:10:02.737 verify_backlog=512 00:10:02.737 verify_state_save=0 00:10:02.737 do_verify=1 00:10:02.737 verify=crc32c-intel 00:10:02.737 [job0] 00:10:02.737 filename=/dev/nvme0n1 00:10:02.737 Could not set queue depth (nvme0n1) 00:10:02.737 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:02.737 fio-3.35 00:10:02.737 Starting 1 thread 00:10:04.111 00:10:04.111 job0: (groupid=0, jobs=1): err= 0: pid=2880375: Sun Nov 17 09:10:08 2024 00:10:04.111 read: IOPS=21, BW=87.9KiB/s (90.0kB/s)(88.0KiB/1001msec) 00:10:04.111 slat (nsec): min=5727, max=34695, avg=21008.91, stdev=9199.48 00:10:04.111 clat (usec): min=268, max=42073, avg=39431.60, stdev=8759.85 00:10:04.111 lat (usec): min=275, max=42086, avg=39452.60, stdev=8763.23 00:10:04.111 clat percentiles (usec): 00:10:04.111 | 1.00th=[ 269], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:10:04.111 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:04.111 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:10:04.111 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:04.111 | 99.99th=[42206] 00:10:04.111 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:10:04.111 slat (usec): min=6, max=28976, avg=64.07, stdev=1280.29 00:10:04.111 clat (usec): min=150, max=334, avg=192.68, stdev=19.51 00:10:04.111 lat (usec): min=157, max=29200, avg=256.75, stdev=1281.82 00:10:04.111 clat percentiles (usec): 00:10:04.111 | 1.00th=[ 161], 5.00th=[ 167], 10.00th=[ 176], 20.00th=[ 180], 00:10:04.111 | 30.00th=[ 184], 40.00th=[ 186], 50.00th=[ 188], 60.00th=[ 192], 00:10:04.111 | 70.00th=[ 198], 80.00th=[ 206], 90.00th=[ 217], 95.00th=[ 229], 00:10:04.111 | 99.00th=[ 253], 99.50th=[ 302], 99.90th=[ 334], 99.95th=[ 334], 00:10:04.111 | 99.99th=[ 334] 00:10:04.111 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:10:04.111 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:04.111 lat (usec) : 250=94.76%, 500=1.31% 00:10:04.111 lat (msec) : 50=3.93% 00:10:04.111 cpu : usr=0.20%, sys=0.30%, ctx=537, majf=0, minf=1 00:10:04.111 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:04.111 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:04.111 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:04.111 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:04.111 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:04.111 00:10:04.111 Run status group 0 (all jobs): 00:10:04.111 READ: bw=87.9KiB/s (90.0kB/s), 87.9KiB/s-87.9KiB/s (90.0kB/s-90.0kB/s), io=88.0KiB (90.1kB), run=1001-1001msec 00:10:04.111 WRITE: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec 00:10:04.111 00:10:04.111 Disk stats (read/write): 00:10:04.111 nvme0n1: ios=47/512, merge=0/0, ticks=1732/97, in_queue=1829, util=98.60% 00:10:04.111 09:10:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
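fio-wrapper expanded '-p nvmf -i 4096 -d 1 -t write -r 1 -v' into the libaio job printed above: 4 KiB sequential writes at queue depth 1, a single job, one second time-based, with crc32c-intel data verification against the freshly connected /dev/nvme0n1. Roughly the same workload expressed as a plain fio command line, assuming the same device node and a reasonably recent fio:

fio --name=job0 --filename=/dev/nvme0n1 \
    --ioengine=libaio --direct=1 --thread --invalidate=1 \
    --rw=write --bs=4096 --iodepth=1 --numjobs=1 \
    --time_based --runtime=1 \
    --do_verify=1 --verify=crc32c-intel --verify_backlog=512 --verify_dump=1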
target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:04.380 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:04.380 09:10:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:04.380 09:10:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:10:04.380 09:10:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:04.380 09:10:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:04.380 09:10:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:04.380 09:10:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:04.380 09:10:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:10:04.380 09:10:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:04.380 09:10:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:04.380 09:10:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:04.380 09:10:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:04.380 09:10:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:04.380 09:10:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:04.380 09:10:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:04.380 09:10:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:04.380 rmmod nvme_tcp 00:10:04.380 rmmod nvme_fabrics 00:10:04.380 rmmod nvme_keyring 00:10:04.380 09:10:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:04.380 09:10:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:04.380 09:10:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:04.381 09:10:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 2879728 ']' 00:10:04.381 09:10:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 2879728 00:10:04.381 09:10:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 2879728 ']' 00:10:04.381 09:10:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 2879728 00:10:04.381 09:10:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:10:04.381 09:10:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:04.381 09:10:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2879728 00:10:04.381 09:10:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:04.381 09:10:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:04.381 09:10:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2879728' 00:10:04.381 killing process with pid 2879728 00:10:04.381 09:10:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 2879728 00:10:04.381 09:10:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@978 -- # wait 2879728 00:10:05.756 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:05.756 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:05.756 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:05.756 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:10:05.756 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:10:05.756 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:05.756 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:10:05.756 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:05.756 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:05.756 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:05.756 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:05.756 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:07.665 09:10:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:07.665 00:10:07.665 real 0m11.967s 00:10:07.665 user 0m28.466s 00:10:07.665 sys 0m2.642s 00:10:07.665 09:10:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:07.665 09:10:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:07.665 ************************************ 00:10:07.665 END TEST nvmf_nmic 00:10:07.665 ************************************ 00:10:07.665 09:10:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:07.665 09:10:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:07.665 09:10:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:07.665 09:10:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:07.665 ************************************ 00:10:07.665 START TEST nvmf_fio_target 00:10:07.665 ************************************ 00:10:07.665 09:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:07.665 * Looking for test storage... 
00:10:07.665 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:07.665 09:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:07.665 09:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:10:07.665 09:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:07.924 09:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:07.924 09:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:07.924 09:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:07.924 09:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:07.924 09:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:07.924 09:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:07.924 09:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:07.924 09:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:07.924 09:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:07.924 09:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:07.924 09:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:07.924 09:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:07.924 09:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:07.924 09:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:07.924 09:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:07.924 09:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:07.924 09:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:07.924 09:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:07.924 09:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:07.924 09:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:07.924 09:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:07.924 09:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:07.924 09:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:07.924 09:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:07.924 09:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:07.924 09:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:07.924 09:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:07.924 09:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:07.924 09:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:07.924 09:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:07.924 09:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:07.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.924 --rc genhtml_branch_coverage=1 00:10:07.924 --rc genhtml_function_coverage=1 00:10:07.924 --rc genhtml_legend=1 00:10:07.924 --rc geninfo_all_blocks=1 00:10:07.924 --rc geninfo_unexecuted_blocks=1 00:10:07.924 00:10:07.924 ' 00:10:07.924 09:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:07.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.924 --rc genhtml_branch_coverage=1 00:10:07.924 --rc genhtml_function_coverage=1 00:10:07.924 --rc genhtml_legend=1 00:10:07.924 --rc geninfo_all_blocks=1 00:10:07.924 --rc geninfo_unexecuted_blocks=1 00:10:07.924 00:10:07.924 ' 00:10:07.924 09:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:07.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.924 --rc genhtml_branch_coverage=1 00:10:07.924 --rc genhtml_function_coverage=1 00:10:07.924 --rc genhtml_legend=1 00:10:07.924 --rc geninfo_all_blocks=1 00:10:07.924 --rc geninfo_unexecuted_blocks=1 00:10:07.924 00:10:07.924 ' 00:10:07.924 09:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:07.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.924 --rc genhtml_branch_coverage=1 00:10:07.924 --rc genhtml_function_coverage=1 00:10:07.924 --rc genhtml_legend=1 00:10:07.924 --rc geninfo_all_blocks=1 00:10:07.924 --rc geninfo_unexecuted_blocks=1 00:10:07.924 00:10:07.924 ' 00:10:07.924 09:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:07.924 09:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:10:07.925 09:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:07.925 09:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:07.925 09:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:07.925 09:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:07.925 09:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:07.925 09:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:07.925 09:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:07.925 09:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:07.925 09:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:07.925 09:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:07.925 09:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:07.925 09:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:07.925 09:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:07.925 09:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:07.925 09:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:07.925 09:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:07.925 09:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:07.925 09:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:07.925 09:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:07.925 09:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:07.925 09:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:07.925 09:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.925 09:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.925 09:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.925 09:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:07.925 09:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.925 09:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:07.925 09:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:07.925 09:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:07.925 09:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:07.925 09:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:07.925 09:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:07.925 09:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:07.925 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:07.925 09:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:07.925 09:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:07.925 09:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:07.925 09:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:07.925 09:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:07.925 09:10:12 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:07.925 09:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:07.925 09:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:07.925 09:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:07.925 09:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:07.925 09:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:07.925 09:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:07.925 09:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:07.925 09:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:07.925 09:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:07.925 09:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:07.925 09:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:07.925 09:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:10:07.925 09:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:09.831 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:09.831 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:10:09.831 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:09.831 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:09.831 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:09.831 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:09.831 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:09.831 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:10:09.831 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:09.831 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:10:09.831 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:10:09.831 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:10:09.832 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:10:09.832 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:10:09.832 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:10:09.832 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:09.832 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:09.832 09:10:14 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:09.832 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:09.832 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:09.832 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:09.832 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:09.832 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:09.832 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:09.832 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:09.832 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:09.832 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:09.832 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:09.832 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:09.832 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:09.832 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:09.832 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:09.832 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:09.832 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:09.832 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:09.832 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:09.832 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:09.832 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:09.832 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:09.832 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:09.832 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:09.832 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:09.832 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:09.832 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:09.832 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:09.832 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:09.832 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:09.832 09:10:14 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:09.832 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:09.832 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:09.832 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:09.832 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:09.832 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:09.832 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:09.832 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:09.832 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:09.832 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:09.832 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:09.832 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:09.832 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:09.832 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:09.832 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:09.832 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:09.832 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:09.832 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:09.832 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:09.832 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:09.832 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:09.832 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:09.832 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:09.832 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:09.832 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:09.832 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:09.832 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:10:09.832 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:09.832 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:09.832 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:09.832 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:09.832 09:10:14 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:09.832 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:09.832 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:09.832 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:09.832 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:09.832 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:09.832 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:09.832 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:09.832 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:09.832 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:09.832 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:09.832 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:09.832 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:09.832 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:10.091 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:10.091 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:10.091 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:10.091 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:10.091 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:10.091 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:10.091 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:10.091 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:10.091 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:10.091 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.306 ms 00:10:10.091 00:10:10.091 --- 10.0.0.2 ping statistics --- 00:10:10.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:10.091 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:10:10.091 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:10.091 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:10.091 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:10:10.091 00:10:10.091 --- 10.0.0.1 ping statistics --- 00:10:10.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:10.091 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:10:10.091 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:10.091 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:10:10.091 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:10.091 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:10.091 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:10.091 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:10.091 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:10.091 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:10.091 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:10.091 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:10.091 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:10.091 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:10.091 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:10.091 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=2882597 00:10:10.091 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:10.091 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 2882597 00:10:10.091 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 2882597 ']' 00:10:10.091 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:10.091 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:10.092 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:10.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:10.092 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:10.092 09:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:10.092 [2024-11-17 09:10:15.060221] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
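As in the nmic run, nvmfappstart launches nvmf_tgt inside the target namespace and waits for its RPC socket to answer before any configuration is pushed. A rough equivalent of that start-and-wait step, assuming the workspace paths from this log, the default /var/tmp/spdk.sock RPC socket, and an arbitrary retry limit:

ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
for i in $(seq 1 100); do   # waitforlisten equivalent: poll until the app serves RPCs
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.5
done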
00:10:10.092 [2024-11-17 09:10:15.060394] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:10.350 [2024-11-17 09:10:15.208231] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:10.350 [2024-11-17 09:10:15.349400] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:10.350 [2024-11-17 09:10:15.349492] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:10.350 [2024-11-17 09:10:15.349519] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:10.350 [2024-11-17 09:10:15.349544] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:10.350 [2024-11-17 09:10:15.349563] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:10.350 [2024-11-17 09:10:15.352449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:10.350 [2024-11-17 09:10:15.352510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:10.350 [2024-11-17 09:10:15.352564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:10.350 [2024-11-17 09:10:15.352570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:11.285 09:10:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:11.285 09:10:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:10:11.285 09:10:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:11.285 09:10:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:11.285 09:10:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:11.285 09:10:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:11.285 09:10:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:11.285 [2024-11-17 09:10:16.278050] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:11.543 09:10:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:11.800 09:10:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:11.801 09:10:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:12.059 09:10:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:12.059 09:10:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:12.624 09:10:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:12.624 09:10:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:12.926 09:10:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:12.926 09:10:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:13.206 09:10:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:13.463 09:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:13.463 09:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:13.721 09:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:13.721 09:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:13.978 09:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:13.978 09:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:14.236 09:10:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:14.493 09:10:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:14.493 09:10:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:14.750 09:10:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:14.750 09:10:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:15.008 09:10:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:15.266 [2024-11-17 09:10:20.266386] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:15.524 09:10:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:15.782 09:10:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:16.040 09:10:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:16.605 09:10:21 
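Collapsed from the xtrace output above, target/fio.sh provisions the target with a short series of RPCs and then attaches the host. A minimal sketch, with rpc.py standing for the full scripts/rpc.py path in the workspace, the bdev names (Malloc0..Malloc6) being the values the create calls returned in this run, and the host NQN/ID taken from the nvme connect line in the trace:

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512      # Malloc0, Malloc1: exported as plain namespaces
  rpc.py bdev_malloc_create 64 512
  rpc.py bdev_malloc_create 64 512      # Malloc2, Malloc3: members of raid0
  rpc.py bdev_malloc_create 64 512
  rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
  rpc.py bdev_malloc_create 64 512      # Malloc4, Malloc5, Malloc6: members of concat0
  rpc.py bdev_malloc_create 64 512
  rpc.py bdev_malloc_create 64 512
  rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
      --hostid=5b23e107-7094-e311-b1cb-001e67a97d55
  # waitforserial then polls 'lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME'
  # until all four namespaces (nvme0n1..nvme0n4) are visible on the host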
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:16.605 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:10:16.605 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:16.605 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:10:16.605 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:10:16.605 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:10:18.514 09:10:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:18.514 09:10:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:18.514 09:10:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:18.514 09:10:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:10:18.514 09:10:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:18.514 09:10:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:10:18.514 09:10:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:18.514 [global] 00:10:18.514 thread=1 00:10:18.514 invalidate=1 00:10:18.514 rw=write 00:10:18.514 time_based=1 00:10:18.514 runtime=1 00:10:18.514 ioengine=libaio 00:10:18.514 direct=1 00:10:18.514 bs=4096 00:10:18.514 iodepth=1 00:10:18.514 norandommap=0 00:10:18.514 numjobs=1 00:10:18.514 00:10:18.514 verify_dump=1 00:10:18.514 verify_backlog=512 00:10:18.514 verify_state_save=0 00:10:18.514 do_verify=1 00:10:18.514 verify=crc32c-intel 00:10:18.514 [job0] 00:10:18.514 filename=/dev/nvme0n1 00:10:18.514 [job1] 00:10:18.514 filename=/dev/nvme0n2 00:10:18.514 [job2] 00:10:18.514 filename=/dev/nvme0n3 00:10:18.514 [job3] 00:10:18.514 filename=/dev/nvme0n4 00:10:18.514 Could not set queue depth (nvme0n1) 00:10:18.514 Could not set queue depth (nvme0n2) 00:10:18.514 Could not set queue depth (nvme0n3) 00:10:18.514 Could not set queue depth (nvme0n4) 00:10:18.777 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:18.777 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:18.777 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:18.777 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:18.777 fio-3.35 00:10:18.777 Starting 4 threads 00:10:20.150 00:10:20.150 job0: (groupid=0, jobs=1): err= 0: pid=2883813: Sun Nov 17 09:10:24 2024 00:10:20.150 read: IOPS=1524, BW=6099KiB/s (6246kB/s)(6148KiB/1008msec) 00:10:20.150 slat (nsec): min=5965, max=71507, avg=17292.50, stdev=8629.90 00:10:20.150 clat (usec): min=215, max=40906, avg=317.37, stdev=1108.55 00:10:20.150 lat (usec): min=226, max=40914, avg=334.66, stdev=1108.50 00:10:20.150 clat percentiles (usec): 00:10:20.150 | 1.00th=[ 221], 5.00th=[ 229], 10.00th=[ 233], 20.00th=[ 241], 
00:10:20.150 | 30.00th=[ 247], 40.00th=[ 253], 50.00th=[ 260], 60.00th=[ 277], 00:10:20.150 | 70.00th=[ 289], 80.00th=[ 318], 90.00th=[ 371], 95.00th=[ 412], 00:10:20.150 | 99.00th=[ 453], 99.50th=[ 461], 99.90th=[15664], 99.95th=[41157], 00:10:20.150 | 99.99th=[41157] 00:10:20.150 write: IOPS=2031, BW=8127KiB/s (8322kB/s)(8192KiB/1008msec); 0 zone resets 00:10:20.150 slat (nsec): min=7092, max=72332, avg=15831.77, stdev=6666.42 00:10:20.150 clat (usec): min=167, max=521, avg=217.05, stdev=41.59 00:10:20.150 lat (usec): min=183, max=531, avg=232.89, stdev=43.32 00:10:20.150 clat percentiles (usec): 00:10:20.150 | 1.00th=[ 169], 5.00th=[ 174], 10.00th=[ 178], 20.00th=[ 186], 00:10:20.150 | 30.00th=[ 196], 40.00th=[ 202], 50.00th=[ 206], 60.00th=[ 212], 00:10:20.150 | 70.00th=[ 221], 80.00th=[ 241], 90.00th=[ 281], 95.00th=[ 302], 00:10:20.150 | 99.00th=[ 363], 99.50th=[ 383], 99.90th=[ 424], 99.95th=[ 482], 00:10:20.150 | 99.99th=[ 523] 00:10:20.150 bw ( KiB/s): min= 8192, max= 8192, per=51.15%, avg=8192.00, stdev= 0.00, samples=2 00:10:20.150 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:10:20.150 lat (usec) : 250=63.10%, 500=36.82%, 750=0.03% 00:10:20.150 lat (msec) : 20=0.03%, 50=0.03% 00:10:20.150 cpu : usr=3.48%, sys=5.86%, ctx=3585, majf=0, minf=1 00:10:20.150 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:20.150 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:20.150 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:20.150 issued rwts: total=1537,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:20.150 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:20.150 job1: (groupid=0, jobs=1): err= 0: pid=2883814: Sun Nov 17 09:10:24 2024 00:10:20.150 read: IOPS=20, BW=83.4KiB/s (85.4kB/s)(84.0KiB/1007msec) 00:10:20.150 slat (nsec): min=14964, max=36837, avg=27943.71, stdev=8486.08 00:10:20.150 clat (usec): min=40856, max=41240, avg=40977.74, stdev=91.52 00:10:20.150 lat (usec): min=40875, max=41266, avg=41005.69, stdev=89.38 00:10:20.150 clat percentiles (usec): 00:10:20.150 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:10:20.150 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:20.150 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:20.150 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:20.150 | 99.99th=[41157] 00:10:20.150 write: IOPS=508, BW=2034KiB/s (2083kB/s)(2048KiB/1007msec); 0 zone resets 00:10:20.150 slat (nsec): min=9464, max=56505, avg=21184.46, stdev=7370.99 00:10:20.150 clat (usec): min=181, max=571, avg=258.53, stdev=46.77 00:10:20.150 lat (usec): min=193, max=588, avg=279.72, stdev=47.13 00:10:20.150 clat percentiles (usec): 00:10:20.150 | 1.00th=[ 190], 5.00th=[ 206], 10.00th=[ 217], 20.00th=[ 229], 00:10:20.150 | 30.00th=[ 241], 40.00th=[ 247], 50.00th=[ 253], 60.00th=[ 258], 00:10:20.150 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 302], 95.00th=[ 326], 00:10:20.150 | 99.00th=[ 465], 99.50th=[ 529], 99.90th=[ 570], 99.95th=[ 570], 00:10:20.150 | 99.99th=[ 570] 00:10:20.150 bw ( KiB/s): min= 4096, max= 4096, per=25.58%, avg=4096.00, stdev= 0.00, samples=1 00:10:20.150 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:20.151 lat (usec) : 250=43.90%, 500=51.59%, 750=0.56% 00:10:20.151 lat (msec) : 50=3.94% 00:10:20.151 cpu : usr=0.70%, sys=1.39%, ctx=533, majf=0, minf=2 00:10:20.151 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 
16=0.0%, 32=0.0%, >=64=0.0% 00:10:20.151 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:20.151 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:20.151 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:20.151 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:20.151 job2: (groupid=0, jobs=1): err= 0: pid=2883815: Sun Nov 17 09:10:24 2024 00:10:20.151 read: IOPS=885, BW=3543KiB/s (3628kB/s)(3624KiB/1023msec) 00:10:20.151 slat (nsec): min=6949, max=58453, avg=17247.14, stdev=7510.02 00:10:20.151 clat (usec): min=229, max=42111, avg=745.12, stdev=4302.89 00:10:20.151 lat (usec): min=238, max=42124, avg=762.37, stdev=4304.26 00:10:20.151 clat percentiles (usec): 00:10:20.151 | 1.00th=[ 237], 5.00th=[ 243], 10.00th=[ 247], 20.00th=[ 253], 00:10:20.151 | 30.00th=[ 258], 40.00th=[ 260], 50.00th=[ 265], 60.00th=[ 281], 00:10:20.151 | 70.00th=[ 289], 80.00th=[ 297], 90.00th=[ 322], 95.00th=[ 355], 00:10:20.151 | 99.00th=[40633], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:10:20.151 | 99.99th=[42206] 00:10:20.151 write: IOPS=1000, BW=4004KiB/s (4100kB/s)(4096KiB/1023msec); 0 zone resets 00:10:20.151 slat (usec): min=7, max=40492, avg=76.18, stdev=1394.12 00:10:20.151 clat (usec): min=176, max=449, avg=239.14, stdev=58.67 00:10:20.151 lat (usec): min=186, max=40942, avg=315.32, stdev=1404.01 00:10:20.151 clat percentiles (usec): 00:10:20.151 | 1.00th=[ 182], 5.00th=[ 188], 10.00th=[ 192], 20.00th=[ 196], 00:10:20.151 | 30.00th=[ 200], 40.00th=[ 206], 50.00th=[ 210], 60.00th=[ 221], 00:10:20.151 | 70.00th=[ 247], 80.00th=[ 297], 90.00th=[ 343], 95.00th=[ 363], 00:10:20.151 | 99.00th=[ 396], 99.50th=[ 404], 99.90th=[ 433], 99.95th=[ 449], 00:10:20.151 | 99.99th=[ 449] 00:10:20.151 bw ( KiB/s): min= 4096, max= 4096, per=25.58%, avg=4096.00, stdev= 0.00, samples=2 00:10:20.151 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:10:20.151 lat (usec) : 250=45.08%, 500=54.25%, 750=0.10% 00:10:20.151 lat (msec) : 20=0.05%, 50=0.52% 00:10:20.151 cpu : usr=1.27%, sys=3.91%, ctx=1933, majf=0, minf=1 00:10:20.151 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:20.151 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:20.151 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:20.151 issued rwts: total=906,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:20.151 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:20.151 job3: (groupid=0, jobs=1): err= 0: pid=2883816: Sun Nov 17 09:10:24 2024 00:10:20.151 read: IOPS=29, BW=119KiB/s (122kB/s)(120KiB/1011msec) 00:10:20.151 slat (nsec): min=8770, max=35543, avg=23432.57, stdev=9637.11 00:10:20.151 clat (usec): min=362, max=42032, avg=28269.52, stdev=18776.89 00:10:20.151 lat (usec): min=385, max=42049, avg=28292.95, stdev=18782.47 00:10:20.151 clat percentiles (usec): 00:10:20.151 | 1.00th=[ 363], 5.00th=[ 388], 10.00th=[ 388], 20.00th=[ 502], 00:10:20.151 | 30.00th=[ 603], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:20.151 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:10:20.151 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:20.151 | 99.99th=[42206] 00:10:20.151 write: IOPS=506, BW=2026KiB/s (2074kB/s)(2048KiB/1011msec); 0 zone resets 00:10:20.151 slat (usec): min=8, max=18751, avg=56.34, stdev=827.89 00:10:20.151 clat (usec): min=199, max=433, avg=254.70, stdev=37.63 00:10:20.151 lat (usec): 
min=210, max=19132, avg=311.04, stdev=834.34 00:10:20.151 clat percentiles (usec): 00:10:20.151 | 1.00th=[ 204], 5.00th=[ 212], 10.00th=[ 219], 20.00th=[ 225], 00:10:20.151 | 30.00th=[ 231], 40.00th=[ 237], 50.00th=[ 245], 60.00th=[ 251], 00:10:20.151 | 70.00th=[ 269], 80.00th=[ 285], 90.00th=[ 302], 95.00th=[ 334], 00:10:20.151 | 99.00th=[ 379], 99.50th=[ 383], 99.90th=[ 433], 99.95th=[ 433], 00:10:20.151 | 99.99th=[ 433] 00:10:20.151 bw ( KiB/s): min= 4096, max= 4096, per=25.58%, avg=4096.00, stdev= 0.00, samples=1 00:10:20.151 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:20.151 lat (usec) : 250=55.72%, 500=39.67%, 750=0.74% 00:10:20.151 lat (msec) : 50=3.87% 00:10:20.151 cpu : usr=0.40%, sys=1.09%, ctx=544, majf=0, minf=1 00:10:20.151 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:20.151 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:20.151 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:20.151 issued rwts: total=30,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:20.151 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:20.151 00:10:20.151 Run status group 0 (all jobs): 00:10:20.151 READ: bw=9752KiB/s (9986kB/s), 83.4KiB/s-6099KiB/s (85.4kB/s-6246kB/s), io=9976KiB (10.2MB), run=1007-1023msec 00:10:20.151 WRITE: bw=15.6MiB/s (16.4MB/s), 2026KiB/s-8127KiB/s (2074kB/s-8322kB/s), io=16.0MiB (16.8MB), run=1007-1023msec 00:10:20.151 00:10:20.151 Disk stats (read/write): 00:10:20.151 nvme0n1: ios=1357/1536, merge=0/0, ticks=436/320, in_queue=756, util=82.36% 00:10:20.151 nvme0n2: ios=66/512, merge=0/0, ticks=733/126, in_queue=859, util=86.51% 00:10:20.151 nvme0n3: ios=535/774, merge=0/0, ticks=1421/188, in_queue=1609, util=95.43% 00:10:20.151 nvme0n4: ios=39/512, merge=0/0, ticks=1526/125, in_queue=1651, util=100.00% 00:10:20.151 09:10:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:20.151 [global] 00:10:20.151 thread=1 00:10:20.151 invalidate=1 00:10:20.151 rw=randwrite 00:10:20.151 time_based=1 00:10:20.151 runtime=1 00:10:20.151 ioengine=libaio 00:10:20.151 direct=1 00:10:20.151 bs=4096 00:10:20.151 iodepth=1 00:10:20.151 norandommap=0 00:10:20.151 numjobs=1 00:10:20.151 00:10:20.151 verify_dump=1 00:10:20.151 verify_backlog=512 00:10:20.151 verify_state_save=0 00:10:20.151 do_verify=1 00:10:20.151 verify=crc32c-intel 00:10:20.151 [job0] 00:10:20.151 filename=/dev/nvme0n1 00:10:20.151 [job1] 00:10:20.151 filename=/dev/nvme0n2 00:10:20.151 [job2] 00:10:20.151 filename=/dev/nvme0n3 00:10:20.151 [job3] 00:10:20.151 filename=/dev/nvme0n4 00:10:20.151 Could not set queue depth (nvme0n1) 00:10:20.151 Could not set queue depth (nvme0n2) 00:10:20.151 Could not set queue depth (nvme0n3) 00:10:20.151 Could not set queue depth (nvme0n4) 00:10:20.408 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:20.408 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:20.408 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:20.408 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:20.408 fio-3.35 00:10:20.408 Starting 4 threads 00:10:21.782 00:10:21.782 job0: (groupid=0, jobs=1): err= 0: 
pid=2884045: Sun Nov 17 09:10:26 2024 00:10:21.782 read: IOPS=1785, BW=7141KiB/s (7312kB/s)(7148KiB/1001msec) 00:10:21.782 slat (nsec): min=5573, max=47174, avg=13381.45, stdev=5106.05 00:10:21.782 clat (usec): min=218, max=555, avg=280.40, stdev=22.25 00:10:21.782 lat (usec): min=225, max=571, avg=293.78, stdev=24.66 00:10:21.782 clat percentiles (usec): 00:10:21.782 | 1.00th=[ 237], 5.00th=[ 245], 10.00th=[ 251], 20.00th=[ 265], 00:10:21.782 | 30.00th=[ 273], 40.00th=[ 277], 50.00th=[ 281], 60.00th=[ 285], 00:10:21.782 | 70.00th=[ 289], 80.00th=[ 293], 90.00th=[ 302], 95.00th=[ 310], 00:10:21.782 | 99.00th=[ 326], 99.50th=[ 375], 99.90th=[ 545], 99.95th=[ 553], 00:10:21.782 | 99.99th=[ 553] 00:10:21.782 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:21.782 slat (nsec): min=7948, max=59184, avg=15672.66, stdev=7052.99 00:10:21.782 clat (usec): min=169, max=406, avg=207.57, stdev=28.99 00:10:21.782 lat (usec): min=178, max=444, avg=223.25, stdev=33.57 00:10:21.782 clat percentiles (usec): 00:10:21.782 | 1.00th=[ 174], 5.00th=[ 178], 10.00th=[ 180], 20.00th=[ 184], 00:10:21.782 | 30.00th=[ 190], 40.00th=[ 198], 50.00th=[ 206], 60.00th=[ 212], 00:10:21.782 | 70.00th=[ 219], 80.00th=[ 225], 90.00th=[ 235], 95.00th=[ 245], 00:10:21.782 | 99.00th=[ 355], 99.50th=[ 383], 99.90th=[ 400], 99.95th=[ 408], 00:10:21.782 | 99.99th=[ 408] 00:10:21.782 bw ( KiB/s): min= 8192, max= 8192, per=49.90%, avg=8192.00, stdev= 0.00, samples=1 00:10:21.782 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:21.782 lat (usec) : 250=55.78%, 500=44.15%, 750=0.08% 00:10:21.782 cpu : usr=4.70%, sys=7.50%, ctx=3835, majf=0, minf=2 00:10:21.782 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:21.782 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:21.782 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:21.782 issued rwts: total=1787,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:21.782 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:21.782 job1: (groupid=0, jobs=1): err= 0: pid=2884046: Sun Nov 17 09:10:26 2024 00:10:21.782 read: IOPS=130, BW=523KiB/s (536kB/s)(524KiB/1001msec) 00:10:21.782 slat (nsec): min=6091, max=35417, avg=14921.51, stdev=4187.99 00:10:21.782 clat (usec): min=252, max=42011, avg=6363.29, stdev=14325.97 00:10:21.782 lat (usec): min=259, max=42028, avg=6378.21, stdev=14328.40 00:10:21.782 clat percentiles (usec): 00:10:21.782 | 1.00th=[ 273], 5.00th=[ 429], 10.00th=[ 449], 20.00th=[ 478], 00:10:21.782 | 30.00th=[ 490], 40.00th=[ 494], 50.00th=[ 498], 60.00th=[ 502], 00:10:21.782 | 70.00th=[ 515], 80.00th=[ 537], 90.00th=[41157], 95.00th=[41157], 00:10:21.782 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:21.782 | 99.99th=[42206] 00:10:21.782 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:10:21.782 slat (nsec): min=8996, max=65055, avg=21428.27, stdev=8416.63 00:10:21.782 clat (usec): min=190, max=567, avg=293.55, stdev=49.47 00:10:21.782 lat (usec): min=209, max=592, avg=314.98, stdev=50.01 00:10:21.782 clat percentiles (usec): 00:10:21.782 | 1.00th=[ 204], 5.00th=[ 237], 10.00th=[ 255], 20.00th=[ 265], 00:10:21.782 | 30.00th=[ 273], 40.00th=[ 277], 50.00th=[ 281], 60.00th=[ 285], 00:10:21.782 | 70.00th=[ 297], 80.00th=[ 314], 90.00th=[ 367], 95.00th=[ 404], 00:10:21.782 | 99.00th=[ 469], 99.50th=[ 498], 99.90th=[ 570], 99.95th=[ 570], 00:10:21.782 | 99.99th=[ 570] 00:10:21.782 bw ( 
KiB/s): min= 4096, max= 4096, per=24.95%, avg=4096.00, stdev= 0.00, samples=1 00:10:21.782 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:21.782 lat (usec) : 250=6.07%, 500=84.14%, 750=6.84% 00:10:21.782 lat (msec) : 50=2.95% 00:10:21.782 cpu : usr=0.30%, sys=1.60%, ctx=645, majf=0, minf=1 00:10:21.782 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:21.782 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:21.782 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:21.782 issued rwts: total=131,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:21.782 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:21.782 job2: (groupid=0, jobs=1): err= 0: pid=2884049: Sun Nov 17 09:10:26 2024 00:10:21.782 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:10:21.782 slat (nsec): min=5990, max=50035, avg=14038.30, stdev=7261.45 00:10:21.782 clat (usec): min=244, max=41390, avg=656.71, stdev=3578.27 00:10:21.782 lat (usec): min=251, max=41409, avg=670.75, stdev=3578.46 00:10:21.782 clat percentiles (usec): 00:10:21.782 | 1.00th=[ 251], 5.00th=[ 260], 10.00th=[ 265], 20.00th=[ 269], 00:10:21.782 | 30.00th=[ 277], 40.00th=[ 285], 50.00th=[ 297], 60.00th=[ 310], 00:10:21.782 | 70.00th=[ 343], 80.00th=[ 445], 90.00th=[ 498], 95.00th=[ 537], 00:10:21.782 | 99.00th=[ 709], 99.50th=[40633], 99.90th=[41157], 99.95th=[41157], 00:10:21.782 | 99.99th=[41157] 00:10:21.782 write: IOPS=1133, BW=4535KiB/s (4644kB/s)(4540KiB/1001msec); 0 zone resets 00:10:21.782 slat (nsec): min=7311, max=54773, avg=19446.90, stdev=6559.70 00:10:21.782 clat (usec): min=184, max=513, avg=246.83, stdev=31.37 00:10:21.782 lat (usec): min=195, max=535, avg=266.28, stdev=32.96 00:10:21.782 clat percentiles (usec): 00:10:21.782 | 1.00th=[ 192], 5.00th=[ 206], 10.00th=[ 212], 20.00th=[ 221], 00:10:21.782 | 30.00th=[ 227], 40.00th=[ 235], 50.00th=[ 245], 60.00th=[ 255], 00:10:21.782 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 281], 95.00th=[ 293], 00:10:21.782 | 99.00th=[ 334], 99.50th=[ 392], 99.90th=[ 441], 99.95th=[ 515], 00:10:21.782 | 99.99th=[ 515] 00:10:21.782 bw ( KiB/s): min= 4096, max= 4096, per=24.95%, avg=4096.00, stdev= 0.00, samples=1 00:10:21.782 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:21.782 lat (usec) : 250=29.78%, 500=65.54%, 750=4.21%, 1000=0.09% 00:10:21.782 lat (msec) : 50=0.37% 00:10:21.782 cpu : usr=2.40%, sys=5.30%, ctx=2159, majf=0, minf=1 00:10:21.782 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:21.782 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:21.782 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:21.782 issued rwts: total=1024,1135,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:21.782 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:21.782 job3: (groupid=0, jobs=1): err= 0: pid=2884050: Sun Nov 17 09:10:26 2024 00:10:21.782 read: IOPS=20, BW=82.0KiB/s (83.9kB/s)(84.0KiB/1025msec) 00:10:21.782 slat (nsec): min=13956, max=34424, avg=18942.38, stdev=5239.24 00:10:21.782 clat (usec): min=40898, max=41100, avg=40979.80, stdev=36.76 00:10:21.782 lat (usec): min=40932, max=41115, avg=40998.74, stdev=33.88 00:10:21.782 clat percentiles (usec): 00:10:21.782 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:10:21.782 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:21.782 | 70.00th=[41157], 80.00th=[41157], 
90.00th=[41157], 95.00th=[41157], 00:10:21.782 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:21.782 | 99.99th=[41157] 00:10:21.782 write: IOPS=499, BW=1998KiB/s (2046kB/s)(2048KiB/1025msec); 0 zone resets 00:10:21.782 slat (nsec): min=7544, max=73694, avg=20653.47, stdev=9127.52 00:10:21.782 clat (usec): min=198, max=554, avg=292.81, stdev=49.67 00:10:21.782 lat (usec): min=233, max=569, avg=313.47, stdev=49.45 00:10:21.782 clat percentiles (usec): 00:10:21.782 | 1.00th=[ 210], 5.00th=[ 233], 10.00th=[ 243], 20.00th=[ 260], 00:10:21.782 | 30.00th=[ 269], 40.00th=[ 273], 50.00th=[ 285], 60.00th=[ 289], 00:10:21.782 | 70.00th=[ 302], 80.00th=[ 314], 90.00th=[ 367], 95.00th=[ 392], 00:10:21.782 | 99.00th=[ 453], 99.50th=[ 474], 99.90th=[ 553], 99.95th=[ 553], 00:10:21.782 | 99.99th=[ 553] 00:10:21.782 bw ( KiB/s): min= 4096, max= 4096, per=24.95%, avg=4096.00, stdev= 0.00, samples=1 00:10:21.782 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:21.782 lat (usec) : 250=13.88%, 500=81.99%, 750=0.19% 00:10:21.782 lat (msec) : 50=3.94% 00:10:21.782 cpu : usr=0.10%, sys=1.46%, ctx=533, majf=0, minf=1 00:10:21.782 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:21.783 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:21.783 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:21.783 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:21.783 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:21.783 00:10:21.783 Run status group 0 (all jobs): 00:10:21.783 READ: bw=11.3MiB/s (11.8MB/s), 82.0KiB/s-7141KiB/s (83.9kB/s-7312kB/s), io=11.6MiB (12.1MB), run=1001-1025msec 00:10:21.783 WRITE: bw=16.0MiB/s (16.8MB/s), 1998KiB/s-8184KiB/s (2046kB/s-8380kB/s), io=16.4MiB (17.2MB), run=1001-1025msec 00:10:21.783 00:10:21.783 Disk stats (read/write): 00:10:21.783 nvme0n1: ios=1586/1645, merge=0/0, ticks=445/313, in_queue=758, util=87.78% 00:10:21.783 nvme0n2: ios=59/512, merge=0/0, ticks=1866/144, in_queue=2010, util=97.66% 00:10:21.783 nvme0n3: ios=683/1024, merge=0/0, ticks=582/217, in_queue=799, util=91.25% 00:10:21.783 nvme0n4: ios=73/512, merge=0/0, ticks=752/150, in_queue=902, util=96.01% 00:10:21.783 09:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:21.783 [global] 00:10:21.783 thread=1 00:10:21.783 invalidate=1 00:10:21.783 rw=write 00:10:21.783 time_based=1 00:10:21.783 runtime=1 00:10:21.783 ioengine=libaio 00:10:21.783 direct=1 00:10:21.783 bs=4096 00:10:21.783 iodepth=128 00:10:21.783 norandommap=0 00:10:21.783 numjobs=1 00:10:21.783 00:10:21.783 verify_dump=1 00:10:21.783 verify_backlog=512 00:10:21.783 verify_state_save=0 00:10:21.783 do_verify=1 00:10:21.783 verify=crc32c-intel 00:10:21.783 [job0] 00:10:21.783 filename=/dev/nvme0n1 00:10:21.783 [job1] 00:10:21.783 filename=/dev/nvme0n2 00:10:21.783 [job2] 00:10:21.783 filename=/dev/nvme0n3 00:10:21.783 [job3] 00:10:21.783 filename=/dev/nvme0n4 00:10:21.783 Could not set queue depth (nvme0n1) 00:10:21.783 Could not set queue depth (nvme0n2) 00:10:21.783 Could not set queue depth (nvme0n3) 00:10:21.783 Could not set queue depth (nvme0n4) 00:10:21.783 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:21.783 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=128 00:10:21.783 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:21.783 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:21.783 fio-3.35 00:10:21.783 Starting 4 threads 00:10:23.158 00:10:23.158 job0: (groupid=0, jobs=1): err= 0: pid=2884275: Sun Nov 17 09:10:27 2024 00:10:23.158 read: IOPS=2277, BW=9109KiB/s (9328kB/s)(9200KiB/1010msec) 00:10:23.158 slat (usec): min=2, max=14007, avg=146.14, stdev=950.65 00:10:23.158 clat (usec): min=4326, max=39581, avg=16869.02, stdev=5910.40 00:10:23.158 lat (usec): min=6976, max=39586, avg=17015.16, stdev=5980.21 00:10:23.158 clat percentiles (usec): 00:10:23.158 | 1.00th=[ 7373], 5.00th=[11994], 10.00th=[13566], 20.00th=[14091], 00:10:23.158 | 30.00th=[14222], 40.00th=[14353], 50.00th=[14484], 60.00th=[14877], 00:10:23.158 | 70.00th=[16450], 80.00th=[19268], 90.00th=[24511], 95.00th=[31065], 00:10:23.158 | 99.00th=[38011], 99.50th=[39060], 99.90th=[39584], 99.95th=[39584], 00:10:23.158 | 99.99th=[39584] 00:10:23.158 write: IOPS=2534, BW=9.90MiB/s (10.4MB/s)(10.0MiB/1010msec); 0 zone resets 00:10:23.158 slat (usec): min=3, max=31777, avg=253.51, stdev=1435.80 00:10:23.158 clat (usec): min=3738, max=97353, avg=34757.63, stdev=23100.64 00:10:23.158 lat (usec): min=3745, max=97364, avg=35011.14, stdev=23213.25 00:10:23.158 clat percentiles (usec): 00:10:23.158 | 1.00th=[ 5211], 5.00th=[10159], 10.00th=[13173], 20.00th=[13698], 00:10:23.158 | 30.00th=[20055], 40.00th=[26084], 50.00th=[27919], 60.00th=[28705], 00:10:23.158 | 70.00th=[41681], 80.00th=[55313], 90.00th=[77071], 95.00th=[80217], 00:10:23.158 | 99.00th=[93848], 99.50th=[96994], 99.90th=[96994], 99.95th=[96994], 00:10:23.158 | 99.99th=[96994] 00:10:23.158 bw ( KiB/s): min= 9424, max=11056, per=18.47%, avg=10240.00, stdev=1154.00, samples=2 00:10:23.158 iops : min= 2356, max= 2764, avg=2560.00, stdev=288.50, samples=2 00:10:23.158 lat (msec) : 4=0.25%, 10=3.81%, 20=50.64%, 50=33.72%, 100=11.58% 00:10:23.158 cpu : usr=2.48%, sys=3.57%, ctx=258, majf=0, minf=2 00:10:23.158 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:10:23.158 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:23.158 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:23.158 issued rwts: total=2300,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:23.158 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:23.158 job1: (groupid=0, jobs=1): err= 0: pid=2884276: Sun Nov 17 09:10:27 2024 00:10:23.158 read: IOPS=4575, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1007msec) 00:10:23.158 slat (usec): min=3, max=6393, avg=102.56, stdev=598.74 00:10:23.158 clat (usec): min=8267, max=20117, avg=13053.95, stdev=1618.87 00:10:23.158 lat (usec): min=8278, max=20137, avg=13156.51, stdev=1694.46 00:10:23.158 clat percentiles (usec): 00:10:23.158 | 1.00th=[ 9241], 5.00th=[10290], 10.00th=[11469], 20.00th=[12256], 00:10:23.158 | 30.00th=[12649], 40.00th=[12780], 50.00th=[12911], 60.00th=[13042], 00:10:23.158 | 70.00th=[13304], 80.00th=[13698], 90.00th=[15008], 95.00th=[16188], 00:10:23.158 | 99.00th=[18744], 99.50th=[19268], 99.90th=[20055], 99.95th=[20055], 00:10:23.158 | 99.99th=[20055] 00:10:23.158 write: IOPS=5035, BW=19.7MiB/s (20.6MB/s)(19.8MiB/1007msec); 0 zone resets 00:10:23.158 slat (usec): min=4, max=5973, avg=91.75, stdev=416.50 00:10:23.158 clat (usec): min=6062, max=20188, avg=13213.56, stdev=1761.90 
00:10:23.158 lat (usec): min=6906, max=20200, avg=13305.31, stdev=1784.50 00:10:23.158 clat percentiles (usec): 00:10:23.158 | 1.00th=[ 7832], 5.00th=[10814], 10.00th=[11338], 20.00th=[12125], 00:10:23.158 | 30.00th=[12518], 40.00th=[12911], 50.00th=[13435], 60.00th=[13566], 00:10:23.158 | 70.00th=[13960], 80.00th=[14222], 90.00th=[14615], 95.00th=[16057], 00:10:23.158 | 99.00th=[19006], 99.50th=[19792], 99.90th=[20055], 99.95th=[20055], 00:10:23.158 | 99.99th=[20317] 00:10:23.158 bw ( KiB/s): min=19072, max=20480, per=35.66%, avg=19776.00, stdev=995.61, samples=2 00:10:23.158 iops : min= 4768, max= 5120, avg=4944.00, stdev=248.90, samples=2 00:10:23.158 lat (msec) : 10=3.66%, 20=96.21%, 50=0.13% 00:10:23.158 cpu : usr=7.46%, sys=12.72%, ctx=508, majf=0, minf=1 00:10:23.158 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:10:23.158 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:23.158 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:23.158 issued rwts: total=4608,5071,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:23.158 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:23.158 job2: (groupid=0, jobs=1): err= 0: pid=2884277: Sun Nov 17 09:10:27 2024 00:10:23.159 read: IOPS=3995, BW=15.6MiB/s (16.4MB/s)(15.7MiB/1007msec) 00:10:23.159 slat (usec): min=3, max=17529, avg=129.13, stdev=918.88 00:10:23.159 clat (usec): min=3085, max=59900, avg=15807.38, stdev=4601.32 00:10:23.159 lat (usec): min=3196, max=59912, avg=15936.52, stdev=4684.39 00:10:23.159 clat percentiles (usec): 00:10:23.159 | 1.00th=[ 7046], 5.00th=[10552], 10.00th=[11863], 20.00th=[13304], 00:10:23.159 | 30.00th=[14091], 40.00th=[14353], 50.00th=[14746], 60.00th=[15139], 00:10:23.159 | 70.00th=[16319], 80.00th=[17957], 90.00th=[21103], 95.00th=[25035], 00:10:23.159 | 99.00th=[28443], 99.50th=[42206], 99.90th=[52167], 99.95th=[60031], 00:10:23.159 | 99.99th=[60031] 00:10:23.159 write: IOPS=4067, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1007msec); 0 zone resets 00:10:23.159 slat (usec): min=4, max=17461, avg=103.44, stdev=686.91 00:10:23.159 clat (usec): min=4639, max=73462, avg=15617.97, stdev=8365.50 00:10:23.159 lat (usec): min=4653, max=73482, avg=15721.41, stdev=8413.42 00:10:23.159 clat percentiles (usec): 00:10:23.159 | 1.00th=[ 5669], 5.00th=[ 7635], 10.00th=[ 9372], 20.00th=[13304], 00:10:23.159 | 30.00th=[13960], 40.00th=[14615], 50.00th=[14877], 60.00th=[15401], 00:10:23.159 | 70.00th=[15533], 80.00th=[16057], 90.00th=[17171], 95.00th=[19792], 00:10:23.159 | 99.00th=[62129], 99.50th=[67634], 99.90th=[69731], 99.95th=[73925], 00:10:23.159 | 99.99th=[73925] 00:10:23.159 bw ( KiB/s): min=16384, max=16384, per=29.54%, avg=16384.00, stdev= 0.00, samples=2 00:10:23.159 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:10:23.159 lat (msec) : 4=0.02%, 10=7.77%, 20=83.42%, 50=7.53%, 100=1.26% 00:10:23.159 cpu : usr=5.57%, sys=11.03%, ctx=396, majf=0, minf=1 00:10:23.159 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:23.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:23.159 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:23.159 issued rwts: total=4023,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:23.159 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:23.159 job3: (groupid=0, jobs=1): err= 0: pid=2884278: Sun Nov 17 09:10:27 2024 00:10:23.159 read: IOPS=2007, BW=8031KiB/s (8224kB/s)(8192KiB/1020msec) 00:10:23.159 
slat (usec): min=3, max=17277, avg=171.61, stdev=1132.01 00:10:23.159 clat (usec): min=6863, max=43521, avg=20283.10, stdev=6305.80 00:10:23.159 lat (usec): min=6880, max=43538, avg=20454.71, stdev=6388.18 00:10:23.159 clat percentiles (usec): 00:10:23.159 | 1.00th=[ 9765], 5.00th=[13829], 10.00th=[15401], 20.00th=[15926], 00:10:23.159 | 30.00th=[16450], 40.00th=[17957], 50.00th=[18220], 60.00th=[20841], 00:10:23.159 | 70.00th=[21365], 80.00th=[23200], 90.00th=[29230], 95.00th=[35390], 00:10:23.159 | 99.00th=[40633], 99.50th=[41681], 99.90th=[43254], 99.95th=[43254], 00:10:23.159 | 99.99th=[43779] 00:10:23.159 write: IOPS=2366, BW=9467KiB/s (9694kB/s)(9656KiB/1020msec); 0 zone resets 00:10:23.159 slat (usec): min=4, max=13790, avg=257.73, stdev=1218.22 00:10:23.159 clat (usec): min=1314, max=138571, avg=36376.90, stdev=26731.41 00:10:23.159 lat (usec): min=1328, max=138581, avg=36634.63, stdev=26884.12 00:10:23.159 clat percentiles (msec): 00:10:23.159 | 1.00th=[ 7], 5.00th=[ 12], 10.00th=[ 14], 20.00th=[ 17], 00:10:23.159 | 30.00th=[ 22], 40.00th=[ 27], 50.00th=[ 29], 60.00th=[ 29], 00:10:23.159 | 70.00th=[ 39], 80.00th=[ 54], 90.00th=[ 71], 95.00th=[ 95], 00:10:23.159 | 99.00th=[ 133], 99.50th=[ 138], 99.90th=[ 140], 99.95th=[ 140], 00:10:23.159 | 99.99th=[ 140] 00:10:23.159 bw ( KiB/s): min= 8769, max= 9536, per=16.50%, avg=9152.50, stdev=542.35, samples=2 00:10:23.159 iops : min= 2192, max= 2384, avg=2288.00, stdev=135.76, samples=2 00:10:23.159 lat (msec) : 2=0.04%, 10=2.60%, 20=39.96%, 50=45.43%, 100=9.48% 00:10:23.159 lat (msec) : 250=2.49% 00:10:23.159 cpu : usr=3.53%, sys=5.40%, ctx=261, majf=0, minf=2 00:10:23.159 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:10:23.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:23.159 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:23.159 issued rwts: total=2048,2414,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:23.159 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:23.159 00:10:23.159 Run status group 0 (all jobs): 00:10:23.159 READ: bw=49.7MiB/s (52.1MB/s), 8031KiB/s-17.9MiB/s (8224kB/s-18.7MB/s), io=50.7MiB (53.2MB), run=1007-1020msec 00:10:23.159 WRITE: bw=54.2MiB/s (56.8MB/s), 9467KiB/s-19.7MiB/s (9694kB/s-20.6MB/s), io=55.2MiB (57.9MB), run=1007-1020msec 00:10:23.159 00:10:23.159 Disk stats (read/write): 00:10:23.159 nvme0n1: ios=2098/2215, merge=0/0, ticks=31090/55535, in_queue=86625, util=87.17% 00:10:23.159 nvme0n2: ios=4113/4127, merge=0/0, ticks=26344/24457, in_queue=50801, util=97.26% 00:10:23.159 nvme0n3: ios=3233/3584, merge=0/0, ticks=47185/49272, in_queue=96457, util=100.00% 00:10:23.159 nvme0n4: ios=1536/1991, merge=0/0, ticks=30513/73545, in_queue=104058, util=89.71% 00:10:23.159 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:23.159 [global] 00:10:23.159 thread=1 00:10:23.159 invalidate=1 00:10:23.159 rw=randwrite 00:10:23.159 time_based=1 00:10:23.159 runtime=1 00:10:23.159 ioengine=libaio 00:10:23.159 direct=1 00:10:23.159 bs=4096 00:10:23.159 iodepth=128 00:10:23.159 norandommap=0 00:10:23.159 numjobs=1 00:10:23.159 00:10:23.159 verify_dump=1 00:10:23.159 verify_backlog=512 00:10:23.159 verify_state_save=0 00:10:23.159 do_verify=1 00:10:23.159 verify=crc32c-intel 00:10:23.159 [job0] 00:10:23.159 filename=/dev/nvme0n1 00:10:23.159 [job1] 00:10:23.159 filename=/dev/nvme0n2 
00:10:23.159 [job2] 00:10:23.159 filename=/dev/nvme0n3 00:10:23.159 [job3] 00:10:23.159 filename=/dev/nvme0n4 00:10:23.159 Could not set queue depth (nvme0n1) 00:10:23.159 Could not set queue depth (nvme0n2) 00:10:23.159 Could not set queue depth (nvme0n3) 00:10:23.159 Could not set queue depth (nvme0n4) 00:10:23.159 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:23.159 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:23.159 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:23.159 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:23.159 fio-3.35 00:10:23.159 Starting 4 threads 00:10:24.532 00:10:24.532 job0: (groupid=0, jobs=1): err= 0: pid=2884628: Sun Nov 17 09:10:29 2024 00:10:24.532 read: IOPS=3903, BW=15.2MiB/s (16.0MB/s)(15.3MiB/1003msec) 00:10:24.532 slat (usec): min=2, max=23537, avg=134.55, stdev=970.90 00:10:24.532 clat (usec): min=1630, max=51834, avg=16516.86, stdev=6079.52 00:10:24.532 lat (usec): min=4213, max=51876, avg=16651.41, stdev=6140.61 00:10:24.532 clat percentiles (usec): 00:10:24.532 | 1.00th=[ 5080], 5.00th=[ 9503], 10.00th=[11207], 20.00th=[12125], 00:10:24.532 | 30.00th=[12649], 40.00th=[13173], 50.00th=[13829], 60.00th=[16319], 00:10:24.532 | 70.00th=[19006], 80.00th=[20579], 90.00th=[26608], 95.00th=[29492], 00:10:24.532 | 99.00th=[32113], 99.50th=[32113], 99.90th=[32900], 99.95th=[46924], 00:10:24.532 | 99.99th=[51643] 00:10:24.532 write: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec); 0 zone resets 00:10:24.532 slat (usec): min=3, max=26306, avg=106.05, stdev=917.90 00:10:24.532 clat (usec): min=3718, max=54583, avg=15250.94, stdev=5937.95 00:10:24.532 lat (usec): min=3726, max=54633, avg=15356.99, stdev=6042.92 00:10:24.532 clat percentiles (usec): 00:10:24.532 | 1.00th=[ 4686], 5.00th=[ 7111], 10.00th=[ 9241], 20.00th=[12256], 00:10:24.532 | 30.00th=[13304], 40.00th=[13698], 50.00th=[13960], 60.00th=[14353], 00:10:24.532 | 70.00th=[14484], 80.00th=[16581], 90.00th=[27919], 95.00th=[28181], 00:10:24.532 | 99.00th=[29230], 99.50th=[29492], 99.90th=[31851], 99.95th=[53740], 00:10:24.532 | 99.99th=[54789] 00:10:24.532 bw ( KiB/s): min=16368, max=16400, per=29.52%, avg=16384.00, stdev=22.63, samples=2 00:10:24.532 iops : min= 4092, max= 4100, avg=4096.00, stdev= 5.66, samples=2 00:10:24.532 lat (msec) : 2=0.01%, 4=0.17%, 10=8.30%, 20=70.63%, 50=20.82% 00:10:24.532 lat (msec) : 100=0.06% 00:10:24.532 cpu : usr=4.69%, sys=7.98%, ctx=423, majf=0, minf=1 00:10:24.532 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:24.532 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:24.532 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:24.532 issued rwts: total=3915,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:24.532 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:24.532 job1: (groupid=0, jobs=1): err= 0: pid=2884629: Sun Nov 17 09:10:29 2024 00:10:24.532 read: IOPS=4071, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1006msec) 00:10:24.532 slat (usec): min=3, max=21094, avg=138.08, stdev=920.61 00:10:24.532 clat (usec): min=1261, max=66971, avg=18053.84, stdev=11067.20 00:10:24.532 lat (usec): min=1274, max=66994, avg=18191.92, stdev=11119.03 00:10:24.532 clat percentiles (usec): 00:10:24.532 | 1.00th=[ 6128], 
5.00th=[10683], 10.00th=[11600], 20.00th=[12518], 00:10:24.532 | 30.00th=[13173], 40.00th=[13698], 50.00th=[14353], 60.00th=[15270], 00:10:24.532 | 70.00th=[16188], 80.00th=[19268], 90.00th=[29492], 95.00th=[44303], 00:10:24.532 | 99.00th=[63177], 99.50th=[66847], 99.90th=[66847], 99.95th=[66847], 00:10:24.532 | 99.99th=[66847] 00:10:24.532 write: IOPS=4099, BW=16.0MiB/s (16.8MB/s)(16.1MiB/1006msec); 0 zone resets 00:10:24.532 slat (usec): min=4, max=5919, avg=93.85, stdev=436.88 00:10:24.532 clat (usec): min=1521, max=33816, avg=12993.76, stdev=2956.97 00:10:24.532 lat (usec): min=1534, max=33822, avg=13087.62, stdev=2965.94 00:10:24.532 clat percentiles (usec): 00:10:24.532 | 1.00th=[ 4359], 5.00th=[ 6980], 10.00th=[ 9503], 20.00th=[11600], 00:10:24.532 | 30.00th=[12649], 40.00th=[12911], 50.00th=[13435], 60.00th=[13829], 00:10:24.532 | 70.00th=[14222], 80.00th=[14484], 90.00th=[15139], 95.00th=[15401], 00:10:24.532 | 99.00th=[24511], 99.50th=[26084], 99.90th=[33162], 99.95th=[33162], 00:10:24.532 | 99.99th=[33817] 00:10:24.532 bw ( KiB/s): min=16384, max=16384, per=29.52%, avg=16384.00, stdev= 0.00, samples=2 00:10:24.532 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:10:24.532 lat (msec) : 2=0.28%, 4=0.30%, 10=6.13%, 20=82.92%, 50=8.38% 00:10:24.532 lat (msec) : 100=1.98% 00:10:24.532 cpu : usr=4.88%, sys=8.96%, ctx=467, majf=0, minf=1 00:10:24.532 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:24.532 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:24.532 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:24.532 issued rwts: total=4096,4124,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:24.532 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:24.532 job2: (groupid=0, jobs=1): err= 0: pid=2884630: Sun Nov 17 09:10:29 2024 00:10:24.532 read: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec) 00:10:24.532 slat (usec): min=2, max=16476, avg=161.76, stdev=965.38 00:10:24.532 clat (usec): min=9527, max=45294, avg=19995.86, stdev=5227.69 00:10:24.532 lat (usec): min=9538, max=45317, avg=20157.61, stdev=5292.83 00:10:24.532 clat percentiles (usec): 00:10:24.532 | 1.00th=[10814], 5.00th=[13304], 10.00th=[14615], 20.00th=[15270], 00:10:24.532 | 30.00th=[16319], 40.00th=[18220], 50.00th=[19006], 60.00th=[20841], 00:10:24.532 | 70.00th=[22414], 80.00th=[23200], 90.00th=[27132], 95.00th=[31065], 00:10:24.533 | 99.00th=[34341], 99.50th=[35390], 99.90th=[38011], 99.95th=[43779], 00:10:24.533 | 99.99th=[45351] 00:10:24.533 write: IOPS=3167, BW=12.4MiB/s (13.0MB/s)(12.4MiB/1003msec); 0 zone resets 00:10:24.533 slat (usec): min=4, max=13430, avg=147.17, stdev=1049.38 00:10:24.533 clat (usec): min=735, max=43736, avg=20720.80, stdev=8225.92 00:10:24.533 lat (usec): min=5646, max=50228, avg=20867.97, stdev=8295.87 00:10:24.533 clat percentiles (usec): 00:10:24.533 | 1.00th=[ 7832], 5.00th=[11994], 10.00th=[13829], 20.00th=[15270], 00:10:24.533 | 30.00th=[15795], 40.00th=[16909], 50.00th=[17957], 60.00th=[19006], 00:10:24.533 | 70.00th=[22414], 80.00th=[25035], 90.00th=[34341], 95.00th=[40633], 00:10:24.533 | 99.00th=[43779], 99.50th=[43779], 99.90th=[43779], 99.95th=[43779], 00:10:24.533 | 99.99th=[43779] 00:10:24.533 bw ( KiB/s): min=10592, max=14004, per=22.16%, avg=12298.00, stdev=2412.65, samples=2 00:10:24.533 iops : min= 2648, max= 3503, avg=3075.50, stdev=604.58, samples=2 00:10:24.533 lat (usec) : 750=0.02% 00:10:24.533 lat (msec) : 10=2.59%, 20=57.40%, 50=39.99% 00:10:24.533 
cpu : usr=4.59%, sys=5.49%, ctx=235, majf=0, minf=1 00:10:24.533 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:10:24.533 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:24.533 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:24.533 issued rwts: total=3072,3177,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:24.533 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:24.533 job3: (groupid=0, jobs=1): err= 0: pid=2884631: Sun Nov 17 09:10:29 2024 00:10:24.533 read: IOPS=2443, BW=9775KiB/s (10.0MB/s)(9824KiB/1005msec) 00:10:24.533 slat (usec): min=2, max=26228, avg=214.38, stdev=1529.38 00:10:24.533 clat (usec): min=3301, max=76147, avg=25871.62, stdev=14925.16 00:10:24.533 lat (usec): min=6559, max=76151, avg=26086.01, stdev=15014.16 00:10:24.533 clat percentiles (usec): 00:10:24.533 | 1.00th=[ 6783], 5.00th=[13173], 10.00th=[15664], 20.00th=[16712], 00:10:24.533 | 30.00th=[17695], 40.00th=[17695], 50.00th=[18220], 60.00th=[20841], 00:10:24.533 | 70.00th=[27657], 80.00th=[32900], 90.00th=[55313], 95.00th=[62653], 00:10:24.533 | 99.00th=[76022], 99.50th=[76022], 99.90th=[76022], 99.95th=[76022], 00:10:24.533 | 99.99th=[76022] 00:10:24.533 write: IOPS=2547, BW=9.95MiB/s (10.4MB/s)(10.0MiB/1005msec); 0 zone resets 00:10:24.533 slat (usec): min=3, max=23758, avg=179.23, stdev=1325.01 00:10:24.533 clat (usec): min=5871, max=70371, avg=24931.70, stdev=10684.71 00:10:24.533 lat (usec): min=9092, max=70406, avg=25110.93, stdev=10749.45 00:10:24.533 clat percentiles (usec): 00:10:24.533 | 1.00th=[12518], 5.00th=[14353], 10.00th=[15270], 20.00th=[16188], 00:10:24.533 | 30.00th=[17433], 40.00th=[18744], 50.00th=[21103], 60.00th=[27395], 00:10:24.533 | 70.00th=[28181], 80.00th=[33817], 90.00th=[39060], 95.00th=[45876], 00:10:24.533 | 99.00th=[68682], 99.50th=[68682], 99.90th=[68682], 99.95th=[68682], 00:10:24.533 | 99.99th=[70779] 00:10:24.533 bw ( KiB/s): min= 9224, max=11256, per=18.45%, avg=10240.00, stdev=1436.84, samples=2 00:10:24.533 iops : min= 2306, max= 2814, avg=2560.00, stdev=359.21, samples=2 00:10:24.533 lat (msec) : 4=0.02%, 10=0.90%, 20=50.58%, 50=41.55%, 100=6.96% 00:10:24.533 cpu : usr=1.39%, sys=3.39%, ctx=152, majf=0, minf=1 00:10:24.533 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:10:24.533 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:24.533 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:24.533 issued rwts: total=2456,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:24.533 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:24.533 00:10:24.533 Run status group 0 (all jobs): 00:10:24.533 READ: bw=52.6MiB/s (55.1MB/s), 9775KiB/s-15.9MiB/s (10.0MB/s-16.7MB/s), io=52.9MiB (55.5MB), run=1003-1006msec 00:10:24.533 WRITE: bw=54.2MiB/s (56.8MB/s), 9.95MiB/s-16.0MiB/s (10.4MB/s-16.8MB/s), io=54.5MiB (57.2MB), run=1003-1006msec 00:10:24.533 00:10:24.533 Disk stats (read/write): 00:10:24.533 nvme0n1: ios=3186/3584, merge=0/0, ticks=49840/52816, in_queue=102656, util=98.30% 00:10:24.533 nvme0n2: ios=3635/3696, merge=0/0, ticks=31379/29836, in_queue=61215, util=98.17% 00:10:24.533 nvme0n3: ios=2615/2881, merge=0/0, ticks=27376/31342, in_queue=58718, util=98.13% 00:10:24.533 nvme0n4: ios=1923/2048, merge=0/0, ticks=25854/24577, in_queue=50431, util=96.02% 00:10:24.533 09:10:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:24.533 09:10:29 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2884767 00:10:24.533 09:10:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:24.533 09:10:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:24.533 [global] 00:10:24.533 thread=1 00:10:24.533 invalidate=1 00:10:24.533 rw=read 00:10:24.533 time_based=1 00:10:24.533 runtime=10 00:10:24.533 ioengine=libaio 00:10:24.533 direct=1 00:10:24.533 bs=4096 00:10:24.533 iodepth=1 00:10:24.533 norandommap=1 00:10:24.533 numjobs=1 00:10:24.533 00:10:24.533 [job0] 00:10:24.533 filename=/dev/nvme0n1 00:10:24.533 [job1] 00:10:24.533 filename=/dev/nvme0n2 00:10:24.533 [job2] 00:10:24.533 filename=/dev/nvme0n3 00:10:24.533 [job3] 00:10:24.533 filename=/dev/nvme0n4 00:10:24.533 Could not set queue depth (nvme0n1) 00:10:24.533 Could not set queue depth (nvme0n2) 00:10:24.533 Could not set queue depth (nvme0n3) 00:10:24.533 Could not set queue depth (nvme0n4) 00:10:24.791 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:24.791 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:24.791 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:24.791 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:24.791 fio-3.35 00:10:24.791 Starting 4 threads 00:10:28.072 09:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:28.072 09:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:28.072 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=25473024, buflen=4096 00:10:28.072 fio: pid=2884858, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:28.072 09:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:28.072 09:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:28.072 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=4132864, buflen=4096 00:10:28.072 fio: pid=2884857, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:28.329 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=23474176, buflen=4096 00:10:28.329 fio: pid=2884855, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:28.329 09:10:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:28.330 09:10:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:28.587 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=44130304, buflen=4096 00:10:28.587 fio: pid=2884856, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:28.845 00:10:28.845 job0: (groupid=0, jobs=1): 
err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2884855: Sun Nov 17 09:10:33 2024 00:10:28.845 read: IOPS=1630, BW=6522KiB/s (6678kB/s)(22.4MiB/3515msec) 00:10:28.845 slat (usec): min=5, max=24947, avg=17.16, stdev=345.39 00:10:28.845 clat (usec): min=215, max=41350, avg=588.96, stdev=3553.35 00:10:28.845 lat (usec): min=221, max=48967, avg=606.12, stdev=3585.81 00:10:28.845 clat percentiles (usec): 00:10:28.845 | 1.00th=[ 229], 5.00th=[ 235], 10.00th=[ 239], 20.00th=[ 247], 00:10:28.845 | 30.00th=[ 251], 40.00th=[ 258], 50.00th=[ 269], 60.00th=[ 277], 00:10:28.845 | 70.00th=[ 285], 80.00th=[ 293], 90.00th=[ 318], 95.00th=[ 371], 00:10:28.845 | 99.00th=[ 586], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:28.845 | 99.99th=[41157] 00:10:28.845 bw ( KiB/s): min= 184, max=14288, per=29.82%, avg=7381.33, stdev=6016.20, samples=6 00:10:28.845 iops : min= 46, max= 3572, avg=1845.33, stdev=1504.05, samples=6 00:10:28.845 lat (usec) : 250=27.30%, 500=71.18%, 750=0.61%, 1000=0.05% 00:10:28.845 lat (msec) : 2=0.05%, 4=0.02%, 50=0.77% 00:10:28.845 cpu : usr=1.22%, sys=2.93%, ctx=5736, majf=0, minf=1 00:10:28.845 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:28.845 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:28.845 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:28.845 issued rwts: total=5732,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:28.845 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:28.845 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2884856: Sun Nov 17 09:10:33 2024 00:10:28.845 read: IOPS=2809, BW=11.0MiB/s (11.5MB/s)(42.1MiB/3835msec) 00:10:28.845 slat (usec): min=5, max=28536, avg=18.69, stdev=380.26 00:10:28.845 clat (usec): min=210, max=2307, avg=332.13, stdev=79.09 00:10:28.845 lat (usec): min=216, max=28915, avg=350.82, stdev=389.65 00:10:28.845 clat percentiles (usec): 00:10:28.845 | 1.00th=[ 225], 5.00th=[ 235], 10.00th=[ 241], 20.00th=[ 251], 00:10:28.845 | 30.00th=[ 269], 40.00th=[ 293], 50.00th=[ 351], 60.00th=[ 363], 00:10:28.845 | 70.00th=[ 375], 80.00th=[ 388], 90.00th=[ 408], 95.00th=[ 437], 00:10:28.845 | 99.00th=[ 570], 99.50th=[ 603], 99.90th=[ 668], 99.95th=[ 685], 00:10:28.845 | 99.99th=[ 1598] 00:10:28.845 bw ( KiB/s): min=10024, max=13066, per=44.40%, avg=10991.14, stdev=1117.55, samples=7 00:10:28.845 iops : min= 2506, max= 3266, avg=2747.71, stdev=279.23, samples=7 00:10:28.845 lat (usec) : 250=18.78%, 500=78.00%, 750=3.19% 00:10:28.845 lat (msec) : 2=0.01%, 4=0.01% 00:10:28.845 cpu : usr=2.35%, sys=4.43%, ctx=10783, majf=0, minf=2 00:10:28.845 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:28.845 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:28.845 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:28.845 issued rwts: total=10775,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:28.845 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:28.845 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2884857: Sun Nov 17 09:10:33 2024 00:10:28.845 read: IOPS=313, BW=1254KiB/s (1284kB/s)(4036KiB/3218msec) 00:10:28.845 slat (nsec): min=4540, max=46302, avg=14108.68, stdev=8634.73 00:10:28.845 clat (usec): min=226, max=41876, avg=3148.41, stdev=10414.49 00:10:28.845 lat (usec): min=232, max=41890, avg=3162.50, 
stdev=10415.75 00:10:28.845 clat percentiles (usec): 00:10:28.845 | 1.00th=[ 233], 5.00th=[ 239], 10.00th=[ 243], 20.00th=[ 251], 00:10:28.845 | 30.00th=[ 265], 40.00th=[ 277], 50.00th=[ 285], 60.00th=[ 293], 00:10:28.845 | 70.00th=[ 306], 80.00th=[ 322], 90.00th=[ 359], 95.00th=[41157], 00:10:28.845 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41681], 00:10:28.845 | 99.99th=[41681] 00:10:28.845 bw ( KiB/s): min= 96, max= 3392, per=5.41%, avg=1338.67, stdev=1533.14, samples=6 00:10:28.845 iops : min= 24, max= 848, avg=334.67, stdev=383.29, samples=6 00:10:28.845 lat (usec) : 250=17.92%, 500=74.85%, 750=0.10% 00:10:28.845 lat (msec) : 50=7.03% 00:10:28.845 cpu : usr=0.19%, sys=0.47%, ctx=1011, majf=0, minf=1 00:10:28.845 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:28.845 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:28.845 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:28.845 issued rwts: total=1010,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:28.845 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:28.845 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2884858: Sun Nov 17 09:10:33 2024 00:10:28.845 read: IOPS=2129, BW=8516KiB/s (8721kB/s)(24.3MiB/2921msec) 00:10:28.845 slat (nsec): min=5853, max=66041, avg=13037.59, stdev=6529.81 00:10:28.845 clat (usec): min=245, max=41288, avg=449.36, stdev=1648.58 00:10:28.845 lat (usec): min=252, max=41303, avg=462.40, stdev=1648.90 00:10:28.845 clat percentiles (usec): 00:10:28.845 | 1.00th=[ 285], 5.00th=[ 326], 10.00th=[ 343], 20.00th=[ 351], 00:10:28.845 | 30.00th=[ 363], 40.00th=[ 367], 50.00th=[ 375], 60.00th=[ 383], 00:10:28.846 | 70.00th=[ 392], 80.00th=[ 400], 90.00th=[ 424], 95.00th=[ 465], 00:10:28.846 | 99.00th=[ 570], 99.50th=[ 611], 99.90th=[41157], 99.95th=[41157], 00:10:28.846 | 99.99th=[41157] 00:10:28.846 bw ( KiB/s): min= 2136, max=10432, per=33.78%, avg=8361.60, stdev=3528.67, samples=5 00:10:28.846 iops : min= 534, max= 2608, avg=2090.40, stdev=882.17, samples=5 00:10:28.846 lat (usec) : 250=0.11%, 500=97.27%, 750=2.38%, 1000=0.02% 00:10:28.846 lat (msec) : 2=0.03%, 50=0.18% 00:10:28.846 cpu : usr=1.64%, sys=4.42%, ctx=6222, majf=0, minf=2 00:10:28.846 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:28.846 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:28.846 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:28.846 issued rwts: total=6220,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:28.846 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:28.846 00:10:28.846 Run status group 0 (all jobs): 00:10:28.846 READ: bw=24.2MiB/s (25.3MB/s), 1254KiB/s-11.0MiB/s (1284kB/s-11.5MB/s), io=92.7MiB (97.2MB), run=2921-3835msec 00:10:28.846 00:10:28.846 Disk stats (read/write): 00:10:28.846 nvme0n1: ios=5712/0, merge=0/0, ticks=4217/0, in_queue=4217, util=99.20% 00:10:28.846 nvme0n2: ios=9944/0, merge=0/0, ticks=3250/0, in_queue=3250, util=94.40% 00:10:28.846 nvme0n3: ios=1006/0, merge=0/0, ticks=3045/0, in_queue=3045, util=96.79% 00:10:28.846 nvme0n4: ios=6155/0, merge=0/0, ticks=3211/0, in_queue=3211, util=100.00% 00:10:28.846 09:10:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:28.846 09:10:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:29.103 09:10:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:29.103 09:10:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:29.361 09:10:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:29.361 09:10:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:29.619 09:10:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:29.619 09:10:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:30.185 09:10:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:30.185 09:10:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:30.443 09:10:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:30.443 09:10:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 2884767 00:10:30.443 09:10:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:30.443 09:10:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:31.375 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:31.375 09:10:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:31.375 09:10:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:10:31.375 09:10:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:31.375 09:10:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:31.375 09:10:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:31.375 09:10:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:31.375 09:10:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:10:31.375 09:10:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:31.375 09:10:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:31.375 nvmf hotplug test: fio failed as expected 00:10:31.375 09:10:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:31.633 09:10:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:31.633 09:10:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:31.633 09:10:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:31.633 09:10:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:31.633 09:10:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:31.633 09:10:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:31.633 09:10:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:10:31.633 09:10:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:31.633 09:10:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:10:31.633 09:10:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:31.633 09:10:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:31.633 rmmod nvme_tcp 00:10:31.633 rmmod nvme_fabrics 00:10:31.633 rmmod nvme_keyring 00:10:31.633 09:10:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:31.633 09:10:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:10:31.633 09:10:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:10:31.633 09:10:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 2882597 ']' 00:10:31.633 09:10:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 2882597 00:10:31.633 09:10:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 2882597 ']' 00:10:31.633 09:10:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 2882597 00:10:31.633 09:10:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:10:31.633 09:10:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:31.633 09:10:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2882597 00:10:31.633 09:10:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:31.633 09:10:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:31.633 09:10:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2882597' 00:10:31.633 killing process with pid 2882597 00:10:31.633 09:10:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 2882597 00:10:31.633 09:10:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 2882597 00:10:33.008 09:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:33.008 09:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:33.009 09:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:33.009 09:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:10:33.009 09:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:10:33.009 09:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep 
-v SPDK_NVMF 00:10:33.009 09:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:10:33.009 09:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:33.009 09:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:33.009 09:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:33.009 09:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:33.009 09:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:34.914 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:34.914 00:10:34.914 real 0m27.083s 00:10:34.914 user 1m34.694s 00:10:34.914 sys 0m7.642s 00:10:34.914 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:34.914 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:34.914 ************************************ 00:10:34.914 END TEST nvmf_fio_target 00:10:34.914 ************************************ 00:10:34.914 09:10:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:34.914 09:10:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:34.914 09:10:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:34.914 09:10:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:34.914 ************************************ 00:10:34.914 START TEST nvmf_bdevio 00:10:34.914 ************************************ 00:10:34.914 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:34.914 * Looking for test storage... 
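
Before the bdevio suite gets going, it is worth spelling out what the fio_target hotplug phase above actually exercised: four libaio read jobs ran against /dev/nvme0n1 through /dev/nvme0n4 while the backing raid and malloc bdevs were deleted over RPC, so every job eventually hit "Operation not supported" and fio exited non-zero, which fio.sh then reports as "nvmf hotplug test: fio failed as expected". A minimal bash sketch of that flow, with the long workspace paths shortened and the fio invocation approximated rather than the exact scripts/fio-wrapper call:

    # background 4k reads against the four connected namespaces
    # (approximation of: fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10)
    fio --name=hotplug --ioengine=libaio --direct=1 --rw=read --bs=4096 --iodepth=1 \
        --time_based --runtime=10 \
        --filename=/dev/nvme0n1:/dev/nvme0n2:/dev/nvme0n3:/dev/nvme0n4 &
    fio_pid=$!

    # pull the backing bdevs out from under the running jobs, as fio.sh does via rpc.py
    ./scripts/rpc.py bdev_raid_delete concat0
    ./scripts/rpc.py bdev_raid_delete raid0
    for m in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
        ./scripts/rpc.py bdev_malloc_delete "$m"
    done

    # a surviving fio would be the real failure here
    if wait "$fio_pid"; then
        echo "unexpected: fio completed despite hotplug"
    else
        echo "nvmf hotplug test: fio failed as expected"
    fi
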
00:10:34.914 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:34.914 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:34.914 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:10:34.914 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:34.914 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:34.914 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:34.914 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:34.914 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:34.914 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:34.914 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:34.914 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:34.914 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:34.914 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:34.914 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:10:34.914 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:34.914 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:34.914 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:10:34.914 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:34.914 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:34.914 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:34.914 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:34.914 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:34.914 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:34.914 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:34.914 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:34.914 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:34.914 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:34.914 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:34.914 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:34.914 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:34.914 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:34.914 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:34.914 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:10:34.914 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:34.914 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:34.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.914 --rc genhtml_branch_coverage=1 00:10:34.914 --rc genhtml_function_coverage=1 00:10:34.914 --rc genhtml_legend=1 00:10:34.914 --rc geninfo_all_blocks=1 00:10:34.914 --rc geninfo_unexecuted_blocks=1 00:10:34.914 00:10:34.914 ' 00:10:34.914 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:34.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.914 --rc genhtml_branch_coverage=1 00:10:34.914 --rc genhtml_function_coverage=1 00:10:34.914 --rc genhtml_legend=1 00:10:34.914 --rc geninfo_all_blocks=1 00:10:34.914 --rc geninfo_unexecuted_blocks=1 00:10:34.914 00:10:34.914 ' 00:10:34.914 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:34.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.914 --rc genhtml_branch_coverage=1 00:10:34.914 --rc genhtml_function_coverage=1 00:10:34.914 --rc genhtml_legend=1 00:10:34.914 --rc geninfo_all_blocks=1 00:10:34.914 --rc geninfo_unexecuted_blocks=1 00:10:34.914 00:10:34.914 ' 00:10:34.914 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:34.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.914 --rc genhtml_branch_coverage=1 00:10:34.914 --rc genhtml_function_coverage=1 00:10:34.914 --rc genhtml_legend=1 00:10:34.914 --rc geninfo_all_blocks=1 00:10:34.914 --rc geninfo_unexecuted_blocks=1 00:10:34.914 00:10:34.914 ' 00:10:34.914 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:34.914 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:34.914 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:34.914 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:34.914 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:34.914 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:34.914 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:34.914 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:34.914 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:34.914 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:34.914 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:34.914 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:34.914 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:34.914 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:34.914 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:34.914 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:34.914 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:34.914 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:34.914 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:34.914 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:10:34.914 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:34.914 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:34.914 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:34.914 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.914 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.914 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.914 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:34.915 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.915 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:34.915 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:34.915 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:34.915 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:34.915 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:34.915 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:34.915 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:34.915 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:34.915 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:34.915 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:34.915 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:34.915 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:34.915 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:34.915 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:10:34.915 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:34.915 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:34.915 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:34.915 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:34.915 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:34.915 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:34.915 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:34.915 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:34.915 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:34.915 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:34.915 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:10:34.915 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:37.446 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:37.446 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:10:37.446 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:37.446 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:37.446 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:37.446 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:37.446 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:37.446 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:10:37.446 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:37.446 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:10:37.446 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:10:37.446 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:10:37.446 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:10:37.446 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:10:37.446 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:10:37.446 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:37.446 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:37.446 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:37.446 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:37.446 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:37.446 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:37.446 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:37.446 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:37.446 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:37.446 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:37.446 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:37.446 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:37.446 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:37.446 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:37.446 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:37.446 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:37.446 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:37.446 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:37.446 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:37.446 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:37.446 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:37.446 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:37.446 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:37.446 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:37.447 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:37.447 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:37.447 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:37.447 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:37.447 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:37.447 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:37.447 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:37.447 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:37.447 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:37.447 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:37.447 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:37.447 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:37.447 09:10:42 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:37.447 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:37.447 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:37.447 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:37.447 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:37.447 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:37.447 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:37.447 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:37.447 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:37.447 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:37.447 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:37.447 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:37.447 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:37.447 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:37.447 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:37.447 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:37.447 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:37.447 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:37.447 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:37.447 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:37.447 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:37.447 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:37.447 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:10:37.447 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:37.447 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:37.447 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:37.447 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:37.447 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:37.447 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:37.447 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:37.447 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:37.447 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:37.447 
09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:37.447 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:37.447 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:37.447 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:37.447 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:37.447 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:37.447 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:37.447 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:37.447 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:37.447 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:37.447 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:37.447 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:37.447 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:37.447 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:37.447 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:37.447 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:37.447 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:37.447 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:37.447 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:10:37.447 00:10:37.447 --- 10.0.0.2 ping statistics --- 00:10:37.447 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:37.447 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:10:37.447 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:37.447 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:37.447 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:10:37.447 00:10:37.447 --- 10.0.0.1 ping statistics --- 00:10:37.447 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:37.447 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:10:37.447 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:37.447 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:10:37.447 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:37.447 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:37.447 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:37.447 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:37.447 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:37.447 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:37.447 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:37.447 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:37.447 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:37.447 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:37.447 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:37.447 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=2887761 00:10:37.447 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2887761 00:10:37.447 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:37.447 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 2887761 ']' 00:10:37.447 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:37.447 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:37.447 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:37.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:37.447 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:37.447 09:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:37.447 [2024-11-17 09:10:42.258810] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
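
The nvmftestinit block above is the physical-NIC TCP bring-up: the two E810 ports are detected as cvl_0_0 and cvl_0_1, cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace and addressed as 10.0.0.2 (the target side), cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, an iptables rule opens TCP port 4420, both directions are verified with ping, and only then is nvmf_tgt launched inside the namespace. Reduced to its core commands (a sketch assuming the interface names from this run, with workspace paths shortened):

    ip netns add cvl_0_0_ns_spdk                          # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:...'              # tagged so teardown can strip it again
    ping -c 1 10.0.0.2                                    # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &   # the app starting just above
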
00:10:37.447 [2024-11-17 09:10:42.258967] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:37.447 [2024-11-17 09:10:42.417633] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:37.706 [2024-11-17 09:10:42.563006] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:37.706 [2024-11-17 09:10:42.563099] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:37.706 [2024-11-17 09:10:42.563127] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:37.706 [2024-11-17 09:10:42.563152] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:37.706 [2024-11-17 09:10:42.563171] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:37.706 [2024-11-17 09:10:42.566029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:37.706 [2024-11-17 09:10:42.566116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:37.706 [2024-11-17 09:10:42.566227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:37.706 [2024-11-17 09:10:42.566244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:38.272 09:10:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:38.272 09:10:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:10:38.272 09:10:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:38.272 09:10:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:38.272 09:10:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:38.272 09:10:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:38.272 09:10:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:38.272 09:10:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.272 09:10:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:38.272 [2024-11-17 09:10:43.252544] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:38.272 09:10:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.272 09:10:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:38.272 09:10:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.272 09:10:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:38.530 Malloc0 00:10:38.530 09:10:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.530 09:10:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:38.530 09:10:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.530 09:10:43 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:38.530 09:10:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.530 09:10:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:38.530 09:10:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.530 09:10:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:38.530 09:10:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.530 09:10:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:38.530 09:10:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.530 09:10:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:38.530 [2024-11-17 09:10:43.375056] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:38.530 09:10:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.530 09:10:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:38.530 09:10:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:38.530 09:10:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:10:38.530 09:10:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:10:38.530 09:10:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:38.530 09:10:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:38.530 { 00:10:38.530 "params": { 00:10:38.530 "name": "Nvme$subsystem", 00:10:38.530 "trtype": "$TEST_TRANSPORT", 00:10:38.530 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:38.530 "adrfam": "ipv4", 00:10:38.530 "trsvcid": "$NVMF_PORT", 00:10:38.530 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:38.530 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:38.530 "hdgst": ${hdgst:-false}, 00:10:38.530 "ddgst": ${ddgst:-false} 00:10:38.530 }, 00:10:38.530 "method": "bdev_nvme_attach_controller" 00:10:38.530 } 00:10:38.530 EOF 00:10:38.530 )") 00:10:38.530 09:10:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:10:38.530 09:10:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:10:38.530 09:10:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:10:38.530 09:10:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:38.530 "params": { 00:10:38.530 "name": "Nvme1", 00:10:38.530 "trtype": "tcp", 00:10:38.530 "traddr": "10.0.0.2", 00:10:38.530 "adrfam": "ipv4", 00:10:38.530 "trsvcid": "4420", 00:10:38.530 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:38.530 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:38.530 "hdgst": false, 00:10:38.530 "ddgst": false 00:10:38.530 }, 00:10:38.530 "method": "bdev_nvme_attach_controller" 00:10:38.530 }' 00:10:38.530 [2024-11-17 09:10:43.461270] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
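
Once the target is up, bdevio.sh provisions it entirely over the RPC socket: rpc_cmd in the trace is a thin wrapper around scripts/rpc.py, so the sequence above is roughly equivalent to the direct calls below (paths shortened). The bdevio binary is then handed the generated JSON shown above, most likely via process substitution, which is why the trace shows --json /dev/fd/62; that config attaches controller Nvme1 over TCP to 10.0.0.2:4420, and its namespace is the Nvme1n1 bdev under test.

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192          # same flags as the rpc_cmd call above
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0             # 64 MiB backing bdev, 512-byte blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # bdevio consumes the printed JSON, which boils down to one bdev_nvme_attach_controller entry
    ./test/bdev/bdevio/bdevio --json <(gen_nvmf_target_json)
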
00:10:38.530 [2024-11-17 09:10:43.461425] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2887917 ] 00:10:38.788 [2024-11-17 09:10:43.599732] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:38.788 [2024-11-17 09:10:43.733480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:38.788 [2024-11-17 09:10:43.733505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:38.788 [2024-11-17 09:10:43.733511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:39.354 I/O targets: 00:10:39.354 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:39.354 00:10:39.354 00:10:39.354 CUnit - A unit testing framework for C - Version 2.1-3 00:10:39.354 http://cunit.sourceforge.net/ 00:10:39.354 00:10:39.354 00:10:39.354 Suite: bdevio tests on: Nvme1n1 00:10:39.354 Test: blockdev write read block ...passed 00:10:39.612 Test: blockdev write zeroes read block ...passed 00:10:39.612 Test: blockdev write zeroes read no split ...passed 00:10:39.612 Test: blockdev write zeroes read split ...passed 00:10:39.612 Test: blockdev write zeroes read split partial ...passed 00:10:39.612 Test: blockdev reset ...[2024-11-17 09:10:44.514166] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:10:39.612 [2024-11-17 09:10:44.514372] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2f00 (9): Bad file descriptor 00:10:39.612 [2024-11-17 09:10:44.529227] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:10:39.612 passed 00:10:39.612 Test: blockdev write read 8 blocks ...passed 00:10:39.612 Test: blockdev write read size > 128k ...passed 00:10:39.612 Test: blockdev write read invalid size ...passed 00:10:39.612 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:39.612 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:39.612 Test: blockdev write read max offset ...passed 00:10:39.870 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:39.870 Test: blockdev writev readv 8 blocks ...passed 00:10:39.870 Test: blockdev writev readv 30 x 1block ...passed 00:10:39.870 Test: blockdev writev readv block ...passed 00:10:39.870 Test: blockdev writev readv size > 128k ...passed 00:10:39.870 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:39.870 Test: blockdev comparev and writev ...[2024-11-17 09:10:44.831055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:39.870 [2024-11-17 09:10:44.831134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:39.870 [2024-11-17 09:10:44.831181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:39.870 [2024-11-17 09:10:44.831210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:39.870 [2024-11-17 09:10:44.831745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:39.870 [2024-11-17 09:10:44.831780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:39.870 [2024-11-17 09:10:44.831821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:39.870 [2024-11-17 09:10:44.831847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:39.870 [2024-11-17 09:10:44.832281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:39.870 [2024-11-17 09:10:44.832324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:39.870 [2024-11-17 09:10:44.832359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:39.870 [2024-11-17 09:10:44.832394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:39.870 [2024-11-17 09:10:44.832867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:39.870 [2024-11-17 09:10:44.832900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:39.870 [2024-11-17 09:10:44.832933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:39.870 [2024-11-17 09:10:44.832958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:39.870 passed 00:10:40.128 Test: blockdev nvme passthru rw ...passed 00:10:40.128 Test: blockdev nvme passthru vendor specific ...[2024-11-17 09:10:44.916829] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:40.128 [2024-11-17 09:10:44.916889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:40.128 [2024-11-17 09:10:44.917105] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:40.128 [2024-11-17 09:10:44.917136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:40.128 [2024-11-17 09:10:44.917326] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:40.128 [2024-11-17 09:10:44.917356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:40.128 [2024-11-17 09:10:44.917558] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:40.128 [2024-11-17 09:10:44.917591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:40.128 passed 00:10:40.128 Test: blockdev nvme admin passthru ...passed 00:10:40.128 Test: blockdev copy ...passed 00:10:40.128 00:10:40.128 Run Summary: Type Total Ran Passed Failed Inactive 00:10:40.128 suites 1 1 n/a 0 0 00:10:40.128 tests 23 23 23 0 0 00:10:40.128 asserts 152 152 152 0 n/a 00:10:40.128 00:10:40.128 Elapsed time = 1.377 seconds 00:10:41.062 09:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:41.062 09:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.062 09:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:41.062 09:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.062 09:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:41.062 09:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:41.063 09:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:41.063 09:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:41.063 09:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:41.063 09:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:41.063 09:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:41.063 09:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:41.063 rmmod nvme_tcp 00:10:41.063 rmmod nvme_fabrics 00:10:41.063 rmmod nvme_keyring 00:10:41.063 09:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:41.063 09:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:41.063 09:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
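
Teardown then mirrors the setup. nvmftestfini unloads the kernel initiator stack (the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines above), and just below it kills the nvmf_tgt process recorded in nvmfpid, strips the SPDK-tagged iptables rules, and clears the namespace state so the next suite starts clean. A rough sketch of those steps; remove_spdk_ns itself runs with tracing suppressed, so deleting the namespace is the assumed effect rather than something visible in this log:

    modprobe -v -r nvme-tcp             # the trace shows nvme_fabrics and nvme_keyring going with it
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"  # pid 2887761 in this run
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the comment-tagged SPDK rules
    ip netns delete cvl_0_0_ns_spdk     # assumed body of remove_spdk_ns
    ip -4 addr flush cvl_0_1            # clear the initiator-side address
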
00:10:41.063 09:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 2887761 ']' 00:10:41.063 09:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2887761 00:10:41.063 09:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 2887761 ']' 00:10:41.063 09:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 2887761 00:10:41.063 09:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:10:41.063 09:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:41.063 09:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2887761 00:10:41.063 09:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:10:41.063 09:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:10:41.063 09:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2887761' 00:10:41.063 killing process with pid 2887761 00:10:41.063 09:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 2887761 00:10:41.063 09:10:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 2887761 00:10:42.507 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:42.507 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:42.507 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:42.507 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:10:42.507 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:10:42.507 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:42.507 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:10:42.507 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:42.507 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:42.507 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:42.507 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:42.507 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:44.415 09:10:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:44.415 00:10:44.415 real 0m9.487s 00:10:44.415 user 0m23.051s 00:10:44.415 sys 0m2.527s 00:10:44.415 09:10:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:44.415 09:10:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:44.415 ************************************ 00:10:44.415 END TEST nvmf_bdevio 00:10:44.415 ************************************ 00:10:44.415 09:10:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:44.415 00:10:44.415 real 4m29.669s 00:10:44.415 user 11m49.860s 00:10:44.415 sys 1m9.753s 
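For reference, the nvmftestfini teardown traced above boils down to the following sequence once the harness helpers are inlined. This is a sketch reconstructed from the trace: $nvmfpid stands for the tracked target PID (2887761 here), and the ip netns delete line is an assumption about what _remove_spdk_ns does, since the trace does not expand it.

    sync
    modprobe -v -r nvme-tcp        # drops nvme_tcp plus the nvme_fabrics/nvme_keyring deps seen in the rmmod output
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"                     # killprocess: check the comm name (reactor_3) first, then stop the target
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # iptr: strip the SPDK_NVMF-tagged ACCEPT rule added at init
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null            # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1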
00:10:44.416 09:10:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:44.416 09:10:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:44.416 ************************************ 00:10:44.416 END TEST nvmf_target_core 00:10:44.416 ************************************ 00:10:44.416 09:10:49 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:44.416 09:10:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:44.416 09:10:49 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:44.416 09:10:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:44.416 ************************************ 00:10:44.416 START TEST nvmf_target_extra 00:10:44.416 ************************************ 00:10:44.416 09:10:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:44.416 * Looking for test storage... 00:10:44.416 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:44.416 09:10:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:44.416 09:10:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:10:44.416 09:10:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:44.676 09:10:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:44.676 09:10:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:44.676 09:10:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:44.676 09:10:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:44.676 09:10:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:44.676 09:10:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:44.676 09:10:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:44.676 09:10:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:44.676 09:10:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:44.676 09:10:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:44.676 09:10:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:44.676 09:10:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:44.676 09:10:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:10:44.676 09:10:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:44.676 09:10:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:44.676 09:10:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:44.676 09:10:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:44.676 09:10:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:44.676 09:10:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:44.676 09:10:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:44.676 09:10:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:44.676 09:10:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:44.676 09:10:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:44.676 09:10:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:44.676 09:10:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:44.676 09:10:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:44.676 09:10:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:44.676 09:10:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:44.676 09:10:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:44.676 09:10:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:44.676 09:10:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:44.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.676 --rc genhtml_branch_coverage=1 00:10:44.676 --rc genhtml_function_coverage=1 00:10:44.676 --rc genhtml_legend=1 00:10:44.676 --rc geninfo_all_blocks=1 00:10:44.676 --rc geninfo_unexecuted_blocks=1 00:10:44.676 00:10:44.676 ' 00:10:44.676 09:10:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:44.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.676 --rc genhtml_branch_coverage=1 00:10:44.676 --rc genhtml_function_coverage=1 00:10:44.676 --rc genhtml_legend=1 00:10:44.676 --rc geninfo_all_blocks=1 00:10:44.676 --rc geninfo_unexecuted_blocks=1 00:10:44.676 00:10:44.676 ' 00:10:44.676 09:10:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:44.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.676 --rc genhtml_branch_coverage=1 00:10:44.676 --rc genhtml_function_coverage=1 00:10:44.676 --rc genhtml_legend=1 00:10:44.676 --rc geninfo_all_blocks=1 00:10:44.676 --rc geninfo_unexecuted_blocks=1 00:10:44.676 00:10:44.676 ' 00:10:44.676 09:10:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:44.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.676 --rc genhtml_branch_coverage=1 00:10:44.676 --rc genhtml_function_coverage=1 00:10:44.676 --rc genhtml_legend=1 00:10:44.676 --rc geninfo_all_blocks=1 00:10:44.676 --rc geninfo_unexecuted_blocks=1 00:10:44.676 00:10:44.676 ' 00:10:44.676 09:10:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:44.676 09:10:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:44.676 09:10:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:44.676 09:10:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:44.676 09:10:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
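The cmp_versions trace a few lines above is the coverage gate that repeats at the start of each test script in this section: lcov reports 1.15, it is compared field by field against 2, and because 1.15 < 2 the pre-2.0 spelling of the rc options is kept. A condensed stand-in for that check, using sort -V instead of the harness's own cmp_versions helper (and therefore treating an exactly-equal version the same as an older one), would be:

    lcov_ver=$(lcov --version | awk '{print $NF}')   # 1.15 on this builder
    if printf '%s\n' "$lcov_ver" 2 | sort -V -C; then
        lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi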
00:10:44.676 09:10:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:44.676 09:10:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:44.676 09:10:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:44.676 09:10:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:44.676 09:10:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:44.676 09:10:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:44.676 09:10:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:44.676 09:10:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:44.676 09:10:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:44.676 09:10:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:44.676 09:10:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:44.676 09:10:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:44.676 09:10:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:44.676 09:10:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:44.676 09:10:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:44.676 09:10:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:44.676 09:10:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:44.676 09:10:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:44.676 09:10:49 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.676 09:10:49 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.676 09:10:49 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.676 09:10:49 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:44.677 09:10:49 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.677 09:10:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:44.677 09:10:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:44.677 09:10:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:44.677 09:10:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:44.677 09:10:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:44.677 09:10:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:44.677 09:10:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:44.677 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:44.677 09:10:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:44.677 09:10:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:44.677 09:10:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:44.677 09:10:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:44.677 09:10:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:44.677 09:10:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:44.677 09:10:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:44.677 09:10:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:44.677 09:10:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:44.677 09:10:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:44.677 ************************************ 00:10:44.677 START TEST nvmf_example 00:10:44.677 ************************************ 00:10:44.677 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:44.677 * Looking for test storage... 
00:10:44.677 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:44.677 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:44.677 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:10:44.677 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:44.677 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:44.677 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:44.677 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:44.677 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:44.677 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:10:44.677 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:10:44.677 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:10:44.677 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:10:44.677 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:10:44.677 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:10:44.677 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:10:44.677 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:44.677 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:10:44.677 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:10:44.677 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:44.677 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:44.677 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:10:44.677 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:10:44.677 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:44.677 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:10:44.677 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:10:44.677 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:10:44.677 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:10:44.677 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:44.677 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:10:44.677 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:10:44.677 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:44.677 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:44.677 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:10:44.677 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:44.677 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:44.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.677 --rc genhtml_branch_coverage=1 00:10:44.677 --rc genhtml_function_coverage=1 00:10:44.677 --rc genhtml_legend=1 00:10:44.677 --rc geninfo_all_blocks=1 00:10:44.677 --rc geninfo_unexecuted_blocks=1 00:10:44.677 00:10:44.677 ' 00:10:44.677 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:44.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.677 --rc genhtml_branch_coverage=1 00:10:44.677 --rc genhtml_function_coverage=1 00:10:44.677 --rc genhtml_legend=1 00:10:44.677 --rc geninfo_all_blocks=1 00:10:44.677 --rc geninfo_unexecuted_blocks=1 00:10:44.677 00:10:44.677 ' 00:10:44.677 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:44.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.677 --rc genhtml_branch_coverage=1 00:10:44.677 --rc genhtml_function_coverage=1 00:10:44.677 --rc genhtml_legend=1 00:10:44.677 --rc geninfo_all_blocks=1 00:10:44.677 --rc geninfo_unexecuted_blocks=1 00:10:44.677 00:10:44.677 ' 00:10:44.677 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:44.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.677 --rc genhtml_branch_coverage=1 00:10:44.677 --rc genhtml_function_coverage=1 00:10:44.677 --rc genhtml_legend=1 00:10:44.677 --rc geninfo_all_blocks=1 00:10:44.677 --rc geninfo_unexecuted_blocks=1 00:10:44.677 00:10:44.677 ' 00:10:44.677 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:44.677 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:44.677 09:10:49 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:44.677 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:44.677 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:44.677 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:44.677 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:44.677 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:44.677 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:44.677 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:44.677 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:44.677 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:44.677 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:44.677 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:44.677 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:44.677 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:44.677 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:44.677 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:44.677 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:44.677 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:10:44.677 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:44.677 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:44.677 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:44.677 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.678 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.678 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.678 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:44.678 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.678 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:10:44.678 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:44.678 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:44.678 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:44.678 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:44.678 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:44.678 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:44.678 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:44.678 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:44.678 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:44.678 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:44.678 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:44.678 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:44.678 09:10:49 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:44.678 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:44.678 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:44.678 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:44.678 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:44.678 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:44.678 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:44.678 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:44.678 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:44.678 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:44.678 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:44.678 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:44.678 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:44.678 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:44.678 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:44.678 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:44.678 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:44.678 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:44.678 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:44.678 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:10:44.678 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:47.210 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:47.210 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:10:47.210 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:47.210 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:47.210 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:47.210 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:47.210 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:47.210 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:10:47.210 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:47.210 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:10:47.210 09:10:51 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:10:47.210 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:10:47.210 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:10:47.210 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:10:47.210 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:10:47.210 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:47.210 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:47.210 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:47.210 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:47.210 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:47.210 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:47.210 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:47.210 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:47.210 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:47.210 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:47.210 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:47.210 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:47.210 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:47.210 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:47.210 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:47.210 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:47.210 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:47.210 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:47.210 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:47.210 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:47.210 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:47.210 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:47.210 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:47.210 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:47.210 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:47.210 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:47.210 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:47.210 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:47.210 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:47.210 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:47.210 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:47.210 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:47.210 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:47.210 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:47.210 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:47.210 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:47.210 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:47.210 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:47.210 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:47.210 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:47.210 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:47.210 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:47.210 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:47.210 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:47.210 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:47.210 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:47.210 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:47.210 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:47.210 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:47.210 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:47.210 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:47.210 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:47.210 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:47.210 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:47.210 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:47.210 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:47.210 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:47.210 09:10:51 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:47.210 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:10:47.210 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:47.210 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:47.210 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:47.210 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:47.210 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:47.210 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:47.210 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:47.210 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:47.210 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:47.210 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:47.210 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:47.210 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:47.210 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:47.210 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:47.210 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:47.210 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:47.210 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:47.210 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:47.210 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:47.210 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:47.210 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:47.210 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:47.210 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:47.210 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:47.210 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:47.210 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:47.210 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:47.210 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:10:47.210 00:10:47.210 --- 10.0.0.2 ping statistics --- 00:10:47.211 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:47.211 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:10:47.211 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:47.211 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:47.211 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:10:47.211 00:10:47.211 --- 10.0.0.1 ping statistics --- 00:10:47.211 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:47.211 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:10:47.211 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:47.211 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:10:47.211 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:47.211 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:47.211 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:47.211 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:47.211 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:47.211 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:47.211 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:47.211 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:10:47.211 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:47.211 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:47.211 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:47.211 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:10:47.211 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:10:47.211 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2890330 00:10:47.211 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:47.211 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:47.211 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2890330 00:10:47.211 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 2890330 ']' 00:10:47.211 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:47.211 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:47.211 09:10:51 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:47.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:47.211 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:47.211 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:48.145 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:48.145 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:10:48.145 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:48.145 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:48.145 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:48.145 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:48.145 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.145 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:48.145 09:10:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.145 09:10:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:48.145 09:10:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.145 09:10:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:48.145 09:10:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.145 09:10:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:48.145 09:10:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:48.145 09:10:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.145 09:10:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:48.145 09:10:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.145 09:10:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:48.145 09:10:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:48.145 09:10:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.145 09:10:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:48.145 09:10:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.145 09:10:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:48.145 09:10:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:48.145 09:10:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:48.145 09:10:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.145 09:10:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:10:48.145 09:10:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:00.345 Initializing NVMe Controllers 00:11:00.345 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:00.345 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:00.345 Initialization complete. Launching workers. 00:11:00.345 ======================================================== 00:11:00.345 Latency(us) 00:11:00.345 Device Information : IOPS MiB/s Average min max 00:11:00.345 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11844.44 46.27 5402.96 1271.11 15785.84 00:11:00.345 ======================================================== 00:11:00.345 Total : 11844.44 46.27 5402.96 1271.11 15785.84 00:11:00.345 00:11:00.345 09:11:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:11:00.345 09:11:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:11:00.345 09:11:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:00.345 09:11:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:11:00.345 09:11:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:00.345 09:11:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:11:00.345 09:11:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:00.345 09:11:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:00.345 rmmod nvme_tcp 00:11:00.345 rmmod nvme_fabrics 00:11:00.345 rmmod nvme_keyring 00:11:00.345 09:11:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:00.345 09:11:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:11:00.345 09:11:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:11:00.345 09:11:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 2890330 ']' 00:11:00.345 09:11:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 2890330 00:11:00.345 09:11:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 2890330 ']' 00:11:00.345 09:11:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 2890330 00:11:00.345 09:11:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:11:00.345 09:11:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:00.345 09:11:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2890330 00:11:00.345 09:11:03 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:11:00.345 09:11:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:11:00.345 09:11:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2890330' 00:11:00.345 killing process with pid 2890330 00:11:00.345 09:11:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 2890330 00:11:00.345 09:11:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 2890330 00:11:00.345 nvmf threads initialize successfully 00:11:00.345 bdev subsystem init successfully 00:11:00.345 created a nvmf target service 00:11:00.345 create targets's poll groups done 00:11:00.345 all subsystems of target started 00:11:00.345 nvmf target is running 00:11:00.345 all subsystems of target stopped 00:11:00.345 destroy targets's poll groups done 00:11:00.345 destroyed the nvmf target service 00:11:00.345 bdev subsystem finish successfully 00:11:00.345 nvmf threads destroy successfully 00:11:00.345 09:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:00.345 09:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:00.345 09:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:00.345 09:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:11:00.345 09:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:11:00.345 09:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:00.345 09:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:11:00.345 09:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:00.345 09:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:00.345 09:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:00.345 09:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:00.345 09:11:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:02.253 09:11:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:02.253 09:11:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:02.253 09:11:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:02.253 09:11:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:02.253 00:11:02.253 real 0m17.360s 00:11:02.253 user 0m49.176s 00:11:02.253 sys 0m3.281s 00:11:02.253 09:11:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:02.253 09:11:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:02.253 ************************************ 00:11:02.253 END TEST nvmf_example 00:11:02.253 ************************************ 00:11:02.253 09:11:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:02.253 09:11:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:02.253 09:11:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:02.253 09:11:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:02.253 ************************************ 00:11:02.253 START TEST nvmf_filesystem 00:11:02.253 ************************************ 00:11:02.253 09:11:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:02.253 * Looking for test storage... 00:11:02.253 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:02.253 09:11:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:02.253 09:11:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:11:02.253 09:11:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:02.253 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:02.253 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:02.253 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:02.253 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:02.253 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:02.253 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:02.253 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:02.253 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:02.253 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:02.253 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:02.253 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:02.253 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:02.253 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:02.253 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:02.253 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:02.253 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:02.253 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:02.253 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:02.253 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:02.253 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:02.253 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:02.253 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:02.254 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:02.254 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:02.254 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:02.254 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:02.254 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:02.254 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:02.254 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:02.254 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:02.254 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:02.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.254 --rc genhtml_branch_coverage=1 00:11:02.254 --rc genhtml_function_coverage=1 00:11:02.254 --rc genhtml_legend=1 00:11:02.254 --rc geninfo_all_blocks=1 00:11:02.254 --rc geninfo_unexecuted_blocks=1 00:11:02.254 00:11:02.254 ' 00:11:02.254 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:02.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.254 --rc genhtml_branch_coverage=1 00:11:02.254 --rc genhtml_function_coverage=1 00:11:02.254 --rc genhtml_legend=1 00:11:02.254 --rc geninfo_all_blocks=1 00:11:02.254 --rc geninfo_unexecuted_blocks=1 00:11:02.254 00:11:02.254 ' 00:11:02.254 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:02.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.254 --rc genhtml_branch_coverage=1 00:11:02.254 --rc genhtml_function_coverage=1 00:11:02.254 --rc genhtml_legend=1 00:11:02.254 --rc geninfo_all_blocks=1 00:11:02.254 --rc geninfo_unexecuted_blocks=1 00:11:02.254 00:11:02.254 ' 00:11:02.254 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:02.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.254 --rc genhtml_branch_coverage=1 00:11:02.254 --rc genhtml_function_coverage=1 00:11:02.254 --rc genhtml_legend=1 00:11:02.254 --rc geninfo_all_blocks=1 00:11:02.254 --rc geninfo_unexecuted_blocks=1 00:11:02.254 00:11:02.254 ' 00:11:02.254 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:11:02.254 09:11:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:02.254 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:02.254 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:02.254 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:02.254 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:02.254 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:11:02.254 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:02.254 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:11:02.254 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:02.254 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:11:02.254 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:02.254 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:02.254 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:02.254 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:02.254 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:02.254 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:02.254 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:02.254 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:02.254 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:02.254 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:02.254 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:02.254 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:02.254 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:02.254 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:02.254 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:11:02.254 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:11:02.254 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:02.254 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:02.254 
09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:11:02.254 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:11:02.254 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:11:02.254 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:02.254 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:11:02.254 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:11:02.254 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:11:02.254 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:02.254 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:02.254 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:11:02.254 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:11:02.254 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:11:02.254 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:11:02.254 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:11:02.254 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:11:02.254 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:11:02.254 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:11:02.254 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:11:02.254 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:02.254 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:11:02.254 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:11:02.254 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:11:02.254 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:11:02.254 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:11:02.254 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:11:02.254 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:11:02.254 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:02.254 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:11:02.254 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:11:02.254 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 
-- # CONFIG_COVERAGE=y 00:11:02.254 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:11:02.254 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:02.254 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:11:02.254 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:02.254 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:11:02.254 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:11:02.254 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:11:02.254 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:11:02.254 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:11:02.254 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:11:02.254 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:11:02.254 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:11:02.254 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:11:02.254 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:02.254 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:11:02.254 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:11:02.254 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:11:02.255 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:11:02.255 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:11:02.255 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:11:02.255 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:11:02.255 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:11:02.255 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:11:02.255 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:11:02.255 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:02.255 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:11:02.255 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:11:02.255 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:11:02.255 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:11:02.255 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:11:02.255 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:11:02.255 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:11:02.255 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:11:02.255 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:11:02.255 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:11:02.255 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:11:02.255 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:02.255 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:11:02.255 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:11:02.255 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:11:02.255 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:02.255 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:02.255 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:02.255 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:02.255 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:02.255 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:02.255 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:02.255 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:02.255 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:02.255 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:02.255 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:02.255 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:02.255 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:02.255 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:02.255 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:11:02.255 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:02.255 #define SPDK_CONFIG_H 00:11:02.255 #define SPDK_CONFIG_AIO_FSDEV 1 00:11:02.255 #define SPDK_CONFIG_APPS 1 00:11:02.255 #define SPDK_CONFIG_ARCH native 00:11:02.255 #define SPDK_CONFIG_ASAN 1 00:11:02.255 #undef SPDK_CONFIG_AVAHI 00:11:02.255 #undef SPDK_CONFIG_CET 00:11:02.255 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:11:02.255 #define SPDK_CONFIG_COVERAGE 1 00:11:02.255 #define SPDK_CONFIG_CROSS_PREFIX 00:11:02.255 #undef SPDK_CONFIG_CRYPTO 00:11:02.255 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:02.255 #undef SPDK_CONFIG_CUSTOMOCF 00:11:02.255 #undef SPDK_CONFIG_DAOS 00:11:02.255 #define SPDK_CONFIG_DAOS_DIR 00:11:02.255 #define SPDK_CONFIG_DEBUG 1 00:11:02.255 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:02.255 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:02.255 #define SPDK_CONFIG_DPDK_INC_DIR 00:11:02.255 #define SPDK_CONFIG_DPDK_LIB_DIR 00:11:02.255 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:02.255 #undef SPDK_CONFIG_DPDK_UADK 00:11:02.255 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:02.255 #define SPDK_CONFIG_EXAMPLES 1 00:11:02.255 #undef SPDK_CONFIG_FC 00:11:02.255 #define SPDK_CONFIG_FC_PATH 00:11:02.255 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:02.255 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:02.255 #define SPDK_CONFIG_FSDEV 1 00:11:02.255 #undef SPDK_CONFIG_FUSE 00:11:02.255 #undef SPDK_CONFIG_FUZZER 00:11:02.255 #define SPDK_CONFIG_FUZZER_LIB 00:11:02.255 #undef SPDK_CONFIG_GOLANG 00:11:02.255 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:02.255 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:02.255 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:02.255 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:02.255 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:02.255 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:02.255 #undef SPDK_CONFIG_HAVE_LZ4 00:11:02.255 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:11:02.255 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:11:02.255 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:02.255 #define SPDK_CONFIG_IDXD 1 00:11:02.255 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:02.255 #undef SPDK_CONFIG_IPSEC_MB 00:11:02.255 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:02.255 #define SPDK_CONFIG_ISAL 1 00:11:02.255 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:02.255 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:02.255 #define SPDK_CONFIG_LIBDIR 00:11:02.255 #undef SPDK_CONFIG_LTO 00:11:02.255 #define SPDK_CONFIG_MAX_LCORES 128 00:11:02.255 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:11:02.255 #define SPDK_CONFIG_NVME_CUSE 1 00:11:02.255 #undef SPDK_CONFIG_OCF 00:11:02.255 #define SPDK_CONFIG_OCF_PATH 00:11:02.255 #define SPDK_CONFIG_OPENSSL_PATH 00:11:02.255 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:02.255 #define SPDK_CONFIG_PGO_DIR 00:11:02.255 #undef SPDK_CONFIG_PGO_USE 00:11:02.255 #define SPDK_CONFIG_PREFIX /usr/local 00:11:02.255 #undef SPDK_CONFIG_RAID5F 00:11:02.255 #undef SPDK_CONFIG_RBD 00:11:02.255 #define SPDK_CONFIG_RDMA 1 00:11:02.255 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:02.255 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:02.255 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:02.255 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:02.255 #define SPDK_CONFIG_SHARED 1 00:11:02.255 #undef SPDK_CONFIG_SMA 00:11:02.255 #define SPDK_CONFIG_TESTS 1 00:11:02.255 #undef SPDK_CONFIG_TSAN 
00:11:02.255 #define SPDK_CONFIG_UBLK 1 00:11:02.255 #define SPDK_CONFIG_UBSAN 1 00:11:02.255 #undef SPDK_CONFIG_UNIT_TESTS 00:11:02.255 #undef SPDK_CONFIG_URING 00:11:02.255 #define SPDK_CONFIG_URING_PATH 00:11:02.255 #undef SPDK_CONFIG_URING_ZNS 00:11:02.255 #undef SPDK_CONFIG_USDT 00:11:02.255 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:02.255 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:02.255 #undef SPDK_CONFIG_VFIO_USER 00:11:02.255 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:02.255 #define SPDK_CONFIG_VHOST 1 00:11:02.255 #define SPDK_CONFIG_VIRTIO 1 00:11:02.255 #undef SPDK_CONFIG_VTUNE 00:11:02.255 #define SPDK_CONFIG_VTUNE_DIR 00:11:02.255 #define SPDK_CONFIG_WERROR 1 00:11:02.255 #define SPDK_CONFIG_WPDK_DIR 00:11:02.255 #undef SPDK_CONFIG_XNVME 00:11:02.255 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:02.255 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:02.255 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:02.255 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:02.255 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:02.255 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:02.255 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:02.255 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.255 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:02.256 09:11:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:02.256 09:11:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:02.256 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:02.257 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:02.257 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:02.257 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:02.257 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:02.257 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:02.257 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:02.257 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:11:02.257 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@138 -- # : 0 00:11:02.257 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:02.257 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:11:02.257 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:02.257 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:11:02.257 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:02.257 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:02.257 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:02.257 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:02.257 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:02.257 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:02.257 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:02.257 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:02.257 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:02.257 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:02.257 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:02.257 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:02.257 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:02.257 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:02.257 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:02.257 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:02.257 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:02.257 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:02.257 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:02.257 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:02.257 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:02.257 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:02.257 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:02.257 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:02.257 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:02.257 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@169 -- # : 00:11:02.257 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:02.257 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:02.257 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:02.257 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:02.257 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:02.257 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:11:02.257 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:02.257 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:11:02.257 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:11:02.257 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:02.257 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:02.257 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:02.257 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:02.257 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:02.257 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:02.257 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:02.257 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:02.257 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:02.257 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:02.257 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:02.257 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:02.257 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:02.257 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:11:02.257 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:02.257 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:02.257 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
00:11:02.257 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:02.257 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:02.257 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:11:02.257 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:11:02.257 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:11:02.257 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:02.257 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:02.257 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:02.257 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:02.257 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:11:02.257 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:11:02.257 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:02.257 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:02.257 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:02.258 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:02.258 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:02.258 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:02.258 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:02.258 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:02.258 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:02.258 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:02.258 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
00:11:02.258 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:02.258 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:11:02.258 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:11:02.258 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:11:02.258 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:11:02.258 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:11:02.258 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:02.258 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:11:02.258 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:11:02.258 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:11:02.258 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:11:02.258 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:11:02.258 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:11:02.258 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:11:02.258 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:11:02.258 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:11:02.258 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:11:02.258 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:11:02.258 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j48 00:11:02.258 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:11:02.258 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:11:02.258 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:11:02.258 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:11:02.258 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:11:02.258 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:11:02.258 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:11:02.258 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 2892279 ]] 00:11:02.258 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 2892279 00:11:02.258 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 
00:11:02.258 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:11:02.258 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:11:02.258 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:11:02.258 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:11:02.258 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:11:02.258 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:11:02.258 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:11:02.258 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.4r2oxW 00:11:02.258 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:02.258 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:11:02.258 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:11:02.258 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.4r2oxW/tests/target /tmp/spdk.4r2oxW 00:11:02.258 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:11:02.258 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:02.258 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:11:02.258 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:11:02.258 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:11:02.258 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:11:02.258 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:11:02.258 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:11:02.258 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:11:02.258 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:02.258 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:11:02.258 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:11:02.258 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:11:02.258 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:11:02.258 09:11:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:11:02.258 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:02.258 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:11:02.258 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:11:02.258 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=55043518464 00:11:02.258 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=61988524032 00:11:02.258 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=6945005568 00:11:02.258 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:02.258 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:02.258 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:02.258 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=30982893568 00:11:02.258 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=30994259968 00:11:02.258 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=11366400 00:11:02.258 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:02.258 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:02.258 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:02.258 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=12375269376 00:11:02.259 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=12397707264 00:11:02.259 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=22437888 00:11:02.259 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:02.259 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:02.259 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:02.259 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=30993989632 00:11:02.259 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=30994264064 00:11:02.259 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=274432 00:11:02.259 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:02.259 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:02.259 09:11:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:02.259 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=6198837248 00:11:02.259 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=6198849536 00:11:02.259 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:11:02.259 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:02.259 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:11:02.259 * Looking for test storage... 00:11:02.259 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:11:02.259 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:11:02.259 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:02.259 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:02.259 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:11:02.259 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=55043518464 00:11:02.259 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:11:02.259 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:11:02.259 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:11:02.259 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:11:02.259 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:11:02.259 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=9159598080 00:11:02.259 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:02.259 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:02.259 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:02.259 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:02.259 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:02.259 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:11:02.259 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:11:02.259 09:11:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:11:02.259 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:02.259 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:02.259 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:11:02.259 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:11:02.259 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:02.259 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:02.259 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:02.259 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:02.259 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:02.259 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:02.259 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:02.259 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:02.259 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:02.259 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:11:02.259 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:02.259 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:02.259 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:02.259 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:02.259 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:02.259 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:02.259 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:02.259 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:02.259 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:02.259 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:02.259 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:02.259 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:02.259 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:02.259 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:02.259 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:02.259 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:11:02.259 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:02.259 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:02.259 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:02.259 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:02.259 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:02.259 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:02.259 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:02.259 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:02.259 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:02.259 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:02.259 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:02.259 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:02.259 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:02.259 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:02.259 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:02.259 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:02.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.259 --rc genhtml_branch_coverage=1 00:11:02.259 --rc genhtml_function_coverage=1 00:11:02.259 --rc genhtml_legend=1 00:11:02.259 --rc geninfo_all_blocks=1 00:11:02.259 --rc geninfo_unexecuted_blocks=1 00:11:02.259 00:11:02.259 ' 00:11:02.259 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:02.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.259 --rc genhtml_branch_coverage=1 00:11:02.259 --rc genhtml_function_coverage=1 00:11:02.259 --rc genhtml_legend=1 00:11:02.259 --rc geninfo_all_blocks=1 00:11:02.259 --rc geninfo_unexecuted_blocks=1 00:11:02.259 00:11:02.259 ' 00:11:02.259 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:02.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.259 --rc genhtml_branch_coverage=1 00:11:02.259 --rc genhtml_function_coverage=1 00:11:02.259 --rc genhtml_legend=1 00:11:02.259 --rc geninfo_all_blocks=1 00:11:02.259 --rc geninfo_unexecuted_blocks=1 00:11:02.259 00:11:02.259 ' 00:11:02.259 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:02.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.259 --rc genhtml_branch_coverage=1 00:11:02.259 --rc genhtml_function_coverage=1 00:11:02.259 --rc genhtml_legend=1 00:11:02.259 --rc geninfo_all_blocks=1 00:11:02.259 --rc geninfo_unexecuted_blocks=1 00:11:02.259 00:11:02.259 ' 00:11:02.259 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:02.259 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:11:02.259 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:02.260 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:02.260 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:02.260 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:02.260 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:02.260 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:02.260 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:02.260 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:02.260 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:02.260 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:02.260 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:02.260 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:02.260 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:02.260 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:02.260 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:02.260 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:02.260 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:02.260 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:02.260 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:02.260 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:02.260 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:02.260 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.260 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.260 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.260 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:02.260 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.260 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:11:02.260 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:02.260 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:02.260 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:02.260 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" 
-e 0xFFFF) 00:11:02.260 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:02.260 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:02.260 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:02.260 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:02.260 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:02.260 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:02.260 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:11:02.260 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:02.260 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:02.260 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:02.260 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:02.260 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:02.260 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:02.260 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:02.260 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:02.260 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:02.260 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:02.260 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:02.260 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:02.260 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:11:02.260 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:04.793 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:04.793 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:11:04.793 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:04.793 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:04.793 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:04.793 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:04.793 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:04.793 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:11:04.793 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:04.793 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:11:04.793 
09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:11:04.793 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:11:04.793 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:11:04.793 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:11:04.793 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:11:04.793 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:04.793 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:04.793 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:04.794 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:04.794 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:04.794 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:04.794 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:04.794 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:04.794 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:04.794 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:04.794 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:04.794 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:04.794 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:04.794 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:04.794 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:04.794 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:04.794 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:04.794 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:04.794 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:04.794 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:04.794 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:04.794 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:04.794 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:04.794 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:04.794 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:11:04.794 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:04.794 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:04.794 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:04.794 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:04.794 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:04.794 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:04.794 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:04.794 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:04.794 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:04.794 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:04.794 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:04.794 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:04.794 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:04.794 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:04.794 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:04.794 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:04.794 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:04.794 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:04.794 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:04.794 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:04.794 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:04.794 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:04.794 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:04.794 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:04.794 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:04.794 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:04.794 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:04.794 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:04.794 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:04.794 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:04.794 Found net devices under 
0000:0a:00.1: cvl_0_1 00:11:04.794 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:04.794 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:04.794 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:11:04.794 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:04.794 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:04.794 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:04.794 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:04.794 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:04.794 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:04.794 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:04.794 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:04.794 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:04.794 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:04.794 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:04.794 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:04.794 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:04.794 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:04.794 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:04.794 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:04.794 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:04.794 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:04.794 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:04.794 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:04.794 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:04.794 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:04.794 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:04.794 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:04.794 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:04.794 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:04.794 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:04.794 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:11:04.794 00:11:04.794 --- 10.0.0.2 ping statistics --- 00:11:04.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:04.794 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:11:04.794 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:04.794 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:04.794 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:11:04.794 00:11:04.794 --- 10.0.0.1 ping statistics --- 00:11:04.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:04.794 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:11:04.794 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:04.794 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:11:04.794 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:04.794 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:04.794 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:04.794 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:04.794 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:04.794 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:04.794 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:04.794 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:04.794 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:04.794 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:04.794 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:04.794 ************************************ 00:11:04.794 START TEST nvmf_filesystem_no_in_capsule 00:11:04.794 ************************************ 00:11:04.794 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:11:04.794 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:04.795 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:04.795 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:04.795 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:04.795 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 
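Stripped of the xtrace prefixes, the network bring-up that produced the two successful pings above is a short, reproducible sequence; the interface names (cvl_0_0/cvl_0_1), the namespace name, the 10.0.0.x addresses and the iptables rule are taken directly from the trace:

    # Target port goes into its own namespace; initiator port stays in the default one.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT          # allow NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                                    # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                      # target -> initiator

Keeping only the target-side e810 port in the namespace means initiator and target exchange NVMe/TCP over the NIC ports rather than short-circuiting through the host stack, which is the point of the phy (NET_TYPE=phy) configuration seen earlier in the trace.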
00:11:04.795 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2893926 00:11:04.795 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:04.795 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2893926 00:11:04.795 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 2893926 ']' 00:11:04.795 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:04.795 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:04.795 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:04.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:04.795 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:04.795 09:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:04.795 [2024-11-17 09:11:09.687468] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:11:04.795 [2024-11-17 09:11:09.687606] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:05.053 [2024-11-17 09:11:09.832911] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:05.053 [2024-11-17 09:11:09.968908] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:05.053 [2024-11-17 09:11:09.968993] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:05.053 [2024-11-17 09:11:09.969019] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:05.053 [2024-11-17 09:11:09.969043] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:05.053 [2024-11-17 09:11:09.969064] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
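The nvmfappstart/waitforlisten step above amounts to launching nvmf_tgt inside the target namespace and waiting until its RPC socket answers. The loop below is an illustrative stand-in for the harness helper, not its actual code; the binary path, core mask and socket path are the ones shown in the trace:

    # Start the SPDK NVMe-oF target in the target namespace, then poll its RPC socket.
    NS="ip netns exec cvl_0_0_ns_spdk"
    $NS /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || exit 1   # bail out if the target died during startup
        sleep 0.5
    done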
00:11:05.053 [2024-11-17 09:11:09.972135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:05.053 [2024-11-17 09:11:09.972208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:05.053 [2024-11-17 09:11:09.972306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:05.053 [2024-11-17 09:11:09.972311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:05.986 09:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:05.986 09:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:05.986 09:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:05.986 09:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:05.986 09:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:05.986 09:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:05.986 09:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:05.986 09:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:05.986 09:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.986 09:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:05.986 [2024-11-17 09:11:10.729912] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:05.986 09:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.986 09:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:05.986 09:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.986 09:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:06.552 Malloc1 00:11:06.552 09:11:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.552 09:11:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:06.552 09:11:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.552 09:11:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:06.552 09:11:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.552 09:11:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:06.552 09:11:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.552 09:11:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:06.552 09:11:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.552 09:11:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:06.552 09:11:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.552 09:11:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:06.552 [2024-11-17 09:11:11.327447] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:06.552 09:11:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.552 09:11:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:06.552 09:11:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:06.552 09:11:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:06.552 09:11:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:06.552 09:11:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:06.552 09:11:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:06.552 09:11:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.552 09:11:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:06.552 09:11:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.552 09:11:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:06.552 { 00:11:06.552 "name": "Malloc1", 00:11:06.552 "aliases": [ 00:11:06.552 "22a3e95d-8313-4ea6-8926-99e3eae2c011" 00:11:06.552 ], 00:11:06.552 "product_name": "Malloc disk", 00:11:06.552 "block_size": 512, 00:11:06.552 "num_blocks": 1048576, 00:11:06.552 "uuid": "22a3e95d-8313-4ea6-8926-99e3eae2c011", 00:11:06.552 "assigned_rate_limits": { 00:11:06.552 "rw_ios_per_sec": 0, 00:11:06.552 "rw_mbytes_per_sec": 0, 00:11:06.552 "r_mbytes_per_sec": 0, 00:11:06.552 "w_mbytes_per_sec": 0 00:11:06.552 }, 00:11:06.552 "claimed": true, 00:11:06.552 "claim_type": "exclusive_write", 00:11:06.552 "zoned": false, 00:11:06.552 "supported_io_types": { 00:11:06.552 "read": 
true, 00:11:06.552 "write": true, 00:11:06.552 "unmap": true, 00:11:06.552 "flush": true, 00:11:06.552 "reset": true, 00:11:06.552 "nvme_admin": false, 00:11:06.552 "nvme_io": false, 00:11:06.552 "nvme_io_md": false, 00:11:06.552 "write_zeroes": true, 00:11:06.552 "zcopy": true, 00:11:06.552 "get_zone_info": false, 00:11:06.552 "zone_management": false, 00:11:06.552 "zone_append": false, 00:11:06.552 "compare": false, 00:11:06.552 "compare_and_write": false, 00:11:06.552 "abort": true, 00:11:06.552 "seek_hole": false, 00:11:06.552 "seek_data": false, 00:11:06.552 "copy": true, 00:11:06.552 "nvme_iov_md": false 00:11:06.552 }, 00:11:06.552 "memory_domains": [ 00:11:06.552 { 00:11:06.552 "dma_device_id": "system", 00:11:06.552 "dma_device_type": 1 00:11:06.552 }, 00:11:06.552 { 00:11:06.552 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.552 "dma_device_type": 2 00:11:06.552 } 00:11:06.552 ], 00:11:06.552 "driver_specific": {} 00:11:06.552 } 00:11:06.552 ]' 00:11:06.552 09:11:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:06.552 09:11:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:06.552 09:11:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:06.552 09:11:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:06.552 09:11:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:06.552 09:11:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:06.552 09:11:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:06.552 09:11:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:07.118 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:07.118 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:07.118 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:07.118 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:07.118 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:09.645 09:11:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:09.645 09:11:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:09.645 09:11:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:11:09.645 09:11:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:09.645 09:11:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:09.645 09:11:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:09.646 09:11:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:09.646 09:11:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:09.646 09:11:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:09.646 09:11:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:09.646 09:11:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:09.646 09:11:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:09.646 09:11:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:09.646 09:11:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:09.646 09:11:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:09.646 09:11:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:09.646 09:11:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:09.646 09:11:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:09.904 09:11:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:10.837 09:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:10.837 09:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:10.837 09:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:10.837 09:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:10.837 09:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:10.837 ************************************ 00:11:10.837 START TEST filesystem_ext4 00:11:10.837 ************************************ 00:11:10.837 09:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 
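The rpc_cmd calls traced in this block (rpc_cmd is the harness wrapper around rpc.py) plus the host-side connect and partitioning map onto a plain command sequence. The NQNs, serial, hostnqn/hostid, addresses and sizes below are the values from the trace; only the inline comments are editorial:

    RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
    $RPC nvmf_create_transport -t tcp -o -u 8192 -c 0        # in-capsule data size 0: the "no_in_capsule" case
    $RPC bdev_malloc_create 512 512 -b Malloc1               # 512 MB malloc bdev, 512-byte blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
                 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 \
                 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
    partprobe                                                # each filesystem sub-test then mkfs+mounts the partition

The malloc bdev shows up on the initiator as /dev/nvme0n1 (serial SPDKISFASTANDAWESOME), which is what the lsblk/grep checks above verify before the ext4, btrfs and xfs sub-tests run against its single partition.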
00:11:10.837 09:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:10.837 09:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:10.837 09:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:10.837 09:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:10.837 09:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:10.837 09:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:10.837 09:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:10.837 09:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:10.837 09:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:10.837 09:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:10.837 mke2fs 1.47.0 (5-Feb-2023) 00:11:11.095 Discarding device blocks: 0/522240 done 00:11:11.095 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:11.095 Filesystem UUID: 79353446-7cec-4da3-8202-9c3b3956cb2f 00:11:11.095 Superblock backups stored on blocks: 00:11:11.095 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:11.095 00:11:11.095 Allocating group tables: 0/64 done 00:11:11.095 Writing inode tables: 0/64 done 00:11:11.095 Creating journal (8192 blocks): done 00:11:12.284 Writing superblocks and filesystem accounting information: 0/64 done 00:11:12.284 00:11:12.285 09:11:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:12.285 09:11:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:18.839 09:11:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:18.839 09:11:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:18.839 09:11:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:18.839 09:11:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:18.839 09:11:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:18.839 09:11:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:18.839 
09:11:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2893926 00:11:18.839 09:11:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:18.839 09:11:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:18.839 09:11:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:18.839 09:11:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:18.839 00:11:18.839 real 0m7.435s 00:11:18.839 user 0m0.017s 00:11:18.839 sys 0m0.072s 00:11:18.839 09:11:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:18.839 09:11:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:18.839 ************************************ 00:11:18.839 END TEST filesystem_ext4 00:11:18.839 ************************************ 00:11:18.839 09:11:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:18.839 09:11:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:18.839 09:11:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:18.839 09:11:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:18.839 ************************************ 00:11:18.839 START TEST filesystem_btrfs 00:11:18.839 ************************************ 00:11:18.839 09:11:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:18.839 09:11:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:18.839 09:11:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:18.839 09:11:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:18.839 09:11:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:18.839 09:11:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:18.839 09:11:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:18.839 09:11:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:18.839 09:11:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:18.839 09:11:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:18.839 09:11:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:18.839 btrfs-progs v6.8.1 00:11:18.839 See https://btrfs.readthedocs.io for more information. 00:11:18.839 00:11:18.839 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:18.839 NOTE: several default settings have changed in version 5.15, please make sure 00:11:18.839 this does not affect your deployments: 00:11:18.839 - DUP for metadata (-m dup) 00:11:18.839 - enabled no-holes (-O no-holes) 00:11:18.839 - enabled free-space-tree (-R free-space-tree) 00:11:18.839 00:11:18.839 Label: (null) 00:11:18.839 UUID: 27b78fa3-718e-4871-864d-0e5a02cb78b9 00:11:18.839 Node size: 16384 00:11:18.839 Sector size: 4096 (CPU page size: 4096) 00:11:18.839 Filesystem size: 510.00MiB 00:11:18.839 Block group profiles: 00:11:18.839 Data: single 8.00MiB 00:11:18.839 Metadata: DUP 32.00MiB 00:11:18.839 System: DUP 8.00MiB 00:11:18.839 SSD detected: yes 00:11:18.839 Zoned device: no 00:11:18.839 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:18.839 Checksum: crc32c 00:11:18.839 Number of devices: 1 00:11:18.839 Devices: 00:11:18.839 ID SIZE PATH 00:11:18.839 1 510.00MiB /dev/nvme0n1p1 00:11:18.839 00:11:18.839 09:11:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:18.839 09:11:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:18.839 09:11:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:18.839 09:11:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:18.839 09:11:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:18.839 09:11:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:18.839 09:11:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:18.839 09:11:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:18.839 09:11:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2893926 00:11:18.839 09:11:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:18.839 09:11:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:18.839 09:11:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:18.839 
09:11:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:18.839 00:11:18.839 real 0m0.422s 00:11:18.839 user 0m0.018s 00:11:18.839 sys 0m0.091s 00:11:18.839 09:11:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:18.839 09:11:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:18.839 ************************************ 00:11:18.839 END TEST filesystem_btrfs 00:11:18.839 ************************************ 00:11:18.839 09:11:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:18.839 09:11:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:18.839 09:11:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:18.839 09:11:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:18.839 ************************************ 00:11:18.839 START TEST filesystem_xfs 00:11:18.839 ************************************ 00:11:18.839 09:11:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:18.839 09:11:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:18.839 09:11:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:18.839 09:11:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:18.839 09:11:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:18.839 09:11:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:18.839 09:11:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:18.840 09:11:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:11:18.840 09:11:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:18.840 09:11:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:18.840 09:11:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:18.840 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:18.840 = sectsz=512 attr=2, projid32bit=1 00:11:18.840 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:18.840 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:18.840 data 
= bsize=4096 blocks=130560, imaxpct=25 00:11:18.840 = sunit=0 swidth=0 blks 00:11:18.840 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:18.840 log =internal log bsize=4096 blocks=16384, version=2 00:11:18.840 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:18.840 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:19.773 Discarding blocks...Done. 00:11:19.773 09:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:19.773 09:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:21.670 09:11:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:21.671 09:11:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:21.671 09:11:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:21.671 09:11:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:21.671 09:11:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:21.671 09:11:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:21.671 09:11:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2893926 00:11:21.671 09:11:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:21.671 09:11:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:21.671 09:11:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:21.671 09:11:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:21.671 00:11:21.671 real 0m2.663s 00:11:21.671 user 0m0.013s 00:11:21.671 sys 0m0.059s 00:11:21.671 09:11:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:21.671 09:11:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:21.671 ************************************ 00:11:21.671 END TEST filesystem_xfs 00:11:21.671 ************************************ 00:11:21.671 09:11:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:21.671 09:11:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:21.671 09:11:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:21.928 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:21.928 09:11:26 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:21.928 09:11:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:11:21.928 09:11:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:21.928 09:11:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:21.928 09:11:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:21.928 09:11:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:21.928 09:11:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:21.928 09:11:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:21.928 09:11:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.928 09:11:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:21.928 09:11:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.928 09:11:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:21.928 09:11:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2893926 00:11:21.928 09:11:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 2893926 ']' 00:11:21.928 09:11:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 2893926 00:11:21.928 09:11:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:21.928 09:11:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:21.928 09:11:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2893926 00:11:21.928 09:11:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:21.928 09:11:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:21.928 09:11:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2893926' 00:11:21.928 killing process with pid 2893926 00:11:21.929 09:11:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 2893926 00:11:21.929 09:11:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@978 -- # wait 2893926 00:11:24.534 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:24.534 00:11:24.534 real 0m19.686s 00:11:24.534 user 1m14.467s 00:11:24.534 sys 0m2.592s 00:11:24.534 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:24.534 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:24.534 ************************************ 00:11:24.534 END TEST nvmf_filesystem_no_in_capsule 00:11:24.534 ************************************ 00:11:24.534 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:24.534 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:24.534 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:24.534 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:24.534 ************************************ 00:11:24.534 START TEST nvmf_filesystem_in_capsule 00:11:24.534 ************************************ 00:11:24.534 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:11:24.534 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:24.534 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:24.534 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:24.534 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:24.534 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:24.534 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2896416 00:11:24.534 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:24.534 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2896416 00:11:24.534 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 2896416 ']' 00:11:24.534 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:24.534 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:24.534 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:24.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
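At this point the no-in-capsule variant has been torn down (initiator disconnected, subsystem deleted over RPC, target pid 2893926 killed and reaped) and a fresh nvmf_tgt (pid 2896416) is being started for the in-capsule run, whose transport is created with an in-capsule data size of 4096 in the nvmf_create_transport call that follows. The boundary between the two variants amounts to roughly the sequence below; rpc_cmd in the log wraps SPDK's scripts/rpc.py, and the relative paths here stand in for the full jenkins workspace paths shown above.

nvme disconnect -n nqn.2016-06.io.spdk:cnode1                    # drop the host session
scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1  # remove the subsystem on the target
kill 2893926                                                     # stop the first nvmf_tgt
wait 2893926                                                     # reap it (it was a child of the test shell)
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # second target instance
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096   # -c 4096: allow up to 4 KiB of in-capsule data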
00:11:24.534 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:24.534 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:24.534 [2024-11-17 09:11:29.434128] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:11:24.534 [2024-11-17 09:11:29.434277] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:24.793 [2024-11-17 09:11:29.590551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:24.793 [2024-11-17 09:11:29.734195] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:24.793 [2024-11-17 09:11:29.734292] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:24.793 [2024-11-17 09:11:29.734318] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:24.793 [2024-11-17 09:11:29.734344] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:24.793 [2024-11-17 09:11:29.734377] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:24.793 [2024-11-17 09:11:29.737283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:24.793 [2024-11-17 09:11:29.737363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:24.793 [2024-11-17 09:11:29.737468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:24.793 [2024-11-17 09:11:29.737472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:25.728 09:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:25.728 09:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:25.728 09:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:25.728 09:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:25.728 09:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:25.728 09:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:25.728 09:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:25.728 09:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:25.728 09:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.728 09:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:25.728 [2024-11-17 09:11:30.477987] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:25.728 09:11:30 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.728 09:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:25.728 09:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.728 09:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:26.296 Malloc1 00:11:26.296 09:11:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.296 09:11:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:26.296 09:11:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.296 09:11:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:26.296 09:11:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.296 09:11:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:26.296 09:11:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.296 09:11:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:26.296 09:11:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.296 09:11:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:26.296 09:11:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.296 09:11:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:26.296 [2024-11-17 09:11:31.075027] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:26.296 09:11:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.296 09:11:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:26.296 09:11:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:26.296 09:11:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:26.296 09:11:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:26.296 09:11:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:26.296 09:11:31 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:26.296 09:11:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.296 09:11:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:26.296 09:11:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.296 09:11:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:26.296 { 00:11:26.296 "name": "Malloc1", 00:11:26.296 "aliases": [ 00:11:26.296 "e9d7840b-1aa1-4bb6-ab79-92fadcd741a3" 00:11:26.296 ], 00:11:26.296 "product_name": "Malloc disk", 00:11:26.296 "block_size": 512, 00:11:26.296 "num_blocks": 1048576, 00:11:26.296 "uuid": "e9d7840b-1aa1-4bb6-ab79-92fadcd741a3", 00:11:26.296 "assigned_rate_limits": { 00:11:26.296 "rw_ios_per_sec": 0, 00:11:26.296 "rw_mbytes_per_sec": 0, 00:11:26.296 "r_mbytes_per_sec": 0, 00:11:26.296 "w_mbytes_per_sec": 0 00:11:26.296 }, 00:11:26.296 "claimed": true, 00:11:26.296 "claim_type": "exclusive_write", 00:11:26.296 "zoned": false, 00:11:26.296 "supported_io_types": { 00:11:26.296 "read": true, 00:11:26.296 "write": true, 00:11:26.296 "unmap": true, 00:11:26.296 "flush": true, 00:11:26.296 "reset": true, 00:11:26.296 "nvme_admin": false, 00:11:26.296 "nvme_io": false, 00:11:26.296 "nvme_io_md": false, 00:11:26.296 "write_zeroes": true, 00:11:26.296 "zcopy": true, 00:11:26.296 "get_zone_info": false, 00:11:26.296 "zone_management": false, 00:11:26.296 "zone_append": false, 00:11:26.296 "compare": false, 00:11:26.296 "compare_and_write": false, 00:11:26.296 "abort": true, 00:11:26.296 "seek_hole": false, 00:11:26.296 "seek_data": false, 00:11:26.296 "copy": true, 00:11:26.296 "nvme_iov_md": false 00:11:26.296 }, 00:11:26.296 "memory_domains": [ 00:11:26.296 { 00:11:26.296 "dma_device_id": "system", 00:11:26.296 "dma_device_type": 1 00:11:26.296 }, 00:11:26.296 { 00:11:26.296 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.296 "dma_device_type": 2 00:11:26.296 } 00:11:26.296 ], 00:11:26.296 "driver_specific": {} 00:11:26.296 } 00:11:26.296 ]' 00:11:26.296 09:11:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:26.296 09:11:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:26.296 09:11:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:26.296 09:11:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:26.296 09:11:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:26.296 09:11:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:26.296 09:11:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:26.296 09:11:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:26.863 09:11:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:26.863 09:11:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:26.863 09:11:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:26.863 09:11:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:26.863 09:11:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:29.391 09:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:29.391 09:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:29.391 09:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:29.391 09:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:29.391 09:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:29.391 09:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:29.391 09:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:29.391 09:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:29.391 09:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:29.391 09:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:29.391 09:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:29.391 09:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:29.391 09:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:29.391 09:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:29.391 09:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:29.391 09:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:29.391 09:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:29.391 09:11:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:29.649 09:11:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:31.022 09:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:31.022 09:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:31.022 09:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:31.022 09:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:31.022 09:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:31.022 ************************************ 00:11:31.022 START TEST filesystem_in_capsule_ext4 00:11:31.022 ************************************ 00:11:31.022 09:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:31.022 09:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:31.022 09:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:31.023 09:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:31.023 09:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:31.023 09:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:31.023 09:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:31.023 09:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:31.023 09:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:31.023 09:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:31.023 09:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:31.023 mke2fs 1.47.0 (5-Feb-2023) 00:11:31.023 Discarding device blocks: 0/522240 done 00:11:31.023 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:31.023 Filesystem UUID: 78743045-08a1-4c1c-86d3-0f4a86050966 00:11:31.023 Superblock backups stored on blocks: 00:11:31.023 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:31.023 00:11:31.023 Allocating group tables: 0/64 done 00:11:31.023 Writing inode tables: 
0/64 done 00:11:31.023 Creating journal (8192 blocks): done 00:11:31.023 Writing superblocks and filesystem accounting information: 0/64 done 00:11:31.023 00:11:31.023 09:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:31.023 09:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:37.578 09:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:37.578 09:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:37.578 09:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:37.578 09:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:37.578 09:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:37.578 09:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:37.578 09:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 2896416 00:11:37.578 09:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:37.578 09:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:37.578 09:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:37.578 09:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:37.578 00:11:37.578 real 0m6.076s 00:11:37.578 user 0m0.017s 00:11:37.578 sys 0m0.057s 00:11:37.578 09:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:37.578 09:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:37.578 ************************************ 00:11:37.578 END TEST filesystem_in_capsule_ext4 00:11:37.578 ************************************ 00:11:37.578 09:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:37.578 09:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:37.578 09:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:37.578 09:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:37.578 
************************************ 00:11:37.578 START TEST filesystem_in_capsule_btrfs 00:11:37.578 ************************************ 00:11:37.578 09:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:37.578 09:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:37.578 09:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:37.578 09:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:37.578 09:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:37.578 09:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:37.578 09:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:37.578 09:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:37.578 09:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:37.578 09:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:37.578 09:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:37.578 btrfs-progs v6.8.1 00:11:37.578 See https://btrfs.readthedocs.io for more information. 00:11:37.578 00:11:37.578 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:11:37.578 NOTE: several default settings have changed in version 5.15, please make sure 00:11:37.578 this does not affect your deployments: 00:11:37.578 - DUP for metadata (-m dup) 00:11:37.578 - enabled no-holes (-O no-holes) 00:11:37.578 - enabled free-space-tree (-R free-space-tree) 00:11:37.578 00:11:37.578 Label: (null) 00:11:37.578 UUID: a53db2cd-d205-4f5c-98d0-25294761bfd1 00:11:37.578 Node size: 16384 00:11:37.578 Sector size: 4096 (CPU page size: 4096) 00:11:37.578 Filesystem size: 510.00MiB 00:11:37.578 Block group profiles: 00:11:37.578 Data: single 8.00MiB 00:11:37.578 Metadata: DUP 32.00MiB 00:11:37.578 System: DUP 8.00MiB 00:11:37.578 SSD detected: yes 00:11:37.578 Zoned device: no 00:11:37.578 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:37.578 Checksum: crc32c 00:11:37.578 Number of devices: 1 00:11:37.578 Devices: 00:11:37.578 ID SIZE PATH 00:11:37.578 1 510.00MiB /dev/nvme0n1p1 00:11:37.578 00:11:37.578 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:37.578 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:37.578 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:37.578 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:37.579 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:37.579 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:37.579 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:37.579 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:37.579 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2896416 00:11:37.579 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:37.579 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:37.579 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:37.579 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:37.579 00:11:37.579 real 0m0.809s 00:11:37.579 user 0m0.016s 00:11:37.579 sys 0m0.108s 00:11:37.579 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:37.579 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:11:37.579 ************************************ 00:11:37.579 END TEST filesystem_in_capsule_btrfs 00:11:37.579 ************************************ 00:11:37.837 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:37.837 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:37.837 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:37.837 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:37.837 ************************************ 00:11:37.837 START TEST filesystem_in_capsule_xfs 00:11:37.837 ************************************ 00:11:37.837 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:37.837 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:37.837 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:37.837 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:37.837 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:37.837 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:37.837 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:37.837 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:11:37.837 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:37.837 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:37.837 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:37.837 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:37.837 = sectsz=512 attr=2, projid32bit=1 00:11:37.837 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:37.837 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:37.837 data = bsize=4096 blocks=130560, imaxpct=25 00:11:37.837 = sunit=0 swidth=0 blks 00:11:37.837 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:37.837 log =internal log bsize=4096 blocks=16384, version=2 00:11:37.837 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:37.837 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:38.771 Discarding blocks...Done. 
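Each filesystem subtest (ext4, btrfs and now xfs, in both the no-in-capsule and in-capsule variants) exercises the same sequence from target/filesystem.sh: make the filesystem on the partition, mount it, create and delete a file with syncs in between, unmount, then confirm that the target process survived the I/O and that both the namespace and its partition are still visible. Reconstructed from the xtrace above, the xfs round looks roughly like this; $nvmfpid is 2896416 in this run, and the rendering is a simplification, not the script itself.

mkfs.xfs -f /dev/nvme0n1p1
mount /dev/nvme0n1p1 /mnt/device
touch /mnt/device/aaa
sync
rm /mnt/device/aaa
sync
umount /mnt/device
kill -0 "$nvmfpid"                         # target must still be running after the I/O
lsblk -l -o NAME | grep -q -w nvme0n1      # namespace still present
lsblk -l -o NAME | grep -q -w nvme0n1p1    # partition still present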
00:11:38.771 09:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:38.771 09:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:41.301 09:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:41.301 09:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:41.301 09:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:41.301 09:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:41.301 09:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:41.301 09:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:41.301 09:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2896416 00:11:41.301 09:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:41.301 09:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:41.301 09:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:41.301 09:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:41.301 00:11:41.301 real 0m3.499s 00:11:41.301 user 0m0.016s 00:11:41.301 sys 0m0.061s 00:11:41.301 09:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:41.301 09:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:41.301 ************************************ 00:11:41.301 END TEST filesystem_in_capsule_xfs 00:11:41.301 ************************************ 00:11:41.301 09:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:41.301 09:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:41.301 09:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:41.560 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:41.560 09:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:41.560 09:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1223 -- # local i=0 00:11:41.560 09:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:41.560 09:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:41.560 09:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:41.560 09:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:41.560 09:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:41.560 09:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:41.560 09:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.560 09:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:41.560 09:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.560 09:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:41.560 09:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2896416 00:11:41.560 09:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 2896416 ']' 00:11:41.560 09:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 2896416 00:11:41.560 09:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:41.560 09:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:41.560 09:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2896416 00:11:41.560 09:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:41.560 09:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:41.560 09:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2896416' 00:11:41.560 killing process with pid 2896416 00:11:41.560 09:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 2896416 00:11:41.560 09:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 2896416 00:11:44.099 09:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:44.099 00:11:44.099 real 0m19.483s 00:11:44.099 user 1m13.715s 00:11:44.099 sys 0m2.541s 00:11:44.099 09:11:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:44.099 09:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:44.099 ************************************ 00:11:44.099 END TEST nvmf_filesystem_in_capsule 00:11:44.099 ************************************ 00:11:44.099 09:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:44.099 09:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:44.099 09:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:11:44.099 09:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:44.099 09:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:11:44.099 09:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:44.099 09:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:44.099 rmmod nvme_tcp 00:11:44.099 rmmod nvme_fabrics 00:11:44.099 rmmod nvme_keyring 00:11:44.099 09:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:44.099 09:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:11:44.099 09:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:11:44.099 09:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:11:44.099 09:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:44.099 09:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:44.099 09:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:44.099 09:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:11:44.099 09:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:11:44.099 09:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:44.099 09:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:11:44.099 09:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:44.099 09:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:44.099 09:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:44.099 09:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:44.099 09:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:46.023 09:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:46.023 00:11:46.023 real 0m44.067s 00:11:46.024 user 2m29.259s 00:11:46.024 sys 0m6.959s 00:11:46.024 09:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:46.024 09:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:46.024 
************************************ 00:11:46.024 END TEST nvmf_filesystem 00:11:46.024 ************************************ 00:11:46.024 09:11:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:46.024 09:11:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:46.024 09:11:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:46.024 09:11:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:46.024 ************************************ 00:11:46.024 START TEST nvmf_target_discovery 00:11:46.024 ************************************ 00:11:46.024 09:11:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:46.282 * Looking for test storage... 00:11:46.282 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:46.282 09:11:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:46.282 09:11:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:11:46.282 09:11:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:46.282 09:11:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:46.282 09:11:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:46.282 09:11:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:46.282 09:11:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:46.282 09:11:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:11:46.282 09:11:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:11:46.282 09:11:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:11:46.282 09:11:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:11:46.282 09:11:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:11:46.282 09:11:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:11:46.282 09:11:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:11:46.282 09:11:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:46.282 09:11:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:11:46.282 09:11:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:11:46.282 09:11:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:46.282 09:11:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:46.282 09:11:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:11:46.282 09:11:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:11:46.282 09:11:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:46.282 09:11:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:11:46.282 09:11:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:11:46.282 09:11:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:11:46.282 09:11:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:11:46.282 09:11:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:46.282 09:11:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:11:46.282 09:11:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:11:46.282 09:11:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:46.282 09:11:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:46.282 09:11:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:11:46.282 09:11:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:46.282 09:11:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:46.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.282 --rc genhtml_branch_coverage=1 00:11:46.282 --rc genhtml_function_coverage=1 00:11:46.282 --rc genhtml_legend=1 00:11:46.282 --rc geninfo_all_blocks=1 00:11:46.282 --rc geninfo_unexecuted_blocks=1 00:11:46.282 00:11:46.282 ' 00:11:46.282 09:11:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:46.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.282 --rc genhtml_branch_coverage=1 00:11:46.282 --rc genhtml_function_coverage=1 00:11:46.282 --rc genhtml_legend=1 00:11:46.282 --rc geninfo_all_blocks=1 00:11:46.282 --rc geninfo_unexecuted_blocks=1 00:11:46.282 00:11:46.282 ' 00:11:46.282 09:11:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:46.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.282 --rc genhtml_branch_coverage=1 00:11:46.282 --rc genhtml_function_coverage=1 00:11:46.282 --rc genhtml_legend=1 00:11:46.282 --rc geninfo_all_blocks=1 00:11:46.282 --rc geninfo_unexecuted_blocks=1 00:11:46.282 00:11:46.282 ' 00:11:46.282 09:11:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:46.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.282 --rc genhtml_branch_coverage=1 00:11:46.282 --rc genhtml_function_coverage=1 00:11:46.282 --rc genhtml_legend=1 00:11:46.282 --rc geninfo_all_blocks=1 00:11:46.282 --rc geninfo_unexecuted_blocks=1 00:11:46.282 00:11:46.282 ' 00:11:46.282 09:11:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:46.282 09:11:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:46.282 09:11:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:46.282 09:11:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:46.282 09:11:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:46.282 09:11:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:46.282 09:11:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:46.282 09:11:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:46.282 09:11:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:46.282 09:11:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:46.282 09:11:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:46.283 09:11:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:46.283 09:11:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:46.283 09:11:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:46.283 09:11:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:46.283 09:11:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:46.283 09:11:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:46.283 09:11:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:46.283 09:11:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:46.283 09:11:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:11:46.283 09:11:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:46.283 09:11:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:46.283 09:11:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:46.283 09:11:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.283 09:11:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.283 09:11:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.283 09:11:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:46.283 09:11:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.283 09:11:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:11:46.283 09:11:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:46.283 09:11:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:46.283 09:11:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:46.283 09:11:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:46.283 09:11:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:46.283 09:11:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:46.283 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:46.283 09:11:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:46.283 09:11:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:46.283 09:11:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:46.283 09:11:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:11:46.283 09:11:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:46.283 09:11:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:46.283 09:11:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:46.283 09:11:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:46.283 09:11:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:46.283 09:11:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:46.283 09:11:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:46.283 09:11:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:46.283 09:11:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:46.283 09:11:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:46.283 09:11:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:46.283 09:11:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:46.283 09:11:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:46.283 09:11:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:46.283 09:11:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:11:46.283 09:11:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:48.183 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:48.183 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:11:48.183 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:48.183 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:48.183 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:48.183 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:48.183 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:48.183 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:11:48.183 09:11:53 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:48.183 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:11:48.183 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:11:48.183 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:11:48.183 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:11:48.183 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:11:48.183 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:11:48.183 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:48.183 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:48.183 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:48.183 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:48.183 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:48.183 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:48.183 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:48.183 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:48.183 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:48.183 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:48.183 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:48.183 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:48.183 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:48.183 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:48.183 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:48.183 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:48.183 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:48.183 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:48.183 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:48.183 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:48.183 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:48.183 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:48.183 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:48.184 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:48.184 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:48.184 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:48.184 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:48.184 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:48.184 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:48.184 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:48.184 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:48.184 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:48.184 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:48.184 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:48.184 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:48.184 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:48.184 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:48.184 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:48.184 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:48.184 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:48.184 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:48.184 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:48.184 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:48.184 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:48.184 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:48.184 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:48.184 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:48.184 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:48.184 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:48.184 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:48.184 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
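The probe loop in nvmf/common.sh above resolves each supported PCI NIC to its kernel net device by listing /sys/bus/pci/devices/<bdf>/net. A rough standalone equivalent for the E810 ports found here is sketched below; the 8086:159b vendor/device ID comes from the trace, while the lspci invocation and output handling are illustrative only.

    # list Intel E810 (8086:159b) ports and the net devices bound to them
    for bdf in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
        for dev in /sys/bus/pci/devices/"$bdf"/net/*; do
            [ -e "$dev" ] && echo "$bdf -> $(basename "$dev")"
        done
    done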
00:11:48.184 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:48.184 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:48.184 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:48.184 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:48.184 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:48.184 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:48.184 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:48.184 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:11:48.184 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:48.184 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:48.184 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:48.184 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:48.184 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:48.184 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:48.184 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:48.184 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:48.184 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:48.184 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:48.184 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:48.184 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:48.184 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:48.184 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:48.184 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:48.184 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:48.184 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:48.184 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:48.443 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:48.443 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:48.443 09:11:53 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:48.443 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:48.443 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:48.443 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:48.443 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:48.443 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:48.443 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:48.443 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.148 ms 00:11:48.443 00:11:48.443 --- 10.0.0.2 ping statistics --- 00:11:48.443 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:48.443 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:11:48.443 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:48.443 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:48.443 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:11:48.443 00:11:48.443 --- 10.0.0.1 ping statistics --- 00:11:48.443 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:48.443 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:11:48.443 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:48.443 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:11:48.443 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:48.443 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:48.443 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:48.443 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:48.443 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:48.443 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:48.443 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:48.443 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:48.443 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:48.443 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:48.443 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:48.443 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=2900865 00:11:48.443 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 2900865 00:11:48.443 09:11:53 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:48.443 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 2900865 ']' 00:11:48.443 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:48.443 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:48.443 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:48.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:48.443 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:48.443 09:11:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:48.701 [2024-11-17 09:11:53.541289] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:11:48.701 [2024-11-17 09:11:53.541478] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:48.701 [2024-11-17 09:11:53.689959] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:48.959 [2024-11-17 09:11:53.836539] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:48.959 [2024-11-17 09:11:53.836633] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:48.959 [2024-11-17 09:11:53.836659] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:48.959 [2024-11-17 09:11:53.836683] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:48.959 [2024-11-17 09:11:53.836703] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
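Before the discovery test proper, the nvmftestinit/nvmfappstart sequence traced above isolates one E810 port in a network namespace and starts the SPDK target inside it. Condensed into plain commands, as a sketch assembled from the trace: the 10.0.0.x addresses, cvl_0_* interface names, and nvmf_tgt arguments are the ones logged, and the binary path is assumed relative to the SPDK build tree.

    ip netns add cvl_0_0_ns_spdk                         # namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # move the target-facing port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator port stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                   # verify the target address answers
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # 4-core target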
00:11:48.959 [2024-11-17 09:11:53.839604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:48.959 [2024-11-17 09:11:53.839665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:48.959 [2024-11-17 09:11:53.839717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:48.959 [2024-11-17 09:11:53.839724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:49.525 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:49.525 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:11:49.525 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:49.525 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:49.525 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:49.525 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:49.525 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:49.525 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.525 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:49.525 [2024-11-17 09:11:54.495719] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:49.525 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.525 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:11:49.525 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:49.525 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:49.525 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.525 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:49.525 Null1 00:11:49.525 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.526 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:49.526 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.526 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:49.784 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.784 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:49.784 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.784 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:49.784 09:11:54 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.784 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:49.784 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.784 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:49.784 [2024-11-17 09:11:54.551496] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:49.784 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.784 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:49.784 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:49.784 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.784 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:49.784 Null2 00:11:49.784 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.784 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:49.785 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.785 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:49.785 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.785 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:49.785 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.785 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:49.785 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.785 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:49.785 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.785 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:49.785 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.785 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:49.785 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:49.785 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.785 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:11:49.785 Null3 00:11:49.785 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.785 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:11:49.785 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.785 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:49.785 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.785 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:11:49.785 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.785 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:49.785 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.785 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:11:49.785 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.785 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:49.785 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.785 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:49.785 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:49.785 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.785 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:49.785 Null4 00:11:49.785 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.785 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:49.785 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.785 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:49.785 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.785 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:49.785 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.785 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:49.785 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.785 09:11:54 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:11:49.785 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.785 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:49.785 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.785 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:49.785 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.785 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:49.785 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.785 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:11:49.785 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.785 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:49.785 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.785 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:11:50.044 00:11:50.044 Discovery Log Number of Records 6, Generation counter 6 00:11:50.044 =====Discovery Log Entry 0====== 00:11:50.044 trtype: tcp 00:11:50.044 adrfam: ipv4 00:11:50.044 subtype: current discovery subsystem 00:11:50.044 treq: not required 00:11:50.044 portid: 0 00:11:50.044 trsvcid: 4420 00:11:50.044 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:50.044 traddr: 10.0.0.2 00:11:50.044 eflags: explicit discovery connections, duplicate discovery information 00:11:50.044 sectype: none 00:11:50.044 =====Discovery Log Entry 1====== 00:11:50.044 trtype: tcp 00:11:50.044 adrfam: ipv4 00:11:50.044 subtype: nvme subsystem 00:11:50.044 treq: not required 00:11:50.044 portid: 0 00:11:50.044 trsvcid: 4420 00:11:50.044 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:50.044 traddr: 10.0.0.2 00:11:50.044 eflags: none 00:11:50.044 sectype: none 00:11:50.044 =====Discovery Log Entry 2====== 00:11:50.044 trtype: tcp 00:11:50.044 adrfam: ipv4 00:11:50.044 subtype: nvme subsystem 00:11:50.044 treq: not required 00:11:50.044 portid: 0 00:11:50.044 trsvcid: 4420 00:11:50.044 subnqn: nqn.2016-06.io.spdk:cnode2 00:11:50.044 traddr: 10.0.0.2 00:11:50.044 eflags: none 00:11:50.044 sectype: none 00:11:50.044 =====Discovery Log Entry 3====== 00:11:50.044 trtype: tcp 00:11:50.044 adrfam: ipv4 00:11:50.044 subtype: nvme subsystem 00:11:50.044 treq: not required 00:11:50.044 portid: 0 00:11:50.044 trsvcid: 4420 00:11:50.044 subnqn: nqn.2016-06.io.spdk:cnode3 00:11:50.044 traddr: 10.0.0.2 00:11:50.044 eflags: none 00:11:50.044 sectype: none 00:11:50.044 =====Discovery Log Entry 4====== 00:11:50.044 trtype: tcp 00:11:50.044 adrfam: ipv4 00:11:50.044 subtype: nvme subsystem 
00:11:50.044 treq: not required 00:11:50.044 portid: 0 00:11:50.044 trsvcid: 4420 00:11:50.044 subnqn: nqn.2016-06.io.spdk:cnode4 00:11:50.044 traddr: 10.0.0.2 00:11:50.044 eflags: none 00:11:50.044 sectype: none 00:11:50.044 =====Discovery Log Entry 5====== 00:11:50.044 trtype: tcp 00:11:50.044 adrfam: ipv4 00:11:50.044 subtype: discovery subsystem referral 00:11:50.044 treq: not required 00:11:50.044 portid: 0 00:11:50.044 trsvcid: 4430 00:11:50.044 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:50.044 traddr: 10.0.0.2 00:11:50.044 eflags: none 00:11:50.044 sectype: none 00:11:50.044 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:11:50.044 Perform nvmf subsystem discovery via RPC 00:11:50.044 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:11:50.044 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.044 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:50.044 [ 00:11:50.044 { 00:11:50.044 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:50.044 "subtype": "Discovery", 00:11:50.044 "listen_addresses": [ 00:11:50.044 { 00:11:50.044 "trtype": "TCP", 00:11:50.044 "adrfam": "IPv4", 00:11:50.044 "traddr": "10.0.0.2", 00:11:50.044 "trsvcid": "4420" 00:11:50.044 } 00:11:50.044 ], 00:11:50.044 "allow_any_host": true, 00:11:50.044 "hosts": [] 00:11:50.044 }, 00:11:50.044 { 00:11:50.044 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:50.044 "subtype": "NVMe", 00:11:50.044 "listen_addresses": [ 00:11:50.044 { 00:11:50.044 "trtype": "TCP", 00:11:50.044 "adrfam": "IPv4", 00:11:50.044 "traddr": "10.0.0.2", 00:11:50.044 "trsvcid": "4420" 00:11:50.044 } 00:11:50.044 ], 00:11:50.044 "allow_any_host": true, 00:11:50.044 "hosts": [], 00:11:50.044 "serial_number": "SPDK00000000000001", 00:11:50.044 "model_number": "SPDK bdev Controller", 00:11:50.044 "max_namespaces": 32, 00:11:50.044 "min_cntlid": 1, 00:11:50.044 "max_cntlid": 65519, 00:11:50.044 "namespaces": [ 00:11:50.044 { 00:11:50.044 "nsid": 1, 00:11:50.044 "bdev_name": "Null1", 00:11:50.044 "name": "Null1", 00:11:50.044 "nguid": "E4C21DF1B9824235A31030EF8B6BB676", 00:11:50.044 "uuid": "e4c21df1-b982-4235-a310-30ef8b6bb676" 00:11:50.044 } 00:11:50.044 ] 00:11:50.044 }, 00:11:50.044 { 00:11:50.044 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:50.044 "subtype": "NVMe", 00:11:50.044 "listen_addresses": [ 00:11:50.044 { 00:11:50.044 "trtype": "TCP", 00:11:50.044 "adrfam": "IPv4", 00:11:50.044 "traddr": "10.0.0.2", 00:11:50.044 "trsvcid": "4420" 00:11:50.044 } 00:11:50.044 ], 00:11:50.044 "allow_any_host": true, 00:11:50.044 "hosts": [], 00:11:50.044 "serial_number": "SPDK00000000000002", 00:11:50.044 "model_number": "SPDK bdev Controller", 00:11:50.044 "max_namespaces": 32, 00:11:50.044 "min_cntlid": 1, 00:11:50.044 "max_cntlid": 65519, 00:11:50.044 "namespaces": [ 00:11:50.044 { 00:11:50.044 "nsid": 1, 00:11:50.044 "bdev_name": "Null2", 00:11:50.044 "name": "Null2", 00:11:50.044 "nguid": "4C7C0D95FCDE4D07BF86E4C547AEDCDE", 00:11:50.044 "uuid": "4c7c0d95-fcde-4d07-bf86-e4c547aedcde" 00:11:50.044 } 00:11:50.044 ] 00:11:50.044 }, 00:11:50.044 { 00:11:50.044 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:11:50.044 "subtype": "NVMe", 00:11:50.044 "listen_addresses": [ 00:11:50.044 { 00:11:50.044 "trtype": "TCP", 00:11:50.044 "adrfam": "IPv4", 00:11:50.044 "traddr": "10.0.0.2", 
00:11:50.044 "trsvcid": "4420" 00:11:50.044 } 00:11:50.044 ], 00:11:50.044 "allow_any_host": true, 00:11:50.044 "hosts": [], 00:11:50.044 "serial_number": "SPDK00000000000003", 00:11:50.044 "model_number": "SPDK bdev Controller", 00:11:50.044 "max_namespaces": 32, 00:11:50.044 "min_cntlid": 1, 00:11:50.044 "max_cntlid": 65519, 00:11:50.044 "namespaces": [ 00:11:50.044 { 00:11:50.044 "nsid": 1, 00:11:50.044 "bdev_name": "Null3", 00:11:50.044 "name": "Null3", 00:11:50.044 "nguid": "96996E6DFEBD4A3E8F0D92F21EB0FC47", 00:11:50.044 "uuid": "96996e6d-febd-4a3e-8f0d-92f21eb0fc47" 00:11:50.044 } 00:11:50.044 ] 00:11:50.044 }, 00:11:50.044 { 00:11:50.044 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:11:50.044 "subtype": "NVMe", 00:11:50.044 "listen_addresses": [ 00:11:50.044 { 00:11:50.044 "trtype": "TCP", 00:11:50.044 "adrfam": "IPv4", 00:11:50.044 "traddr": "10.0.0.2", 00:11:50.044 "trsvcid": "4420" 00:11:50.044 } 00:11:50.044 ], 00:11:50.044 "allow_any_host": true, 00:11:50.044 "hosts": [], 00:11:50.044 "serial_number": "SPDK00000000000004", 00:11:50.044 "model_number": "SPDK bdev Controller", 00:11:50.044 "max_namespaces": 32, 00:11:50.044 "min_cntlid": 1, 00:11:50.044 "max_cntlid": 65519, 00:11:50.044 "namespaces": [ 00:11:50.044 { 00:11:50.044 "nsid": 1, 00:11:50.044 "bdev_name": "Null4", 00:11:50.044 "name": "Null4", 00:11:50.044 "nguid": "EE3B45C75DC24A2796B7D751A8D6D330", 00:11:50.044 "uuid": "ee3b45c7-5dc2-4a27-96b7-d751a8d6d330" 00:11:50.044 } 00:11:50.044 ] 00:11:50.044 } 00:11:50.044 ] 00:11:50.044 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.044 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:11:50.044 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:50.045 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:50.045 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.045 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:50.045 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.045 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:11:50.045 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.045 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:50.045 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.045 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:50.045 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:50.045 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.045 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:50.045 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.045 09:11:54 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:11:50.045 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.045 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:50.045 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.045 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:50.045 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:50.045 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.045 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:50.045 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.045 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:50.045 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.045 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:50.045 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.045 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:50.045 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:50.045 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.045 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:50.045 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.045 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:50.045 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.045 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:50.045 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.045 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:11:50.045 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.045 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:50.045 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.045 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:50.045 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.045 09:11:54 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:50.045 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:50.045 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.045 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:11:50.045 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:11:50.045 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:50.045 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:11:50.045 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:50.045 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:11:50.045 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:50.045 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:11:50.045 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:50.045 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:50.045 rmmod nvme_tcp 00:11:50.045 rmmod nvme_fabrics 00:11:50.045 rmmod nvme_keyring 00:11:50.045 09:11:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:50.045 09:11:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:11:50.045 09:11:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:11:50.045 09:11:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 2900865 ']' 00:11:50.045 09:11:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 2900865 00:11:50.045 09:11:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 2900865 ']' 00:11:50.045 09:11:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 2900865 00:11:50.045 09:11:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:11:50.045 09:11:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:50.045 09:11:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2900865 00:11:50.303 09:11:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:50.303 09:11:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:50.303 09:11:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2900865' 00:11:50.303 killing process with pid 2900865 00:11:50.303 09:11:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 2900865 00:11:50.303 09:11:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 2900865 00:11:51.237 09:11:56 
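With the bdev list coming back empty, the test clears its exit traps and calls nvmftestfini, which is what produces the sync, module-unload and killprocess lines above. Roughly, and with the pid taken verbatim from this run, that cleanup amounts to:

#!/usr/bin/env bash
# Rough sketch of the nvmftestfini cleanup traced above; in the real helper the
# pid comes from the variable recorded when nvmf_tgt was started.
set -u

sync
modprobe -v -r nvme-tcp        # also pulls out nvme_fabrics/nvme_keyring, as logged
modprobe -v -r nvme-fabrics

NVMF_PID=2900865               # pid from this particular run (placeholder)
if kill -0 "$NVMF_PID" 2>/dev/null; then
    kill "$NVMF_PID"
    # Wait for the reactor to exit, mirroring the kill/wait pair in the trace.
    while kill -0 "$NVMF_PID" 2>/dev/null; do sleep 0.5; done
fi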
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:51.237 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:51.237 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:51.237 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:11:51.237 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:11:51.237 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:51.237 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:11:51.237 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:51.237 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:51.237 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:51.237 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:51.237 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:53.781 09:11:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:53.781 00:11:53.781 real 0m7.234s 00:11:53.781 user 0m9.303s 00:11:53.781 sys 0m2.086s 00:11:53.781 09:11:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:53.781 09:11:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:53.781 ************************************ 00:11:53.781 END TEST nvmf_target_discovery 00:11:53.781 ************************************ 00:11:53.781 09:11:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:53.781 09:11:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:53.781 09:11:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:53.781 09:11:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:53.781 ************************************ 00:11:53.781 START TEST nvmf_referrals 00:11:53.781 ************************************ 00:11:53.781 09:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:53.781 * Looking for test storage... 
00:11:53.781 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:53.781 09:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:53.781 09:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:11:53.781 09:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:53.781 09:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:53.781 09:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:53.781 09:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:53.781 09:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:53.781 09:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:11:53.781 09:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:11:53.781 09:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:11:53.781 09:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:11:53.781 09:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:11:53.781 09:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:11:53.781 09:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:11:53.781 09:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:53.781 09:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:11:53.781 09:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:11:53.781 09:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:53.781 09:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:53.781 09:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:11:53.781 09:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:11:53.781 09:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:53.781 09:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:11:53.781 09:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:11:53.781 09:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:11:53.781 09:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:11:53.781 09:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:53.781 09:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:11:53.781 09:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:11:53.781 09:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:53.781 09:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:53.781 09:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:11:53.781 09:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:53.781 09:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:53.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:53.781 --rc genhtml_branch_coverage=1 00:11:53.781 --rc genhtml_function_coverage=1 00:11:53.781 --rc genhtml_legend=1 00:11:53.781 --rc geninfo_all_blocks=1 00:11:53.781 --rc geninfo_unexecuted_blocks=1 00:11:53.781 00:11:53.781 ' 00:11:53.781 09:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:53.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:53.781 --rc genhtml_branch_coverage=1 00:11:53.781 --rc genhtml_function_coverage=1 00:11:53.781 --rc genhtml_legend=1 00:11:53.781 --rc geninfo_all_blocks=1 00:11:53.781 --rc geninfo_unexecuted_blocks=1 00:11:53.781 00:11:53.781 ' 00:11:53.781 09:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:53.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:53.781 --rc genhtml_branch_coverage=1 00:11:53.781 --rc genhtml_function_coverage=1 00:11:53.781 --rc genhtml_legend=1 00:11:53.781 --rc geninfo_all_blocks=1 00:11:53.781 --rc geninfo_unexecuted_blocks=1 00:11:53.781 00:11:53.781 ' 00:11:53.781 09:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:53.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:53.781 --rc genhtml_branch_coverage=1 00:11:53.781 --rc genhtml_function_coverage=1 00:11:53.781 --rc genhtml_legend=1 00:11:53.781 --rc geninfo_all_blocks=1 00:11:53.782 --rc geninfo_unexecuted_blocks=1 00:11:53.782 00:11:53.782 ' 00:11:53.782 09:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:53.782 09:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:11:53.782 09:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:53.782 09:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:53.782 09:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:53.782 09:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:53.782 09:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:53.782 09:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:53.782 09:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:53.782 09:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:53.782 09:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:53.782 09:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:53.782 09:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:53.782 09:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:53.782 09:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:53.782 09:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:53.782 09:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:53.782 09:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:53.782 09:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:53.782 09:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:11:53.782 09:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:53.782 09:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:53.782 09:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:53.782 09:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.782 09:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.782 09:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.782 09:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:11:53.782 09:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.782 09:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:11:53.782 09:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:53.782 09:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:53.782 09:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:53.782 09:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:53.782 09:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:53.782 09:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:53.782 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:53.782 09:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:53.782 09:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:53.782 09:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:53.782 09:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:53.782 09:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
00:11:53.782 09:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:53.782 09:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:53.782 09:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:53.782 09:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:53.782 09:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:11:53.782 09:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:53.782 09:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:53.782 09:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:53.782 09:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:53.782 09:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:53.782 09:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:53.782 09:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:53.782 09:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:53.782 09:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:53.782 09:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:53.782 09:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:11:53.782 09:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:55.685 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:55.685 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:11:55.685 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:55.685 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:55.685 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:55.685 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:55.685 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:55.685 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:11:55.685 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:55.685 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:11:55.685 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:11:55.685 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:11:55.685 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:11:55.685 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:11:55.685 09:12:00 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:11:55.685 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:55.685 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:55.685 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:55.685 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:55.685 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:55.685 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:55.685 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:55.685 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:55.685 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:55.685 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:55.685 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:55.685 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:55.685 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:55.685 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:55.685 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:55.685 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:55.685 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:55.685 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:55.685 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:55.685 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:55.686 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:55.686 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:55.686 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:55.686 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:55.686 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:55.686 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:55.686 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:55.686 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:55.686 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:55.686 
09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:55.686 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:55.686 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:55.686 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:55.686 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:55.686 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:55.686 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:55.686 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:55.686 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:55.686 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:55.686 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:55.686 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:55.686 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:55.686 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:55.686 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:55.686 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:55.686 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:55.686 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:55.686 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:55.686 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:55.686 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:55.686 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:55.686 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:55.686 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:55.686 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:55.686 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:55.686 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:55.686 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:55.686 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:55.686 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:11:55.686 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:55.686 09:12:00 
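The block above is the NIC auto-detection: the helper builds PCI ID lists for Intel E810/X722 and Mellanox parts, matches the two E810 ports of this machine (0000:0a:00.0 and 0000:0a:00.1, device 0x159b), and resolves each one to its kernel interface through sysfs, which is where cvl_0_0 and cvl_0_1 come from. A minimal sketch of that PCI-to-netdev lookup, assuming the same two addresses, is:

#!/usr/bin/env bash
# Minimal sketch of the sysfs lookup used above to map PCI functions to netdevs.
# The two addresses are the E810 ports found in this run (assumed).
for pci in 0000:0a:00.0 0000:0a:00.1; do
    for dev in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$dev" ] || continue             # skip if the port has no bound netdev
        echo "Found net devices under $pci: $(basename "$dev")"
    done
done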
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:55.686 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:55.686 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:55.686 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:55.686 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:55.686 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:55.686 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:55.686 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:55.686 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:55.686 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:55.686 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:55.686 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:55.686 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:55.686 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:55.686 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:55.686 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:55.686 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:55.686 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:55.686 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:55.686 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:55.686 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:55.686 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:55.686 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:55.686 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:55.686 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:55.686 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:55.686 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.134 ms 00:11:55.686 00:11:55.686 --- 10.0.0.2 ping statistics --- 00:11:55.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:55.686 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:11:55.686 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:55.945 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:55.945 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:11:55.945 00:11:55.945 --- 10.0.0.1 ping statistics --- 00:11:55.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:55.945 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:11:55.945 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:55.945 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:11:55.945 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:55.945 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:55.945 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:55.945 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:55.945 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:55.945 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:55.945 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:55.945 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:11:55.945 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:55.945 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:55.945 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:55.945 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=2903240 00:11:55.945 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:55.945 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 2903240 00:11:55.945 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 2903240 ']' 00:11:55.945 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:55.945 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:55.945 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:55.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
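Everything from nvmf_tcp_init up to the nvmf_tgt launch above builds the test topology: the first E810 port (cvl_0_0) is moved into a fresh cvl_0_0_ns_spdk network namespace and addressed as 10.0.0.2/24, the second port (cvl_0_1) stays in the root namespace as the initiator side with 10.0.0.1/24, an iptables rule admits TCP traffic to port 4420, both directions are ping-verified, and only then is the target started inside the namespace. Condensed, and assuming the interface names and build path of this run, the sequence is roughly:

#!/usr/bin/env bash
# Condensed sketch of the namespace topology set up by nvmf_tcp_init above.
# Interface names, addresses and the nvmf_tgt path are the ones from this run.
set -euo pipefail

NS=cvl_0_0_ns_spdk

ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                       # target-side port into the netns
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up

# Admit NVMe/TCP traffic towards the default port on the initiator interface.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Verify reachability in both directions, then start the target in the namespace.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # waitforlisten follows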
00:11:55.945 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:55.945 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:55.945 [2024-11-17 09:12:00.823726] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:11:55.945 [2024-11-17 09:12:00.823864] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:56.204 [2024-11-17 09:12:00.972392] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:56.204 [2024-11-17 09:12:01.117150] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:56.204 [2024-11-17 09:12:01.117223] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:56.204 [2024-11-17 09:12:01.117249] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:56.204 [2024-11-17 09:12:01.117273] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:56.204 [2024-11-17 09:12:01.117292] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:56.204 [2024-11-17 09:12:01.120035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:56.204 [2024-11-17 09:12:01.120102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:56.204 [2024-11-17 09:12:01.120155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:56.204 [2024-11-17 09:12:01.120160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:57.138 09:12:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:57.138 09:12:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:11:57.138 09:12:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:57.138 09:12:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:57.138 09:12:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:57.138 09:12:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:57.138 09:12:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:57.138 09:12:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.138 09:12:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:57.138 [2024-11-17 09:12:01.808175] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:57.138 09:12:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.138 09:12:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:11:57.138 09:12:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.138 09:12:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
00:11:57.138 [2024-11-17 09:12:01.839431] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:11:57.138 09:12:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.138 09:12:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:11:57.138 09:12:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.138 09:12:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:57.138 09:12:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.138 09:12:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:11:57.138 09:12:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.138 09:12:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:57.138 09:12:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.138 09:12:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:11:57.138 09:12:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.138 09:12:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:57.138 09:12:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.138 09:12:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:57.138 09:12:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:11:57.138 09:12:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.138 09:12:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:57.138 09:12:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.138 09:12:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:57.138 09:12:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:57.138 09:12:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:57.138 09:12:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:57.138 09:12:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:57.138 09:12:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.138 09:12:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:57.138 09:12:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:57.138 09:12:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.138 09:12:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:57.138 09:12:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:57.138 09:12:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:57.138 09:12:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:57.138 09:12:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:57.138 09:12:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:57.138 09:12:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:57.138 09:12:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:57.397 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:57.397 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:57.397 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:11:57.397 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.397 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:57.397 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.397 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:11:57.397 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.397 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:57.397 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.397 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:11:57.397 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.397 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:57.397 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.397 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:57.397 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:11:57.397 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.397 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:57.397 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.397 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:11:57.397 09:12:02 
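The referral round-trip above proceeds in four moves: create the TCP transport, add a discovery listener on 10.0.0.2:8009, register referrals to 127.0.0.2/3/4 on port 4430, and then check that the target-side list (nvmf_discovery_get_referrals) and the initiator-side discovery log (nvme discover against port 8009) agree before removing the referrals again and expecting an empty list. A condensed sketch of the same cycle, assuming rpc.py and nvme-cli are available and pointed at this run's target:

#!/usr/bin/env bash
# Sketch of the referral add/verify/remove cycle traced above (referrals.sh 40-57).
# Assumes rpc.py reaches this run's target and its discovery listener on 10.0.0.2:8009.
set -euo pipefail

RPC=./scripts/rpc.py
HOSTNQN=$(nvme gen-hostnqn)

"$RPC" nvmf_create_transport -t tcp -o -u 8192
"$RPC" nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery

for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
    "$RPC" nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
done

# Target-side view of the referrals ...
"$RPC" nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort

# ... and the initiator-side view, filtered exactly as in the trace above.
nvme discover --hostnqn="$HOSTNQN" -t tcp -a 10.0.0.2 -s 8009 -o json |
    jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort

for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
    "$RPC" nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430
done
"$RPC" nvmf_discovery_get_referrals | jq length      # expected to print 0

Both listings are expected to print the same three addresses before the removal loop runs, which is exactly the string comparison the trace performs.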
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:11:57.397 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:57.397 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:57.397 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:57.397 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:57.397 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:57.655 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:57.655 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:57.655 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:11:57.655 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.655 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:57.655 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.655 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:57.655 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.655 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:57.655 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.655 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:11:57.655 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:57.655 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:57.655 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:57.655 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.655 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:57.655 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:57.655 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.655 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:57.655 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:57.655 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:11:57.655 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:11:57.655 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:57.655 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:57.655 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:57.655 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:57.922 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:57.922 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:57.922 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:57.922 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:57.922 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:57.922 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:57.922 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:57.922 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:57.922 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:57.922 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:57.922 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:57.922 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:57.922 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:58.244 09:12:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:58.244 09:12:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:58.244 09:12:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.244 09:12:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:58.244 09:12:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.244 09:12:03 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:58.244 09:12:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:58.244 09:12:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:58.244 09:12:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.244 09:12:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:58.244 09:12:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:58.244 09:12:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:58.244 09:12:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.244 09:12:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:58.244 09:12:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:58.244 09:12:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:58.244 09:12:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:58.244 09:12:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:58.244 09:12:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:58.244 09:12:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:58.244 09:12:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:58.526 09:12:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:11:58.526 09:12:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:58.526 09:12:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:11:58.526 09:12:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:11:58.526 09:12:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:58.526 09:12:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:58.526 09:12:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:58.526 09:12:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:11:58.526 09:12:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:11:58.526 09:12:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:11:58.526 09:12:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:11:58.526 09:12:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:58.526 09:12:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:58.784 09:12:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:58.784 09:12:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:11:58.784 09:12:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.784 09:12:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:58.784 09:12:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.784 09:12:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:58.784 09:12:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:11:58.784 09:12:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.784 09:12:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:58.784 09:12:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.784 09:12:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:11:58.784 09:12:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:11:58.784 09:12:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:58.784 09:12:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:58.784 09:12:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:58.784 09:12:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:58.784 09:12:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:59.042 09:12:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:59.042 09:12:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:11:59.042 09:12:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:59.042 09:12:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:11:59.042 09:12:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:59.042 09:12:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:11:59.042 09:12:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
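The checks above keep comparing two views of the same referral list: the target's own nvmf_discovery_get_referrals RPC and what a host sees from nvme discover against the discovery service at 10.0.0.2:8009. A minimal sketch of that comparison, reusing rpc_cmd and the NVME_HOST array from the harness; the get_ips_* helper names are illustrative, not the referrals.sh source:

get_ips_rpc() {
    # target-side view, same jq filter as target/referrals.sh@21 above
    rpc_cmd nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort
}
get_ips_nvme() {
    # host-side view, same discovery query and jq filter as target/referrals.sh@26 above
    nvme discover "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 8009 -o json |
        jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort
}
[[ "$(get_ips_rpc)" == "$(get_ips_nvme)" ]] || echo "referral mismatch"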
00:11:59.042 09:12:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:11:59.042 09:12:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:59.042 09:12:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:59.042 rmmod nvme_tcp 00:11:59.042 rmmod nvme_fabrics 00:11:59.042 rmmod nvme_keyring 00:11:59.042 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:59.042 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:11:59.042 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:11:59.042 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 2903240 ']' 00:11:59.042 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 2903240 00:11:59.042 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 2903240 ']' 00:11:59.042 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 2903240 00:11:59.042 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:11:59.042 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:59.042 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2903240 00:11:59.042 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:59.042 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:59.042 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2903240' 00:11:59.042 killing process with pid 2903240 00:11:59.042 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 2903240 00:11:59.042 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 2903240 00:12:00.417 09:12:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:00.418 09:12:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:00.418 09:12:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:00.418 09:12:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:12:00.418 09:12:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:12:00.418 09:12:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:00.418 09:12:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:12:00.418 09:12:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:00.418 09:12:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:00.418 09:12:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:00.418 09:12:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:00.418 09:12:05 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:02.321 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:02.321 00:12:02.321 real 0m8.917s 00:12:02.321 user 0m16.670s 00:12:02.321 sys 0m2.567s 00:12:02.321 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:02.321 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:02.321 ************************************ 00:12:02.321 END TEST nvmf_referrals 00:12:02.321 ************************************ 00:12:02.321 09:12:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:02.321 09:12:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:02.321 09:12:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:02.321 09:12:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:02.321 ************************************ 00:12:02.321 START TEST nvmf_connect_disconnect 00:12:02.321 ************************************ 00:12:02.321 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:02.321 * Looking for test storage... 00:12:02.321 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:02.321 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:02.580 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:02.580 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:12:02.580 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:02.580 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:02.580 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:02.580 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:02.580 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:12:02.580 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:12:02.580 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:12:02.580 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:12:02.580 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:12:02.580 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:12:02.580 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:12:02.580 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:02.580 09:12:07 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:12:02.580 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:12:02.580 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:02.581 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:02.581 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:12:02.581 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:12:02.581 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:02.581 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:12:02.581 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:12:02.581 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:12:02.581 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:12:02.581 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:02.581 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:12:02.581 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:12:02.581 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:02.581 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:02.581 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:12:02.581 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:02.581 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:02.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.581 --rc genhtml_branch_coverage=1 00:12:02.581 --rc genhtml_function_coverage=1 00:12:02.581 --rc genhtml_legend=1 00:12:02.581 --rc geninfo_all_blocks=1 00:12:02.581 --rc geninfo_unexecuted_blocks=1 00:12:02.581 00:12:02.581 ' 00:12:02.581 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:02.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.581 --rc genhtml_branch_coverage=1 00:12:02.581 --rc genhtml_function_coverage=1 00:12:02.581 --rc genhtml_legend=1 00:12:02.581 --rc geninfo_all_blocks=1 00:12:02.581 --rc geninfo_unexecuted_blocks=1 00:12:02.581 00:12:02.581 ' 00:12:02.581 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:02.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.581 --rc genhtml_branch_coverage=1 00:12:02.581 --rc genhtml_function_coverage=1 00:12:02.581 --rc genhtml_legend=1 00:12:02.581 --rc geninfo_all_blocks=1 00:12:02.581 --rc geninfo_unexecuted_blocks=1 00:12:02.581 00:12:02.581 ' 00:12:02.581 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:02.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.581 --rc genhtml_branch_coverage=1 00:12:02.581 --rc genhtml_function_coverage=1 00:12:02.581 --rc genhtml_legend=1 00:12:02.581 --rc geninfo_all_blocks=1 00:12:02.581 --rc geninfo_unexecuted_blocks=1 00:12:02.581 00:12:02.581 ' 00:12:02.581 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:02.581 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:02.581 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:02.581 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:02.581 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:02.581 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:02.581 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:02.581 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:02.581 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:02.581 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:02.581 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:02.581 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:02.581 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:02.581 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:02.581 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:02.581 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:02.581 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:02.581 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:02.581 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:02.581 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:12:02.581 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:02.581 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:02.581 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:02.581 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.581 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.581 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.581 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:02.581 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.581 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:12:02.581 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:02.581 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:02.581 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:02.581 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:02.581 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:02.581 09:12:07 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:02.581 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:02.581 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:02.581 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:02.581 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:02.581 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:02.581 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:02.581 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:02.581 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:02.581 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:02.581 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:02.581 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:02.581 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:02.581 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:02.581 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:02.581 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:02.581 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:02.581 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:02.581 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:12:02.582 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:05.113 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:05.113 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:12:05.113 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:05.113 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:05.113 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:05.113 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:05.113 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:05.113 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:12:05.113 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:05.113 
09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:12:05.113 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:12:05.113 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:12:05.113 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:12:05.113 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:12:05.113 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:12:05.113 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:05.113 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:05.113 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:05.113 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:05.113 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:05.113 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:05.113 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:05.113 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:05.113 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:05.113 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:05.113 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:05.113 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:05.113 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:05.113 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:05.113 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:05.113 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:05.113 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:05.113 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:05.113 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:05.113 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:05.113 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:05.113 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:05.113 
09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:05.113 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:05.113 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:05.113 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:05.113 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:05.113 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:05.113 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:05.113 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:05.113 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:05.113 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:05.113 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:05.113 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:05.113 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:05.113 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:05.113 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:05.113 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:05.113 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:05.113 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:05.113 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:05.113 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:05.113 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:05.113 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:05.113 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:05.113 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:05.113 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:05.113 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:05.113 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:05.113 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:05.113 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
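The loop in the trace above walks each detected e810 function (0000:0a:00.0 and 0000:0a:00.1) and collects its kernel network interfaces from sysfs, keeping only ports that are up; the cvl_0_0/cvl_0_1 names come from that scan. Roughly, as a sketch only (the real logic lives in nvmf/common.sh, and reading operstate is an assumption standing in for its '[[ up == up ]]' check):

pci_devs=(0000:0a:00.0 0000:0a:00.1)
net_devs=()
for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    for path in "${pci_net_devs[@]}"; do
        dev=${path##*/}                                    # e.g. cvl_0_0, cvl_0_1
        [[ $(cat "$path/operstate" 2>/dev/null) == up ]] && net_devs+=("$dev")
    done
    echo "Found net devices under $pci: ${net_devs[*]}"
done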
00:12:05.113 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:05.113 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:05.113 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:05.113 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:05.113 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:05.114 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:05.114 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:05.114 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:12:05.114 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:05.114 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:05.114 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:05.114 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:05.114 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:05.114 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:05.114 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:05.114 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:05.114 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:05.114 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:05.114 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:05.114 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:05.114 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:05.114 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:05.114 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:05.114 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:05.114 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:05.114 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:05.114 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:05.114 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:12:05.114 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:05.114 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:05.114 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:05.114 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:05.114 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:05.114 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:05.114 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:05.114 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.306 ms 00:12:05.114 00:12:05.114 --- 10.0.0.2 ping statistics --- 00:12:05.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:05.114 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:12:05.114 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:05.114 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:05.114 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:12:05.114 00:12:05.114 --- 10.0.0.1 ping statistics --- 00:12:05.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:05.114 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:12:05.114 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:05.114 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:12:05.114 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:05.114 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:05.114 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:05.114 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:05.114 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:05.114 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:05.114 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:05.114 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:05.114 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:05.114 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:05.114 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:05.114 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=2906372 00:12:05.114 09:12:09 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:05.114 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 2906372 00:12:05.114 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 2906372 ']' 00:12:05.114 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:05.114 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:05.114 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:05.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:05.114 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:05.114 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:05.114 [2024-11-17 09:12:09.771884] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:12:05.114 [2024-11-17 09:12:09.772026] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:05.114 [2024-11-17 09:12:09.916011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:05.114 [2024-11-17 09:12:10.056952] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:05.114 [2024-11-17 09:12:10.057047] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:05.114 [2024-11-17 09:12:10.057073] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:05.114 [2024-11-17 09:12:10.057098] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:05.114 [2024-11-17 09:12:10.057119] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
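With the target launched inside the cvl_0_0_ns_spdk namespace (nvmf_tgt -i 0 -e 0xFFFF -m 0xF, listening for RPCs on /var/tmp/spdk.sock), the setup that follows in the trace below condenses to a short RPC sequence; the flags are the same ones visible in the rpc_cmd calls, shown here only as a summary:

rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
bdev=$(rpc_cmd bdev_malloc_create 64 512)          # 64 MiB malloc bdev, 512-byte blocks -> Malloc0
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420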
00:12:05.114 [2024-11-17 09:12:10.059972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:05.114 [2024-11-17 09:12:10.060043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:05.114 [2024-11-17 09:12:10.060137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:05.114 [2024-11-17 09:12:10.060144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:06.049 09:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:06.049 09:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:12:06.049 09:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:06.049 09:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:06.049 09:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:06.049 09:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:06.049 09:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:06.049 09:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.049 09:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:06.049 [2024-11-17 09:12:10.814090] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:06.049 09:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.049 09:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:06.049 09:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.049 09:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:06.049 09:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.049 09:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:06.049 09:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:06.049 09:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.049 09:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:06.049 09:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.049 09:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:06.049 09:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.049 09:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:06.049 09:12:10 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.049 09:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:06.049 09:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.049 09:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:06.049 [2024-11-17 09:12:10.936763] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:06.049 09:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.049 09:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:12:06.049 09:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:12:06.049 09:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:12:06.049 09:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:08.577 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:11.103 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:13.632 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:15.531 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:18.058 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:20.585 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:23.212 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:25.110 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:27.636 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:30.161 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:32.688 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:34.586 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:37.112 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:39.641 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:41.539 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:44.066 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:46.595 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:48.493 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:51.020 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:53.545 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:56.072 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:57.971 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:00.501 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:03.027 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:05.563 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:08.089 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:10.617 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:12.514 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:15.047 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:17.574 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:19.476 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:22.006 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:24.531 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:27.185 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:29.083 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:31.609 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:34.135 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:36.662 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:38.559 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:41.086 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:43.612 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:45.510 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:48.035 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:50.567 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:53.094 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:55.622 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:58.150 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:00.048 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:02.622 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:05.148 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:07.043 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:09.569 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:12.096 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:14.622 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:16.521 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:19.047 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:21.572 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:24.099 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:26.637 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:28.535 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:31.106 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:33.632 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:35.529 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:38.056 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:40.582 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:42.479 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:45.027 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:47.553 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:50.086 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:52.055 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:54.582 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:56.480 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:59.007 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:01.534 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:03.432 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:05.967 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:08.493 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:10.393 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:12.919 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:15:15.446 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:17.345 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:19.872 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:22.400 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:24.299 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:26.825 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:29.354 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:31.882 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:33.832 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:36.358 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:38.885 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:40.784 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:43.311 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:45.841 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:47.739 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:50.266 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:52.793 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:55.320 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:57.218 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:59.747 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:02.275 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:02.275 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:16:02.275 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:16:02.275 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:02.275 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:16:02.275 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:02.275 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:16:02.275 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:02.275 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:02.275 rmmod nvme_tcp 00:16:02.275 rmmod nvme_fabrics 00:16:02.275 rmmod nvme_keyring 00:16:02.275 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:02.275 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:16:02.275 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:16:02.275 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 2906372 ']' 00:16:02.275 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 2906372 00:16:02.275 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 2906372 ']' 00:16:02.275 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 2906372 00:16:02.275 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 
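nvmftestfini, which starts in the trace above and finishes below, unwinds the fixture: the initiator-side kernel modules are unloaded, the nvmf_tgt process (pid 2906372, reactor_0) is killed, the SPDK_NVMF iptables rules are stripped and the test namespace is torn down. A condensed, hedged equivalent of that cleanup (_remove_spdk_ns is not expanded in the trace, so the namespace-delete line is an assumption about what it does):

  modprobe -r nvme-tcp nvme-fabrics                      # unload the kernel initiator modules
  kill 2906372                                           # stop the SPDK target started for this test
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the rules the harness inserted
  ip netns del cvl_0_0_ns_spdk                           # assumed body of _remove_spdk_ns
  ip -4 addr flush cvl_0_1                               # clear the initiator-side test address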
00:16:02.275 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:02.275 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2906372 00:16:02.275 09:16:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:02.275 09:16:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:02.275 09:16:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2906372' 00:16:02.275 killing process with pid 2906372 00:16:02.275 09:16:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 2906372 00:16:02.275 09:16:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 2906372 00:16:03.652 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:03.652 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:03.652 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:03.652 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:16:03.652 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:16:03.652 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:03.652 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:16:03.652 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:03.652 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:03.652 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:03.652 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:03.652 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:05.553 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:05.553 00:16:05.553 real 4m3.064s 00:16:05.553 user 15m21.561s 00:16:05.553 sys 0m37.377s 00:16:05.553 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:05.553 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:05.553 ************************************ 00:16:05.553 END TEST nvmf_connect_disconnect 00:16:05.553 ************************************ 00:16:05.553 09:16:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:05.553 09:16:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:05.553 09:16:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:05.553 09:16:10 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:05.553 ************************************ 00:16:05.553 START TEST nvmf_multitarget 00:16:05.553 ************************************ 00:16:05.553 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:05.553 * Looking for test storage... 00:16:05.553 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:05.553 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:05.553 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:16:05.553 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:05.553 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:05.553 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:05.553 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:05.553 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:05.553 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:16:05.553 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:16:05.553 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:16:05.553 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:16:05.553 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:16:05.553 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:16:05.553 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:16:05.553 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:05.553 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:16:05.553 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:16:05.553 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:05.553 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:05.553 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:16:05.553 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:16:05.553 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:05.553 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:16:05.553 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:16:05.553 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:16:05.553 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:16:05.553 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:05.553 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:16:05.553 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:16:05.554 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:05.554 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:05.554 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:16:05.554 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:05.554 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:05.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:05.554 --rc genhtml_branch_coverage=1 00:16:05.554 --rc genhtml_function_coverage=1 00:16:05.554 --rc genhtml_legend=1 00:16:05.554 --rc geninfo_all_blocks=1 00:16:05.554 --rc geninfo_unexecuted_blocks=1 00:16:05.554 00:16:05.554 ' 00:16:05.554 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:05.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:05.554 --rc genhtml_branch_coverage=1 00:16:05.554 --rc genhtml_function_coverage=1 00:16:05.554 --rc genhtml_legend=1 00:16:05.554 --rc geninfo_all_blocks=1 00:16:05.554 --rc geninfo_unexecuted_blocks=1 00:16:05.554 00:16:05.554 ' 00:16:05.554 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:05.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:05.554 --rc genhtml_branch_coverage=1 00:16:05.554 --rc genhtml_function_coverage=1 00:16:05.554 --rc genhtml_legend=1 00:16:05.554 --rc geninfo_all_blocks=1 00:16:05.554 --rc geninfo_unexecuted_blocks=1 00:16:05.554 00:16:05.554 ' 00:16:05.554 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:05.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:05.554 --rc genhtml_branch_coverage=1 00:16:05.554 --rc genhtml_function_coverage=1 00:16:05.554 --rc genhtml_legend=1 00:16:05.554 --rc geninfo_all_blocks=1 00:16:05.554 --rc geninfo_unexecuted_blocks=1 00:16:05.554 00:16:05.554 ' 00:16:05.554 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:05.554 09:16:10 
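The block above is the harness deciding whether the installed lcov predates 2.x: cmp_versions splits both version strings on '.', '-' and ':' and compares them field by field, which is why 1.15 sorts below 2 even though "15" is the longer string. A standalone sketch of the same idea (version_lt is a hypothetical helper name, not the scripts/common.sh implementation):

  version_lt() {                    # true when $1 is an older version than $2
    local IFS='.-:'                 # split on the same separators the harness uses
    local -a a=($1) b=($2)
    local i
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
      (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
      (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1                        # equal versions are not "less than"
  }
  version_lt 1.15 2 && echo "lcov is older than 2.x"   # prints the message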
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:16:05.554 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:05.554 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:05.554 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:05.554 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:05.554 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:05.554 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:05.554 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:05.554 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:05.554 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:05.554 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:05.554 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:05.554 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:05.554 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:05.554 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:05.554 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:05.554 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:05.554 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:05.554 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:16:05.554 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:05.554 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:05.554 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:05.554 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.554 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.554 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.554 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:16:05.554 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.554 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:16:05.554 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:05.554 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:05.554 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:05.554 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:05.554 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:05.554 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:05.554 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:05.554 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:05.554 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:05.554 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:05.554 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:16:05.554 09:16:10 
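Part of sourcing nvmf/common.sh above is minting the initiator identity: nvme gen-hostnqn produces a uuid-based host NQN, its uuid suffix doubles as the host ID, and the pair is kept in NVME_HOST for later nvme connect calls. A small hedged illustration of that pattern (the connect line only shows how the variables get used; it is not a command from this trace):

  NVME_HOSTNQN=$(nvme gen-hostnqn)      # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}       # keep just the uuid part
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
       --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"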
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:16:05.554 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:05.813 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:05.813 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:05.813 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:05.813 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:05.813 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:05.813 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:05.813 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:05.813 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:05.813 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:05.813 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:16:05.813 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:07.714 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:07.714 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:16:07.714 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:07.714 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:07.714 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:07.715 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:07.715 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:07.715 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:16:07.715 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:07.715 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:16:07.715 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:16:07.715 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:16:07.715 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:16:07.715 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:16:07.715 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:16:07.715 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:07.715 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:07.715 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:16:07.715 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:07.715 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:07.715 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:07.715 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:07.715 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:07.715 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:07.715 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:07.715 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:07.715 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:07.715 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:07.715 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:07.715 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:07.715 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:07.715 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:07.715 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:07.715 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:07.715 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:07.715 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:07.715 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:07.715 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:07.715 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:07.715 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:07.715 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:07.715 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:07.715 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:07.715 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:07.715 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:07.715 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:07.715 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:07.715 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:16:07.715 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:07.715 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:07.715 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:07.715 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:07.715 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:07.715 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:07.715 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:07.715 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:07.715 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:07.715 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:07.715 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:07.715 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:07.715 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:07.715 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:07.715 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:07.715 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:07.715 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:07.715 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:07.715 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:07.715 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:07.715 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:07.715 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:07.715 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:07.715 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:07.715 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:07.715 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:16:07.715 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:07.715 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:07.715 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:07.715 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:07.715 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
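gather_supported_nvmf_pci_devs, traced above, builds lists of the NIC device IDs it supports (E810 0x1592/0x159b, X722 0x37d2 and several Mellanox ConnectX parts), scans the PCI bus for matches and then resolves the kernel net device behind each matching function through sysfs, which is how the two E810 ports surface as cvl_0_0 and cvl_0_1. A simplified stand-alone version of that lookup (assumes pciutils is installed; names are illustrative):

  # find PCI functions whose vendor:device pair matches the E810 id used on this node
  for bdf in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
    # the net/ directory under a PCI device lists the interface(s) bound to it
    for dev in /sys/bus/pci/devices/"$bdf"/net/*; do
      [ -e "$dev" ] && echo "Found net device under $bdf: $(basename "$dev")"
    done
  done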
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:07.715 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:07.715 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:07.715 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:07.715 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:07.715 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:07.715 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:07.715 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:07.715 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:07.715 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:07.715 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:07.715 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:07.715 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:07.715 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:07.715 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:07.715 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:07.715 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:07.715 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:07.715 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:07.715 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:07.715 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:07.715 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:07.715 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:07.715 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.270 ms 00:16:07.715 00:16:07.715 --- 10.0.0.2 ping statistics --- 00:16:07.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:07.715 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:16:07.715 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:07.715 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:07.715 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.082 ms 00:16:07.715 00:16:07.715 --- 10.0.0.1 ping statistics --- 00:16:07.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:07.715 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:16:07.715 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:07.715 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:16:07.715 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:07.716 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:07.716 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:07.716 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:07.716 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:07.716 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:07.716 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:07.716 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:16:07.716 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:07.716 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:07.716 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:07.716 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=2938131 00:16:07.716 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:07.716 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 2938131 00:16:07.716 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 2938131 ']' 00:16:07.716 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:07.716 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:07.716 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:07.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:07.716 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:07.716 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:07.974 [2024-11-17 09:16:12.802729] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
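nvmf_tcp_init, traced above, turns the two E810 ports into a point-to-point TCP test topology: cvl_0_0 moves into a fresh namespace (cvl_0_0_ns_spdk) and carries the target address 10.0.0.2/24, cvl_0_1 stays in the root namespace as the initiator with 10.0.0.1/24, one iptables rule opens port 4420, and a ping in each direction proves the link. Condensed from the commands in the trace (run as root):

  ip netns add cvl_0_0_ns_spdk                        # namespace that will own the target port
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the first E810 port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
  ping -c 1 10.0.0.2                                  # root namespace -> target address
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target namespace -> initiator address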
00:16:07.974 [2024-11-17 09:16:12.802890] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:07.974 [2024-11-17 09:16:12.960361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:08.232 [2024-11-17 09:16:13.103520] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:08.232 [2024-11-17 09:16:13.103604] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:08.232 [2024-11-17 09:16:13.103634] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:08.232 [2024-11-17 09:16:13.103658] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:08.232 [2024-11-17 09:16:13.103678] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:08.232 [2024-11-17 09:16:13.106534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:08.232 [2024-11-17 09:16:13.106607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:08.232 [2024-11-17 09:16:13.106721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:08.232 [2024-11-17 09:16:13.106727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:09.167 09:16:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:09.167 09:16:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:16:09.167 09:16:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:09.167 09:16:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:09.167 09:16:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:09.167 09:16:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:09.167 09:16:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:09.167 09:16:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:09.167 09:16:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:16:09.167 09:16:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:16:09.167 09:16:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:16:09.167 "nvmf_tgt_1" 00:16:09.167 09:16:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:16:09.425 "nvmf_tgt_2" 00:16:09.425 09:16:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
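nvmfappstart, traced above, launches nvmf_tgt inside that namespace with every trace group enabled (-e 0xFFFF) and a four-core mask (-m 0xF, hence the four reactor notices), then waitforlisten blocks until the RPC socket answers. A hedged approximation using only stock pieces (paths are relative to an SPDK checkout, spdk_get_version is just one convenient readiness probe, and waitforlisten itself is a test helper not shown in the trace):

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  NVMF_PID=$!
  # poll the default RPC socket until the application responds instead of sleeping blindly
  until ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
    sleep 0.5
  done
  echo "nvmf_tgt ($NVMF_PID) is ready"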
00:16:09.425 09:16:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:16:09.425 09:16:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:16:09.425 09:16:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:16:09.425 true 00:16:09.425 09:16:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:16:09.683 true 00:16:09.683 09:16:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:09.683 09:16:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:16:09.683 09:16:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:16:09.683 09:16:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:16:09.683 09:16:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:16:09.683 09:16:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:09.683 09:16:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:16:09.683 09:16:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:09.683 09:16:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:16:09.683 09:16:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:09.683 09:16:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:09.683 rmmod nvme_tcp 00:16:09.683 rmmod nvme_fabrics 00:16:09.941 rmmod nvme_keyring 00:16:09.941 09:16:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:09.941 09:16:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:16:09.941 09:16:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:16:09.941 09:16:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 2938131 ']' 00:16:09.941 09:16:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 2938131 00:16:09.941 09:16:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 2938131 ']' 00:16:09.941 09:16:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 2938131 00:16:09.941 09:16:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:16:09.941 09:16:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:09.941 09:16:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2938131 00:16:09.941 09:16:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:09.941 09:16:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:09.941 09:16:14 
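The multitarget body traced above is a bookkeeping check against the custom RPCs in multitarget_rpc.py: with only the default target present, nvmf_get_targets piped through jq length must be 1; creating nvmf_tgt_1 and nvmf_tgt_2 raises it to 3; deleting both brings it back to 1. Reduced to its essentials (rpc_py stands for the full multitarget_rpc.py path used in the trace):

  rpc_py=./test/nvmf/target/multitarget_rpc.py
  [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]   # only the default target exists
  $rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32        # options copied from the trace
  $rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32
  [ "$($rpc_py nvmf_get_targets | jq length)" -eq 3 ]   # default target plus the two new ones
  $rpc_py nvmf_delete_target -n nvmf_tgt_1
  $rpc_py nvmf_delete_target -n nvmf_tgt_2
  [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]   # back to just the default target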
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2938131' 00:16:09.941 killing process with pid 2938131 00:16:09.941 09:16:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 2938131 00:16:09.941 09:16:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 2938131 00:16:10.876 09:16:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:10.876 09:16:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:10.876 09:16:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:10.876 09:16:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:16:10.876 09:16:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:16:10.876 09:16:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:10.876 09:16:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:16:10.876 09:16:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:10.876 09:16:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:10.876 09:16:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:10.876 09:16:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:10.876 09:16:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:13.408 09:16:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:13.408 00:16:13.408 real 0m7.504s 00:16:13.408 user 0m12.154s 00:16:13.408 sys 0m2.170s 00:16:13.408 09:16:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:13.408 09:16:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:13.408 ************************************ 00:16:13.408 END TEST nvmf_multitarget 00:16:13.408 ************************************ 00:16:13.408 09:16:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:13.408 09:16:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:13.408 09:16:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:13.408 09:16:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:13.408 ************************************ 00:16:13.408 START TEST nvmf_rpc 00:16:13.408 ************************************ 00:16:13.408 09:16:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:13.408 * Looking for test storage... 
00:16:13.408 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:13.408 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:13.408 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:16:13.408 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:13.408 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:13.408 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:13.408 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:13.408 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:13.408 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:16:13.408 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:16:13.408 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:16:13.408 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:16:13.409 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:16:13.409 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:16:13.409 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:16:13.409 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:13.409 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:16:13.409 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:16:13.409 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:13.409 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:13.409 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:16:13.409 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:16:13.409 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:13.409 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:16:13.409 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:16:13.409 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:16:13.409 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:16:13.409 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:13.409 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:16:13.409 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:16:13.409 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:13.409 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:13.409 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:16:13.409 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:13.409 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:13.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:13.409 --rc genhtml_branch_coverage=1 00:16:13.409 --rc genhtml_function_coverage=1 00:16:13.409 --rc genhtml_legend=1 00:16:13.409 --rc geninfo_all_blocks=1 00:16:13.409 --rc geninfo_unexecuted_blocks=1 00:16:13.409 00:16:13.409 ' 00:16:13.409 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:13.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:13.409 --rc genhtml_branch_coverage=1 00:16:13.409 --rc genhtml_function_coverage=1 00:16:13.409 --rc genhtml_legend=1 00:16:13.409 --rc geninfo_all_blocks=1 00:16:13.409 --rc geninfo_unexecuted_blocks=1 00:16:13.409 00:16:13.409 ' 00:16:13.409 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:13.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:13.409 --rc genhtml_branch_coverage=1 00:16:13.409 --rc genhtml_function_coverage=1 00:16:13.409 --rc genhtml_legend=1 00:16:13.409 --rc geninfo_all_blocks=1 00:16:13.409 --rc geninfo_unexecuted_blocks=1 00:16:13.409 00:16:13.409 ' 00:16:13.409 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:13.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:13.409 --rc genhtml_branch_coverage=1 00:16:13.409 --rc genhtml_function_coverage=1 00:16:13.409 --rc genhtml_legend=1 00:16:13.409 --rc geninfo_all_blocks=1 00:16:13.409 --rc geninfo_unexecuted_blocks=1 00:16:13.409 00:16:13.409 ' 00:16:13.409 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:13.409 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:16:13.409 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:16:13.409 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:13.409 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:13.409 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:13.409 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:13.409 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:13.409 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:13.409 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:13.409 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:13.409 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:13.409 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:13.409 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:13.409 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:13.409 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:13.409 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:13.409 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:13.409 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:13.409 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:16:13.409 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:13.409 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:13.409 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:13.409 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.409 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.409 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.409 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:16:13.409 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.409 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:16:13.409 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:13.409 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:13.409 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:13.409 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:13.409 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:13.409 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:13.409 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:13.409 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:13.409 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:13.409 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:13.409 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:16:13.409 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:16:13.409 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:13.409 09:16:18 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:13.409 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:13.409 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:13.409 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:13.409 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:13.409 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:13.409 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:13.409 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:13.409 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:13.409 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:16:13.409 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:15.348 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:15.348 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:16:15.348 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:15.348 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:15.348 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:15.348 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:15.348 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:15.348 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:16:15.348 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:15.348 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:16:15.348 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:16:15.348 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:16:15.348 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:16:15.348 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:16:15.348 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:16:15.348 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:15.348 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:15.348 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:15.348 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:15.348 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:15.348 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:15.348 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:15.348 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:15.348 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:15.348 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:15.348 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:15.348 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:15.348 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:15.348 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:15.348 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:15.348 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:15.348 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:15.348 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:15.348 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:15.348 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:15.348 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:15.348 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:15.348 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:15.348 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:15.348 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:15.348 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:15.348 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:15.348 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:15.348 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:15.348 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:15.348 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:15.348 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:15.348 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:15.348 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:15.348 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:15.348 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:15.348 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:15.348 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:15.348 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:15.348 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:15.348 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:15.348 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:15.348 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:15.348 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:15.348 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:15.348 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:15.348 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:15.348 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:15.348 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:15.348 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:15.348 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:15.348 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:15.348 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:15.348 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:15.348 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:15.348 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:15.348 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:15.348 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:15.348 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:16:15.348 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:15.348 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:15.348 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:15.348 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:15.348 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:15.348 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:15.348 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:15.348 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:15.348 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:15.348 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:15.348 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:15.348 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:15.348 09:16:20 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:15.348 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:15.348 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:15.348 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:15.348 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:15.348 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:15.630 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:15.630 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:15.630 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:15.630 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:15.630 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:15.630 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:15.630 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:15.630 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:15.630 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:15.630 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.167 ms 00:16:15.630 00:16:15.630 --- 10.0.0.2 ping statistics --- 00:16:15.630 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:15.630 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:16:15.630 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:15.630 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:15.630 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:16:15.630 00:16:15.630 --- 10.0.0.1 ping statistics --- 00:16:15.630 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:15.630 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:16:15.630 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:15.630 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:16:15.630 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:15.630 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:15.630 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:15.630 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:15.630 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:15.630 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:15.630 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:15.630 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:16:15.630 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:15.630 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:15.630 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:15.630 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=2940494 00:16:15.630 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:15.630 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 2940494 00:16:15.630 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 2940494 ']' 00:16:15.630 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:15.630 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:15.630 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:15.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:15.630 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:15.630 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:15.630 [2024-11-17 09:16:20.558014] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
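The trace above is nvmf_tcp_init building the physical-NIC test topology: one port of the E810 NIC (cvl_0_0 in this run) is moved into a private network namespace and addressed as the target side (10.0.0.2), the other port (cvl_0_1) stays in the root namespace as the initiator (10.0.0.1), an iptables rule opens TCP port 4420, and a ping in each direction confirms the link before the target starts. A minimal stand-alone sketch of the same setup, assuming the interface names reported earlier in this log:

  # assumes cvl_0_0 / cvl_0_1 are the two ports found under 0000:0a:00.0 / 0000:0a:00.1
  ip netns add cvl_0_0_ns_spdk                        # private namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

Starting nvmf_tgt under 'ip netns exec cvl_0_0_ns_spdk', as the following lines do, is what lets a single host exercise a real NIC-to-NIC TCP path.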
00:16:15.630 [2024-11-17 09:16:20.558173] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:15.888 [2024-11-17 09:16:20.715558] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:15.888 [2024-11-17 09:16:20.860133] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:15.888 [2024-11-17 09:16:20.860206] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:15.888 [2024-11-17 09:16:20.860232] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:15.888 [2024-11-17 09:16:20.860256] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:15.888 [2024-11-17 09:16:20.860276] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:15.888 [2024-11-17 09:16:20.863284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:15.888 [2024-11-17 09:16:20.863354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:15.888 [2024-11-17 09:16:20.863473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:15.888 [2024-11-17 09:16:20.863475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:16.823 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:16.823 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:16:16.823 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:16.823 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:16.823 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:16.823 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:16.823 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:16:16.823 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.823 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:16.823 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.823 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:16:16.823 "tick_rate": 2700000000, 00:16:16.823 "poll_groups": [ 00:16:16.823 { 00:16:16.823 "name": "nvmf_tgt_poll_group_000", 00:16:16.823 "admin_qpairs": 0, 00:16:16.823 "io_qpairs": 0, 00:16:16.823 "current_admin_qpairs": 0, 00:16:16.823 "current_io_qpairs": 0, 00:16:16.823 "pending_bdev_io": 0, 00:16:16.823 "completed_nvme_io": 0, 00:16:16.823 "transports": [] 00:16:16.823 }, 00:16:16.823 { 00:16:16.823 "name": "nvmf_tgt_poll_group_001", 00:16:16.823 "admin_qpairs": 0, 00:16:16.823 "io_qpairs": 0, 00:16:16.823 "current_admin_qpairs": 0, 00:16:16.823 "current_io_qpairs": 0, 00:16:16.823 "pending_bdev_io": 0, 00:16:16.823 "completed_nvme_io": 0, 00:16:16.823 "transports": [] 00:16:16.823 }, 00:16:16.823 { 00:16:16.823 "name": "nvmf_tgt_poll_group_002", 00:16:16.823 "admin_qpairs": 0, 00:16:16.823 "io_qpairs": 0, 00:16:16.823 
"current_admin_qpairs": 0, 00:16:16.823 "current_io_qpairs": 0, 00:16:16.823 "pending_bdev_io": 0, 00:16:16.823 "completed_nvme_io": 0, 00:16:16.823 "transports": [] 00:16:16.824 }, 00:16:16.824 { 00:16:16.824 "name": "nvmf_tgt_poll_group_003", 00:16:16.824 "admin_qpairs": 0, 00:16:16.824 "io_qpairs": 0, 00:16:16.824 "current_admin_qpairs": 0, 00:16:16.824 "current_io_qpairs": 0, 00:16:16.824 "pending_bdev_io": 0, 00:16:16.824 "completed_nvme_io": 0, 00:16:16.824 "transports": [] 00:16:16.824 } 00:16:16.824 ] 00:16:16.824 }' 00:16:16.824 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:16:16.824 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:16:16.824 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:16:16.824 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:16:16.824 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:16:16.824 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:16:16.824 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:16:16.824 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:16.824 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.824 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:16.824 [2024-11-17 09:16:21.654610] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:16.824 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.824 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:16:16.824 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.824 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:16.824 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.824 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:16:16.824 "tick_rate": 2700000000, 00:16:16.824 "poll_groups": [ 00:16:16.824 { 00:16:16.824 "name": "nvmf_tgt_poll_group_000", 00:16:16.824 "admin_qpairs": 0, 00:16:16.824 "io_qpairs": 0, 00:16:16.824 "current_admin_qpairs": 0, 00:16:16.824 "current_io_qpairs": 0, 00:16:16.824 "pending_bdev_io": 0, 00:16:16.824 "completed_nvme_io": 0, 00:16:16.824 "transports": [ 00:16:16.824 { 00:16:16.824 "trtype": "TCP" 00:16:16.824 } 00:16:16.824 ] 00:16:16.824 }, 00:16:16.824 { 00:16:16.824 "name": "nvmf_tgt_poll_group_001", 00:16:16.824 "admin_qpairs": 0, 00:16:16.824 "io_qpairs": 0, 00:16:16.824 "current_admin_qpairs": 0, 00:16:16.824 "current_io_qpairs": 0, 00:16:16.824 "pending_bdev_io": 0, 00:16:16.824 "completed_nvme_io": 0, 00:16:16.824 "transports": [ 00:16:16.824 { 00:16:16.824 "trtype": "TCP" 00:16:16.824 } 00:16:16.824 ] 00:16:16.824 }, 00:16:16.824 { 00:16:16.824 "name": "nvmf_tgt_poll_group_002", 00:16:16.824 "admin_qpairs": 0, 00:16:16.824 "io_qpairs": 0, 00:16:16.824 "current_admin_qpairs": 0, 00:16:16.824 "current_io_qpairs": 0, 00:16:16.824 "pending_bdev_io": 0, 00:16:16.824 "completed_nvme_io": 0, 00:16:16.824 "transports": [ 00:16:16.824 { 00:16:16.824 "trtype": "TCP" 
00:16:16.824 } 00:16:16.824 ] 00:16:16.824 }, 00:16:16.824 { 00:16:16.824 "name": "nvmf_tgt_poll_group_003", 00:16:16.824 "admin_qpairs": 0, 00:16:16.824 "io_qpairs": 0, 00:16:16.824 "current_admin_qpairs": 0, 00:16:16.824 "current_io_qpairs": 0, 00:16:16.824 "pending_bdev_io": 0, 00:16:16.824 "completed_nvme_io": 0, 00:16:16.824 "transports": [ 00:16:16.824 { 00:16:16.824 "trtype": "TCP" 00:16:16.824 } 00:16:16.824 ] 00:16:16.824 } 00:16:16.824 ] 00:16:16.824 }' 00:16:16.824 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:16:16.824 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:16:16.824 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:16:16.824 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:16.824 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:16:16.824 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:16:16.824 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:16:16.824 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:16:16.824 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:16.824 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:16:16.824 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:16:16.824 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:16:16.824 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:16:16.824 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:16.824 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.824 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:17.082 Malloc1 00:16:17.082 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.082 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:17.082 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.082 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:17.082 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.082 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:17.082 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.082 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:17.082 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.082 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:16:17.082 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
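The jcount and jsum helpers exercised above validate the nvmf_get_stats output with nothing but jq, wc and awk: jcount counts how many times a filter matches (4 poll groups are expected for the 0xF core mask), jsum adds up a numeric field across poll groups, and the transports check confirms a TCP entry appears in every group once nvmf_create_transport -t tcp has run. A rough equivalent, assuming the JSON is already captured in $stats as in the trace:

  # one poll group per core in the 0xF mask -> expect 4
  echo "$stats" | jq '.poll_groups[].name' | wc -l
  # total admin/io qpairs on a freshly started target -> expect 0
  echo "$stats" | jq '.poll_groups[].admin_qpairs' | awk '{s+=$1} END {print s}'
  echo "$stats" | jq '.poll_groups[].io_qpairs'    | awk '{s+=$1} END {print s}'
  # prints null before the transport exists, TCP afterwards
  echo "$stats" | jq -r '.poll_groups[0].transports[0].trtype'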
common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.082 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:17.082 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.082 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:17.082 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.082 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:17.082 [2024-11-17 09:16:21.867013] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:17.082 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.082 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:16:17.082 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:16:17.082 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:16:17.082 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:16:17.083 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:17.083 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:16:17.083 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:17.083 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:16:17.083 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:17.083 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:16:17.083 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:16:17.083 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:16:17.083 [2024-11-17 09:16:21.890342] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:16:17.083 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:17.083 could not add new controller: failed to write to nvme-fabrics device 00:16:17.083 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:16:17.083 09:16:21 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:17.083 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:17.083 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:17.083 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:17.083 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.083 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:17.083 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.083 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:17.649 09:16:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:16:17.649 09:16:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:17.649 09:16:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:17.649 09:16:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:17.649 09:16:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:20.179 09:16:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:20.179 09:16:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:20.179 09:16:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:20.179 09:16:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:20.180 09:16:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:20.180 09:16:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:20.180 09:16:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:20.180 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:20.180 09:16:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:20.180 09:16:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:20.180 09:16:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:20.180 09:16:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:20.180 09:16:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:20.180 09:16:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:20.180 09:16:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:20.180 09:16:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:20.180 09:16:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.180 09:16:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:20.180 09:16:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.180 09:16:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:20.180 09:16:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:16:20.180 09:16:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:20.180 09:16:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:16:20.180 09:16:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:20.180 09:16:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:16:20.180 09:16:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:20.180 09:16:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:16:20.180 09:16:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:20.180 09:16:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:16:20.180 09:16:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:16:20.180 09:16:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:20.180 [2024-11-17 09:16:24.790239] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:16:20.180 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:20.180 could not add new controller: failed to write to nvme-fabrics device 00:16:20.180 09:16:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:16:20.180 09:16:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:20.180 09:16:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:20.180 09:16:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:20.180 09:16:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:16:20.180 09:16:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.180 09:16:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:20.180 
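The two 'does not allow host' failures are the negative half of the host-ACL test: with allow_any_host disabled and the initiator's NQN absent from the subsystem's host list, the target refuses the connect and nvme-cli surfaces it as an I/O error on /dev/nvme-fabrics; adding the host back (or re-enabling open access with -e, as just done) makes the same connect succeed. A hedged sketch of that sequence driven directly through rpc.py, with the NQNs copied from this run and on the assumption that the harness's rpc_cmd simply wraps this script:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  SUB=nqn.2016-06.io.spdk:cnode1
  HOST=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

  $RPC nvmf_subsystem_allow_any_host -d "$SUB"       # enforce the per-host whitelist
  $RPC nvmf_subsystem_remove_host "$SUB" "$HOST"     # drop this initiator from it
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n "$SUB" --hostnqn="$HOST" \
      && echo "unexpected: connect should have been rejected"
  $RPC nvmf_subsystem_add_host "$SUB" "$HOST"        # or: nvmf_subsystem_allow_any_host -e "$SUB"
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n "$SUB" --hostnqn="$HOST"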
09:16:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.180 09:16:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:20.746 09:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:16:20.746 09:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:20.746 09:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:20.746 09:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:20.746 09:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:22.647 09:16:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:22.647 09:16:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:22.647 09:16:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:22.647 09:16:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:22.647 09:16:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:22.647 09:16:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:22.647 09:16:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:22.905 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:22.905 09:16:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:22.905 09:16:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:22.905 09:16:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:22.905 09:16:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:22.905 09:16:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:22.905 09:16:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:22.905 09:16:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:22.905 09:16:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:22.905 09:16:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.905 09:16:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.905 09:16:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.905 09:16:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:16:22.905 09:16:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:22.905 09:16:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:22.905 
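waitforserial and waitforserial_disconnect, whose expansions appear in the trace, avoid fixed sleeps: they repeatedly list block devices and look for the subsystem serial number (SPDKISFASTANDAWESOME), giving up after 15 tries, and the disconnect variant waits for the serial to vanish again. A simplified sketch of that polling idiom (hypothetical helper name, same serial convention as this run):

  wait_for_serial() {
      local serial=$1 want=$2 i=0          # want=1 -> device appears, want=0 -> it disappears
      while (( i++ <= 15 )); do
          local n
          n=$(lsblk -l -o NAME,SERIAL | grep -c -w "$serial")
          (( n == want )) && return 0
          sleep 2
      done
      return 1
  }
  wait_for_serial SPDKISFASTANDAWESOME 1   # after nvme connect
  wait_for_serial SPDKISFASTANDAWESOME 0   # after nvme disconnect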
09:16:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.905 09:16:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.905 09:16:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.905 09:16:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:22.905 09:16:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.905 09:16:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.905 [2024-11-17 09:16:27.713888] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:22.905 09:16:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.905 09:16:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:22.905 09:16:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.905 09:16:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.905 09:16:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.905 09:16:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:22.905 09:16:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.905 09:16:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.905 09:16:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.905 09:16:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:23.472 09:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:23.472 09:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:23.472 09:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:23.472 09:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:23.472 09:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:25.369 09:16:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:25.369 09:16:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:25.369 09:16:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:25.369 09:16:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:25.369 09:16:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:25.369 09:16:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:25.369 09:16:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:25.628 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:25.628 09:16:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:25.628 09:16:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:25.628 09:16:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:25.628 09:16:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:25.628 09:16:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:25.628 09:16:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:25.628 09:16:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:25.628 09:16:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:25.628 09:16:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.628 09:16:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:25.628 09:16:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.628 09:16:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:25.628 09:16:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.628 09:16:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:25.628 09:16:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.628 09:16:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:25.628 09:16:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:25.628 09:16:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.628 09:16:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:25.628 09:16:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.628 09:16:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:25.628 09:16:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.628 09:16:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:25.628 [2024-11-17 09:16:30.569800] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:25.628 09:16:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.628 09:16:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:25.628 09:16:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.628 09:16:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:25.628 09:16:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
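The create/connect/teardown cycle that just completed is one pass of the main rpc.sh loop, and the remainder of the trace repeats it for each of the five iterations: build cnode1 with the fixed serial, listen on 10.0.0.2:4420, attach Malloc1 as namespace 5, open it to any host, connect from the initiator, verify the device by serial, disconnect, then remove the namespace and delete the subsystem. Condensed into a stand-alone sketch with the same rpc.py path and names as this run:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  SUB=nqn.2016-06.io.spdk:cnode1
  for i in $(seq 1 5); do
      $RPC nvmf_create_subsystem "$SUB" -s SPDKISFASTANDAWESOME
      $RPC nvmf_subsystem_add_listener "$SUB" -t tcp -a 10.0.0.2 -s 4420
      $RPC nvmf_subsystem_add_ns "$SUB" Malloc1 -n 5
      $RPC nvmf_subsystem_allow_any_host "$SUB"
      nvme connect -t tcp -a 10.0.0.2 -s 4420 -n "$SUB"   # initiator side
      # ... serial check / I/O would run here ...
      nvme disconnect -n "$SUB"
      $RPC nvmf_subsystem_remove_ns "$SUB" 5
      $RPC nvmf_delete_subsystem "$SUB"
  done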
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.628 09:16:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:25.628 09:16:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.628 09:16:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:25.628 09:16:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.628 09:16:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:26.194 09:16:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:26.195 09:16:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:26.195 09:16:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:26.195 09:16:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:26.195 09:16:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:28.722 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:28.722 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:28.722 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:28.722 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:28.722 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:28.722 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:28.722 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:28.722 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:28.722 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:28.722 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:28.722 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:28.722 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:28.722 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:28.722 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:28.722 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:28.722 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:28.722 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.722 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:28.722 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.722 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:28.722 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.722 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:28.722 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.722 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:28.722 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:28.722 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.722 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:28.722 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.722 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:28.723 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.723 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:28.723 [2024-11-17 09:16:33.428441] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:28.723 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.723 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:28.723 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.723 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:28.723 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.723 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:28.723 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.723 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:28.723 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.723 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:29.289 09:16:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:29.289 09:16:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:29.289 09:16:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:29.289 09:16:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:29.289 09:16:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:31.187 
09:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:31.187 09:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:31.187 09:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:31.187 09:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:31.187 09:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:31.187 09:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:31.187 09:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:31.187 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:31.187 09:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:31.187 09:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:31.187 09:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:31.187 09:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:31.445 09:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:31.445 09:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:31.445 09:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:31.445 09:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:31.445 09:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.445 09:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:31.445 09:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.445 09:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:31.445 09:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.445 09:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:31.445 09:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.445 09:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:31.445 09:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:31.445 09:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.445 09:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:31.445 09:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.445 09:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:31.445 09:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:31.445 09:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:31.445 [2024-11-17 09:16:36.250240] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:31.445 09:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.445 09:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:31.445 09:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.445 09:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:31.445 09:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.445 09:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:31.445 09:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.445 09:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:31.445 09:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.445 09:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:32.010 09:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:32.010 09:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:32.011 09:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:32.011 09:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:32.011 09:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:33.908 09:16:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:33.908 09:16:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:33.908 09:16:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:33.908 09:16:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:33.908 09:16:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:33.908 09:16:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:33.908 09:16:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:34.166 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:34.166 09:16:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:34.166 09:16:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:34.166 09:16:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:34.166 09:16:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 
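Each iteration traced in this loop exercises the same subsystem lifecycle over the JSON-RPC interface. Condensed into direct scripts/rpc.py calls rather than the rpc_cmd wrapper (NQN, serial, bdev name, address and nsid mirror the values in the log; --hostnqn/--hostid are omitted for brevity):

  #!/usr/bin/env bash
  # One create/connect/teardown cycle against a running nvmf_tgt,
  # mirroring target/rpc.sh lines 81-94 as traced above.
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode1

  $RPC nvmf_create_subsystem "$NQN" -s SPDKISFASTANDAWESOME
  $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_ns "$NQN" Malloc1 -n 5        # attach bdev as nsid 5
  $RPC nvmf_subsystem_allow_any_host "$NQN"

  nvme connect -t tcp -n "$NQN" -a 10.0.0.2 -s 4420     # host side
  # ... serial check / I/O would go here ...
  nvme disconnect -n "$NQN"

  $RPC nvmf_subsystem_remove_ns "$NQN" 5
  $RPC nvmf_delete_subsystem "$NQN"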
00:16:34.166 09:16:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:34.166 09:16:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:34.166 09:16:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:34.166 09:16:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:34.166 09:16:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.166 09:16:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.166 09:16:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.166 09:16:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:34.167 09:16:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.167 09:16:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.167 09:16:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.167 09:16:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:34.167 09:16:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:34.167 09:16:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.167 09:16:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.167 09:16:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.167 09:16:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:34.167 09:16:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.167 09:16:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.167 [2024-11-17 09:16:39.100375] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:34.167 09:16:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.167 09:16:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:34.167 09:16:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.167 09:16:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.167 09:16:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.167 09:16:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:34.167 09:16:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.167 09:16:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.167 09:16:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.167 09:16:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:34.733 09:16:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:34.733 09:16:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:34.733 09:16:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:34.733 09:16:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:34.733 09:16:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:37.262 09:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:37.262 09:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:37.262 09:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:37.262 09:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:37.262 09:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:37.262 09:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:37.262 09:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:37.262 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:37.262 09:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:37.262 09:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:37.262 09:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:37.262 09:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:37.262 09:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:37.262 09:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:37.262 09:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:37.262 09:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:37.262 09:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.262 09:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:37.262 09:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.262 09:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:37.263 09:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.263 09:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:37.263 09:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.263 09:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:16:37.263 
09:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:37.263 09:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:37.263 09:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.263 09:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:37.263 09:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.263 09:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:37.263 09:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.263 09:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:37.263 [2024-11-17 09:16:41.941414] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:37.263 09:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.263 09:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:37.263 09:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.263 09:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:37.263 09:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.263 09:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:37.263 09:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.263 09:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:37.263 09:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.263 09:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:37.263 09:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.263 09:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:37.263 09:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.263 09:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:37.263 09:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.263 09:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:37.263 09:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.263 09:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:37.263 09:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:37.263 09:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.263 09:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:16:37.263 09:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.263 09:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:37.263 09:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.263 09:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:37.263 [2024-11-17 09:16:41.989480] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:37.263 09:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.263 09:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:37.263 09:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.263 09:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:37.263 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.263 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:37.263 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.263 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:37.263 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.263 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:37.263 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.263 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:37.263 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.263 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:37.263 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.263 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:37.263 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.263 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:37.263 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:37.263 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.263 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:37.263 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.263 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:37.263 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.263 
09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:37.263 [2024-11-17 09:16:42.037681] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:37.263 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.263 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:37.263 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.263 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:37.263 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.263 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:37.263 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.263 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:37.263 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.263 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:37.263 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.263 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:37.263 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.263 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:37.263 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.263 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:37.263 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.263 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:37.263 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:37.263 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.263 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:37.263 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.263 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:37.263 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.263 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:37.263 [2024-11-17 09:16:42.085818] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:37.263 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.263 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:37.263 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.263 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:37.263 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.263 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:37.263 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.263 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:37.263 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.263 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:37.263 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.263 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:37.263 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.263 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:37.263 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.263 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:37.263 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.263 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:37.264 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:37.264 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.264 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:37.264 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.264 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:37.264 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.264 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:37.264 [2024-11-17 09:16:42.133980] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:37.264 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.264 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:37.264 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.264 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:37.264 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.264 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:37.264 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.264 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:37.264 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.264 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:37.264 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.264 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:37.264 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.264 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:37.264 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.264 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:37.264 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.264 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:16:37.264 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.264 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:37.264 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.264 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:16:37.264 "tick_rate": 2700000000, 00:16:37.264 "poll_groups": [ 00:16:37.264 { 00:16:37.264 "name": "nvmf_tgt_poll_group_000", 00:16:37.264 "admin_qpairs": 2, 00:16:37.264 "io_qpairs": 84, 00:16:37.264 "current_admin_qpairs": 0, 00:16:37.264 "current_io_qpairs": 0, 00:16:37.264 "pending_bdev_io": 0, 00:16:37.264 "completed_nvme_io": 182, 00:16:37.264 "transports": [ 00:16:37.264 { 00:16:37.264 "trtype": "TCP" 00:16:37.264 } 00:16:37.264 ] 00:16:37.264 }, 00:16:37.264 { 00:16:37.264 "name": "nvmf_tgt_poll_group_001", 00:16:37.264 "admin_qpairs": 2, 00:16:37.264 "io_qpairs": 84, 00:16:37.264 "current_admin_qpairs": 0, 00:16:37.264 "current_io_qpairs": 0, 00:16:37.264 "pending_bdev_io": 0, 00:16:37.264 "completed_nvme_io": 183, 00:16:37.264 "transports": [ 00:16:37.264 { 00:16:37.264 "trtype": "TCP" 00:16:37.264 } 00:16:37.264 ] 00:16:37.264 }, 00:16:37.264 { 00:16:37.264 "name": "nvmf_tgt_poll_group_002", 00:16:37.264 "admin_qpairs": 1, 00:16:37.264 "io_qpairs": 84, 00:16:37.264 "current_admin_qpairs": 0, 00:16:37.264 "current_io_qpairs": 0, 00:16:37.264 "pending_bdev_io": 0, 00:16:37.264 "completed_nvme_io": 185, 00:16:37.264 "transports": [ 00:16:37.264 { 00:16:37.264 "trtype": "TCP" 00:16:37.264 } 00:16:37.264 ] 00:16:37.264 }, 00:16:37.264 { 00:16:37.264 "name": "nvmf_tgt_poll_group_003", 00:16:37.264 "admin_qpairs": 2, 00:16:37.264 "io_qpairs": 84, 00:16:37.264 "current_admin_qpairs": 0, 00:16:37.264 "current_io_qpairs": 0, 00:16:37.264 "pending_bdev_io": 0, 00:16:37.264 "completed_nvme_io": 136, 00:16:37.264 "transports": [ 00:16:37.264 { 00:16:37.264 "trtype": "TCP" 00:16:37.264 } 00:16:37.264 ] 00:16:37.264 } 00:16:37.264 ] 00:16:37.264 }' 00:16:37.264 09:16:42 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:16:37.264 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:16:37.264 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:16:37.264 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:37.264 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:16:37.264 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:16:37.264 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:16:37.264 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:16:37.264 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:37.264 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:16:37.264 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:16:37.264 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:16:37.264 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:16:37.264 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:37.264 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:16:37.264 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:37.264 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:16:37.264 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:37.264 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:37.264 rmmod nvme_tcp 00:16:37.264 rmmod nvme_fabrics 00:16:37.523 rmmod nvme_keyring 00:16:37.523 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:37.523 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:16:37.523 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:16:37.523 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 2940494 ']' 00:16:37.523 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 2940494 00:16:37.523 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 2940494 ']' 00:16:37.523 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 2940494 00:16:37.523 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:16:37.523 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:37.523 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2940494 00:16:37.523 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:37.523 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:37.523 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
2940494' 00:16:37.523 killing process with pid 2940494 00:16:37.523 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 2940494 00:16:37.523 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 2940494 00:16:38.897 09:16:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:38.897 09:16:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:38.897 09:16:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:38.897 09:16:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:16:38.897 09:16:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:16:38.897 09:16:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:38.897 09:16:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:16:38.897 09:16:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:38.897 09:16:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:38.897 09:16:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:38.897 09:16:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:38.898 09:16:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:40.803 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:40.803 00:16:40.803 real 0m27.710s 00:16:40.803 user 1m28.963s 00:16:40.803 sys 0m4.769s 00:16:40.803 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:40.803 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.803 ************************************ 00:16:40.803 END TEST nvmf_rpc 00:16:40.803 ************************************ 00:16:40.803 09:16:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:16:40.803 09:16:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:40.803 09:16:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:40.803 09:16:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:40.803 ************************************ 00:16:40.803 START TEST nvmf_invalid 00:16:40.803 ************************************ 00:16:40.803 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:16:40.803 * Looking for test storage... 
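The nvmftestfini teardown traced just above unwinds only what the test added: every firewall rule inserted through the ipts() wrapper carries an 'SPDK_NVMF:' comment, so filtering those comments out of iptables-save and restoring the result removes exactly those rules, after which the target-side namespace and addresses are cleared. A sketch of that pattern (not the exact common.sh code; the netns delete step is an assumption about what _remove_spdk_ns boils down to):

  # Strip only the SPDK-tagged rules and reload the remainder.
  iptables-save | grep -v SPDK_NVMF | iptables-restore

  # Tear down the target-side namespace; a physical interface parked inside
  # it falls back to the default namespace when the namespace goes away.
  ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true
  ip -4 addr flush cvl_0_1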
00:16:40.803 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:40.803 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:40.803 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:16:40.803 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:41.062 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:41.062 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:41.062 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:41.062 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:41.062 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:16:41.062 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:16:41.062 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:16:41.062 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:16:41.062 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:16:41.062 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:16:41.062 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:16:41.062 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:41.062 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:16:41.062 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:16:41.062 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:41.062 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:41.062 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:16:41.062 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:16:41.062 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:41.062 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:16:41.063 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:16:41.063 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:16:41.063 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:16:41.063 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:41.063 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:16:41.063 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:16:41.063 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:41.063 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:41.063 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:16:41.063 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:41.063 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:41.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:41.063 --rc genhtml_branch_coverage=1 00:16:41.063 --rc genhtml_function_coverage=1 00:16:41.063 --rc genhtml_legend=1 00:16:41.063 --rc geninfo_all_blocks=1 00:16:41.063 --rc geninfo_unexecuted_blocks=1 00:16:41.063 00:16:41.063 ' 00:16:41.063 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:41.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:41.063 --rc genhtml_branch_coverage=1 00:16:41.063 --rc genhtml_function_coverage=1 00:16:41.063 --rc genhtml_legend=1 00:16:41.063 --rc geninfo_all_blocks=1 00:16:41.063 --rc geninfo_unexecuted_blocks=1 00:16:41.063 00:16:41.063 ' 00:16:41.063 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:41.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:41.063 --rc genhtml_branch_coverage=1 00:16:41.063 --rc genhtml_function_coverage=1 00:16:41.063 --rc genhtml_legend=1 00:16:41.063 --rc geninfo_all_blocks=1 00:16:41.063 --rc geninfo_unexecuted_blocks=1 00:16:41.063 00:16:41.063 ' 00:16:41.063 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:41.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:41.063 --rc genhtml_branch_coverage=1 00:16:41.063 --rc genhtml_function_coverage=1 00:16:41.063 --rc genhtml_legend=1 00:16:41.063 --rc geninfo_all_blocks=1 00:16:41.063 --rc geninfo_unexecuted_blocks=1 00:16:41.063 00:16:41.063 ' 00:16:41.063 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:41.063 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:16:41.063 09:16:45 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:41.063 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:41.063 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:41.063 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:41.063 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:41.063 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:41.063 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:41.063 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:41.063 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:41.063 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:41.063 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:41.063 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:41.063 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:41.063 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:41.063 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:41.063 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:41.063 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:41.063 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:16:41.063 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:41.063 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:41.063 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:41.063 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.063 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.063 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.063 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:16:41.063 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.063 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:16:41.063 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:41.063 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:41.063 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:41.063 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:41.063 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:41.063 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:41.063 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:41.063 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:41.063 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:41.063 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:41.063 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:16:41.063 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:41.063 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:16:41.063 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:16:41.063 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:16:41.063 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:16:41.063 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:41.063 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:41.063 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:41.063 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:41.063 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:41.063 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:41.063 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:41.063 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:41.063 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:41.063 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:41.063 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:16:41.063 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:43.594 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:43.594 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:16:43.594 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:43.594 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:43.594 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:43.594 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:43.594 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:43.594 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:16:43.594 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:43.594 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:16:43.594 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:16:43.594 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:16:43.594 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:16:43.594 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:16:43.594 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:16:43.594 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:43.594 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:43.594 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:43.594 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:43.594 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:43.594 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:43.594 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:43.594 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:43.594 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:43.594 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:43.594 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:43.594 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:43.594 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:43.594 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:43.594 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:43.594 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:43.594 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:43.594 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:43.594 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:43.594 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:43.594 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:43.594 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:43.594 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:43.594 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:43.594 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:43.594 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:43.594 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:43.594 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:43.594 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:43.594 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:43.594 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:16:43.594 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:43.594 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:43.594 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:43.594 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:43.594 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:43.594 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:43.594 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:43.594 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:43.594 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:43.594 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:43.594 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:43.594 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:43.594 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:43.594 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:43.594 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:43.595 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:43.595 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:43.595 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:43.595 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:43.595 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:43.595 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:43.595 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:43.595 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:43.595 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:43.595 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:43.595 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:43.595 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:43.595 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:16:43.595 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:43.595 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:43.595 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:43.595 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:43.595 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:43.595 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:43.595 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:43.595 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:43.595 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:43.595 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:43.595 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:43.595 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:43.595 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:43.595 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:43.595 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:43.595 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:43.595 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:43.595 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:43.595 09:16:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:43.595 09:16:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:43.595 09:16:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:43.595 09:16:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:43.595 09:16:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:43.595 09:16:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:43.595 09:16:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:43.595 09:16:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:43.595 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:43.595 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.211 ms 00:16:43.595 00:16:43.595 --- 10.0.0.2 ping statistics --- 00:16:43.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:43.595 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:16:43.595 09:16:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:43.595 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
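The nvmf_tcp_init sequence traced above builds the test bed for the TCP transport: the two E810 ports found under /sys/bus/pci/devices/<bdf>/net become cvl_0_0 (target side) and cvl_0_1 (initiator side), the target port is moved into its own network namespace, both ends get a /24 address, an iptables rule opens the NVMe/TCP listener port, and the ping replies that follow confirm the path in both directions. A minimal sketch of the same setup, using the interface names and addresses from this run:

  # Target port lives in its own namespace; the initiator port stays in the root namespace.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk

  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator / host side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side

  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # Accept NVMe/TCP traffic on port 4420 (the harness also tags the rule with an SPDK_NVMF comment).
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

  # Connectivity check in both directions before any NVMe-oF traffic is attempted.
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1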
00:16:43.595 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:16:43.595 00:16:43.595 --- 10.0.0.1 ping statistics --- 00:16:43.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:43.595 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:16:43.595 09:16:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:43.595 09:16:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:16:43.595 09:16:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:43.595 09:16:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:43.595 09:16:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:43.595 09:16:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:43.595 09:16:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:43.595 09:16:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:43.595 09:16:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:43.595 09:16:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:16:43.595 09:16:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:43.595 09:16:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:43.595 09:16:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:43.595 09:16:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=2945239 00:16:43.595 09:16:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:43.595 09:16:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 2945239 00:16:43.595 09:16:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 2945239 ']' 00:16:43.595 09:16:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:43.595 09:16:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:43.595 09:16:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:43.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:43.595 09:16:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:43.595 09:16:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:43.595 [2024-11-17 09:16:48.234439] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
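With connectivity verified, nvmfappstart launches nvmf_tgt inside the target namespace on core mask 0xF and blocks until its JSON-RPC socket answers; the DPDK/EAL start-up notices that continue below come from that process. A rough sketch of the step, with a simple polling loop standing in for the suite's waitforlisten helper:

  # Start the target in the namespace on 4 cores and remember its pid for teardown.
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!

  # Stand-in for waitforlisten: poll the default RPC socket until the target responds.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  until $rpc -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done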
00:16:43.595 [2024-11-17 09:16:48.234571] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:43.595 [2024-11-17 09:16:48.381304] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:43.595 [2024-11-17 09:16:48.505407] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:43.595 [2024-11-17 09:16:48.505501] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:43.595 [2024-11-17 09:16:48.505524] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:43.595 [2024-11-17 09:16:48.505546] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:43.595 [2024-11-17 09:16:48.505562] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:43.595 [2024-11-17 09:16:48.508286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:43.595 [2024-11-17 09:16:48.508338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:43.595 [2024-11-17 09:16:48.508404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:43.595 [2024-11-17 09:16:48.508408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:44.529 09:16:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:44.529 09:16:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:16:44.529 09:16:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:44.529 09:16:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:44.529 09:16:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:44.529 09:16:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:44.529 09:16:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:44.529 09:16:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode14324 00:16:44.529 [2024-11-17 09:16:49.540112] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:16:44.786 09:16:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:16:44.786 { 00:16:44.786 "nqn": "nqn.2016-06.io.spdk:cnode14324", 00:16:44.786 "tgt_name": "foobar", 00:16:44.786 "method": "nvmf_create_subsystem", 00:16:44.786 "req_id": 1 00:16:44.786 } 00:16:44.786 Got JSON-RPC error response 00:16:44.786 response: 00:16:44.786 { 00:16:44.786 "code": -32603, 00:16:44.786 "message": "Unable to find target foobar" 00:16:44.786 }' 00:16:44.786 09:16:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:16:44.786 { 00:16:44.786 "nqn": "nqn.2016-06.io.spdk:cnode14324", 00:16:44.786 "tgt_name": "foobar", 00:16:44.786 "method": "nvmf_create_subsystem", 00:16:44.786 "req_id": 1 00:16:44.786 } 00:16:44.786 Got JSON-RPC error response 00:16:44.786 
response: 00:16:44.786 { 00:16:44.786 "code": -32603, 00:16:44.786 "message": "Unable to find target foobar" 00:16:44.786 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:16:44.786 09:16:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:16:44.786 09:16:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode21447 00:16:45.044 [2024-11-17 09:16:49.845173] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21447: invalid serial number 'SPDKISFASTANDAWESOME' 00:16:45.044 09:16:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:16:45.044 { 00:16:45.044 "nqn": "nqn.2016-06.io.spdk:cnode21447", 00:16:45.044 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:16:45.044 "method": "nvmf_create_subsystem", 00:16:45.044 "req_id": 1 00:16:45.044 } 00:16:45.044 Got JSON-RPC error response 00:16:45.044 response: 00:16:45.044 { 00:16:45.044 "code": -32602, 00:16:45.044 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:16:45.044 }' 00:16:45.044 09:16:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:16:45.044 { 00:16:45.044 "nqn": "nqn.2016-06.io.spdk:cnode21447", 00:16:45.044 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:16:45.044 "method": "nvmf_create_subsystem", 00:16:45.044 "req_id": 1 00:16:45.044 } 00:16:45.044 Got JSON-RPC error response 00:16:45.044 response: 00:16:45.044 { 00:16:45.044 "code": -32602, 00:16:45.044 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:16:45.044 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:16:45.044 09:16:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:16:45.044 09:16:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode14353 00:16:45.302 [2024-11-17 09:16:50.166333] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14353: invalid model number 'SPDK_Controller' 00:16:45.302 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:16:45.302 { 00:16:45.302 "nqn": "nqn.2016-06.io.spdk:cnode14353", 00:16:45.302 "model_number": "SPDK_Controller\u001f", 00:16:45.302 "method": "nvmf_create_subsystem", 00:16:45.302 "req_id": 1 00:16:45.302 } 00:16:45.302 Got JSON-RPC error response 00:16:45.302 response: 00:16:45.302 { 00:16:45.302 "code": -32602, 00:16:45.302 "message": "Invalid MN SPDK_Controller\u001f" 00:16:45.302 }' 00:16:45.302 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:16:45.302 { 00:16:45.302 "nqn": "nqn.2016-06.io.spdk:cnode14353", 00:16:45.302 "model_number": "SPDK_Controller\u001f", 00:16:45.302 "method": "nvmf_create_subsystem", 00:16:45.302 "req_id": 1 00:16:45.303 } 00:16:45.303 Got JSON-RPC error response 00:16:45.303 response: 00:16:45.303 { 00:16:45.303 "code": -32602, 00:16:45.303 "message": "Invalid MN SPDK_Controller\u001f" 00:16:45.303 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:16:45.303 09:16:50 
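The three checks above feed nvmf_create_subsystem deliberately bad input: an unknown target name (foobar, rejected with -32603 "Unable to find target") and a serial number and model number that each end in the 0x1f control byte (rejected with -32602 "Invalid SN" / "Invalid MN"). Each step captures the JSON-RPC error text and pattern-matches it; gen_random_s, invoked just above and traced at length below, supplies random 21- and 41-character strings for the next two cases. A condensed sketch of the assertion pattern (the "|| true" is only there to keep the sketch alive under set -e; the real script uses its own wrappers):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # Unknown target name: the RPC layer cannot resolve "foobar".
  out=$($rpc nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode14324 2>&1) || true
  [[ $out == *"Unable to find target"* ]]

  # A serial number containing a non-printable byte (0x1f, octal 037) is rejected ...
  out=$($rpc nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode21447 2>&1) || true
  [[ $out == *"Invalid SN"* ]]

  # ... and so is a model number carrying the same control byte.
  out=$($rpc nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode14353 2>&1) || true
  [[ $out == *"Invalid MN"* ]]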
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:45.303 09:16:50 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:16:45.303 
09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:16:45.303 
09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:16:45.303 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:16:45.304 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:16:45.304 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:45.304 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:45.304 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ N == \- ]] 00:16:45.304 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'N0MGTi[p~kDA%OIOsh'\''n/' 00:16:45.304 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'N0MGTi[p~kDA%OIOsh'\''n/' nqn.2016-06.io.spdk:cnode18424 00:16:45.562 [2024-11-17 09:16:50.523569] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18424: invalid serial number 'N0MGTi[p~kDA%OIOsh'n/' 00:16:45.562 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:16:45.562 { 00:16:45.562 "nqn": "nqn.2016-06.io.spdk:cnode18424", 00:16:45.562 "serial_number": "N0MGTi[p~kDA%OIOsh'\''n/", 00:16:45.562 "method": "nvmf_create_subsystem", 00:16:45.562 "req_id": 1 00:16:45.562 } 00:16:45.562 Got JSON-RPC error response 00:16:45.562 response: 00:16:45.562 { 00:16:45.562 "code": -32602, 00:16:45.562 "message": "Invalid SN N0MGTi[p~kDA%OIOsh'\''n/" 00:16:45.562 }' 00:16:45.562 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:16:45.562 { 00:16:45.562 "nqn": "nqn.2016-06.io.spdk:cnode18424", 00:16:45.562 "serial_number": "N0MGTi[p~kDA%OIOsh'n/", 00:16:45.562 "method": "nvmf_create_subsystem", 00:16:45.562 "req_id": 1 00:16:45.562 } 00:16:45.562 Got JSON-RPC error response 00:16:45.562 response: 00:16:45.562 { 00:16:45.562 "code": -32602, 00:16:45.562 "message": "Invalid SN N0MGTi[p~kDA%OIOsh'n/" 00:16:45.562 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:16:45.562 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:16:45.562 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:16:45.562 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' 
'74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:16:45.562 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:16:45.562 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:16:45.562 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:16:45.562 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:45.562 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:16:45.562 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:16:45.562 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:16:45.562 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:45.562 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:45.562 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:16:45.562 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:16:45.562 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:16:45.562 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:45.562 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:45.562 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:16:45.562 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:16:45.562 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:16:45.562 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:45.562 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:45.562 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:16:45.562 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:16:45.562 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:16:45.562 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:45.562 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:45.562 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:16:45.562 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:16:45.562 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:16:45.562 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:45.562 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:45.562 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:16:45.562 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 
00:16:45.562 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:16:45.562 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:45.562 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:45.562 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:16:45.562 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:16:45.562 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:16:45.562 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:45.562 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:45.562 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:16:45.562 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:16:45.562 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:16:45.562 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:45.562 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:45.562 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:16:45.562 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:16:45.562 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:16:45.562 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:45.562 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:45.822 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:16:45.822 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:16:45.822 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:16:45.822 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:45.822 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:45.822 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:16:45.822 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:16:45.822 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:16:45.822 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:45.822 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:45.822 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:16:45.822 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:16:45.822 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:16:45.822 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:45.822 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:45.822 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 
00:16:45.822 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:16:45.822 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:16:45.822 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:45.822 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:45.822 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:16:45.822 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:16:45.822 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:16:45.822 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:45.822 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:45.822 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:16:45.822 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:16:45.822 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:16:45.822 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:45.822 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:45.822 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:16:45.822 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:16:45.822 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:16:45.822 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:45.822 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:45.822 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:16:45.822 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:16:45.822 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:16:45.822 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:45.822 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:45.822 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:16:45.822 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:16:45.822 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:16:45.822 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:45.822 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:45.822 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:16:45.822 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:16:45.822 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:16:45.822 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:45.822 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
00:16:45.822 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:16:45.822 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:16:45.822 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:16:45.822 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:45.822 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:45.822 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:16:45.822 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:16:45.822 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:16:45.822 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:45.822 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:45.822 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:16:45.822 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:16:45.822 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:16:45.822 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:45.822 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:45.822 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:16:45.822 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:16:45.822 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:16:45.822 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:45.822 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:45.822 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:16:45.822 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:16:45.822 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
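The long run of printf %x / echo -e / string+= traces around this point is gen_random_s building the random model number one character at a time from the chars array (ASCII codes 32 through 127), which is why the generated strings contain quoting-hostile characters such as ', $, ; and DEL; the trace continues below until all 41 characters are assembled. A compact equivalent of the generator, assuming bash's RANDOM as the entropy source:

  gen_random_s() {
      local length=$1 ll string=
      # Printable ASCII plus DEL: codes 32..127, matching the chars array in the trace.
      local chars=($(seq 32 127))
      for ((ll = 0; ll < length; ll++)); do
          # printf %x renders the code in hex; echo -e expands the \xNN escape into the character.
          string+=$(echo -e "\x$(printf %x "${chars[RANDOM % ${#chars[@]}]}")")
      done
      echo "$string"
  }

  gen_random_s 21    # e.g. a 21-character serial-number candidate
  gen_random_s 41    # e.g. a 41-character model-number candidate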
00:16:45.822 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:45.822 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:45.822 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:16:45.822 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:16:45.822 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:16:45.822 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:45.822 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:45.822 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:16:45.822 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:16:45.822 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:16:45.822 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:45.822 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:45.822 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:16:45.822 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:16:45.822 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:16:45.822 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:45.822 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:45.822 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:16:45.822 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:16:45.822 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:16:45.822 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:45.822 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:45.822 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:16:45.822 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:16:45.822 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:16:45.823 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:45.823 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:45.823 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:16:45.823 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:16:45.823 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:16:45.823 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:45.823 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:45.823 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:16:45.823 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x52' 00:16:45.823 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:16:45.823 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:45.823 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:45.823 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:16:45.823 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:16:45.823 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:16:45.823 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:45.823 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:45.823 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:16:45.823 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:16:45.823 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:16:45.823 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:45.823 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:45.823 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:16:45.823 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:16:45.823 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:16:45.823 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:45.823 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:45.823 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:16:45.823 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:16:45.823 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:16:45.823 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:45.823 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:45.823 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:16:45.823 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:16:45.823 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:16:45.823 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:45.823 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:45.823 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:16:45.823 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:16:45.823 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:16:45.823 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:45.823 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:45.823 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf 
%x 90 00:16:45.823 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:16:45.823 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:16:45.823 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:45.823 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:45.823 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:16:45.823 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:16:45.823 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:16:45.823 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:45.823 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:45.823 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:16:45.823 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:16:45.823 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:16:45.823 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:45.823 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:45.823 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:16:45.823 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:16:45.823 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:16:45.823 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:45.823 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:45.823 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[  == \- ]] 00:16:45.823 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '_PNByzT^UAzGKG0379j]$a!HoUvd>Ra'\''z;iqZzqD' 00:16:45.823 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '_PNByzT^UAzGKG0379j]$a!HoUvd>Ra'\''z;iqZzqD' nqn.2016-06.io.spdk:cnode11438 00:16:46.081 [2024-11-17 09:16:50.908839] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11438: invalid model number '_PNByzT^UAzGKG0379j]$a!HoUvd>Ra'z;iqZzqD' 00:16:46.081 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:16:46.081 { 00:16:46.081 "nqn": "nqn.2016-06.io.spdk:cnode11438", 00:16:46.081 "model_number": "\u007f_PNByzT^UAzGKG0379j]$a!HoUvd>Ra'\''z;iqZzqD", 00:16:46.081 "method": "nvmf_create_subsystem", 00:16:46.081 "req_id": 1 00:16:46.081 } 00:16:46.081 Got JSON-RPC error response 00:16:46.081 response: 00:16:46.081 { 00:16:46.081 "code": -32602, 00:16:46.081 "message": "Invalid MN \u007f_PNByzT^UAzGKG0379j]$a!HoUvd>Ra'\''z;iqZzqD" 00:16:46.081 }' 00:16:46.081 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:16:46.081 { 00:16:46.081 "nqn": "nqn.2016-06.io.spdk:cnode11438", 00:16:46.081 "model_number": "\u007f_PNByzT^UAzGKG0379j]$a!HoUvd>Ra'z;iqZzqD", 00:16:46.081 "method": "nvmf_create_subsystem", 
00:16:46.081 "req_id": 1 00:16:46.081 } 00:16:46.081 Got JSON-RPC error response 00:16:46.081 response: 00:16:46.081 { 00:16:46.081 "code": -32602, 00:16:46.081 "message": "Invalid MN \u007f_PNByzT^UAzGKG0379j]$a!HoUvd>Ra'z;iqZzqD" 00:16:46.081 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:16:46.081 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:16:46.340 [2024-11-17 09:16:51.173877] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:46.340 09:16:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:16:46.598 09:16:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:16:46.598 09:16:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:16:46.598 09:16:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:16:46.598 09:16:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:16:46.598 09:16:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:16:46.855 [2024-11-17 09:16:51.737251] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:16:46.855 09:16:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:16:46.855 { 00:16:46.855 "nqn": "nqn.2016-06.io.spdk:cnode", 00:16:46.855 "listen_address": { 00:16:46.855 "trtype": "tcp", 00:16:46.855 "traddr": "", 00:16:46.855 "trsvcid": "4421" 00:16:46.855 }, 00:16:46.855 "method": "nvmf_subsystem_remove_listener", 00:16:46.855 "req_id": 1 00:16:46.855 } 00:16:46.855 Got JSON-RPC error response 00:16:46.855 response: 00:16:46.855 { 00:16:46.855 "code": -32602, 00:16:46.855 "message": "Invalid parameters" 00:16:46.855 }' 00:16:46.856 09:16:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:16:46.856 { 00:16:46.856 "nqn": "nqn.2016-06.io.spdk:cnode", 00:16:46.856 "listen_address": { 00:16:46.856 "trtype": "tcp", 00:16:46.856 "traddr": "", 00:16:46.856 "trsvcid": "4421" 00:16:46.856 }, 00:16:46.856 "method": "nvmf_subsystem_remove_listener", 00:16:46.856 "req_id": 1 00:16:46.856 } 00:16:46.856 Got JSON-RPC error response 00:16:46.856 response: 00:16:46.856 { 00:16:46.856 "code": -32602, 00:16:46.856 "message": "Invalid parameters" 00:16:46.856 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:16:46.856 09:16:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10535 -i 0 00:16:47.112 [2024-11-17 09:16:52.014141] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10535: invalid cntlid range [0-65519] 00:16:47.112 09:16:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:16:47.112 { 00:16:47.112 "nqn": "nqn.2016-06.io.spdk:cnode10535", 00:16:47.112 "min_cntlid": 0, 00:16:47.112 "method": "nvmf_create_subsystem", 00:16:47.112 "req_id": 1 00:16:47.112 } 00:16:47.112 Got JSON-RPC error response 00:16:47.112 response: 00:16:47.112 { 00:16:47.112 "code": -32602, 00:16:47.112 
"message": "Invalid cntlid range [0-65519]" 00:16:47.112 }' 00:16:47.112 09:16:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:16:47.112 { 00:16:47.112 "nqn": "nqn.2016-06.io.spdk:cnode10535", 00:16:47.112 "min_cntlid": 0, 00:16:47.112 "method": "nvmf_create_subsystem", 00:16:47.112 "req_id": 1 00:16:47.112 } 00:16:47.112 Got JSON-RPC error response 00:16:47.112 response: 00:16:47.112 { 00:16:47.112 "code": -32602, 00:16:47.112 "message": "Invalid cntlid range [0-65519]" 00:16:47.112 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:47.112 09:16:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9882 -i 65520 00:16:47.370 [2024-11-17 09:16:52.291072] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9882: invalid cntlid range [65520-65519] 00:16:47.370 09:16:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:16:47.370 { 00:16:47.370 "nqn": "nqn.2016-06.io.spdk:cnode9882", 00:16:47.370 "min_cntlid": 65520, 00:16:47.370 "method": "nvmf_create_subsystem", 00:16:47.370 "req_id": 1 00:16:47.370 } 00:16:47.370 Got JSON-RPC error response 00:16:47.370 response: 00:16:47.370 { 00:16:47.370 "code": -32602, 00:16:47.370 "message": "Invalid cntlid range [65520-65519]" 00:16:47.370 }' 00:16:47.370 09:16:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:16:47.370 { 00:16:47.370 "nqn": "nqn.2016-06.io.spdk:cnode9882", 00:16:47.370 "min_cntlid": 65520, 00:16:47.370 "method": "nvmf_create_subsystem", 00:16:47.370 "req_id": 1 00:16:47.370 } 00:16:47.370 Got JSON-RPC error response 00:16:47.370 response: 00:16:47.370 { 00:16:47.370 "code": -32602, 00:16:47.370 "message": "Invalid cntlid range [65520-65519]" 00:16:47.370 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:47.370 09:16:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1242 -I 0 00:16:47.627 [2024-11-17 09:16:52.560034] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1242: invalid cntlid range [1-0] 00:16:47.627 09:16:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:16:47.627 { 00:16:47.627 "nqn": "nqn.2016-06.io.spdk:cnode1242", 00:16:47.628 "max_cntlid": 0, 00:16:47.628 "method": "nvmf_create_subsystem", 00:16:47.628 "req_id": 1 00:16:47.628 } 00:16:47.628 Got JSON-RPC error response 00:16:47.628 response: 00:16:47.628 { 00:16:47.628 "code": -32602, 00:16:47.628 "message": "Invalid cntlid range [1-0]" 00:16:47.628 }' 00:16:47.628 09:16:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:16:47.628 { 00:16:47.628 "nqn": "nqn.2016-06.io.spdk:cnode1242", 00:16:47.628 "max_cntlid": 0, 00:16:47.628 "method": "nvmf_create_subsystem", 00:16:47.628 "req_id": 1 00:16:47.628 } 00:16:47.628 Got JSON-RPC error response 00:16:47.628 response: 00:16:47.628 { 00:16:47.628 "code": -32602, 00:16:47.628 "message": "Invalid cntlid range [1-0]" 00:16:47.628 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:47.628 09:16:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9584 -I 65520 
00:16:47.886 [2024-11-17 09:16:52.845071] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9584: invalid cntlid range [1-65520] 00:16:47.886 09:16:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:16:47.886 { 00:16:47.886 "nqn": "nqn.2016-06.io.spdk:cnode9584", 00:16:47.886 "max_cntlid": 65520, 00:16:47.886 "method": "nvmf_create_subsystem", 00:16:47.886 "req_id": 1 00:16:47.886 } 00:16:47.886 Got JSON-RPC error response 00:16:47.886 response: 00:16:47.886 { 00:16:47.886 "code": -32602, 00:16:47.886 "message": "Invalid cntlid range [1-65520]" 00:16:47.886 }' 00:16:47.886 09:16:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:16:47.886 { 00:16:47.886 "nqn": "nqn.2016-06.io.spdk:cnode9584", 00:16:47.886 "max_cntlid": 65520, 00:16:47.886 "method": "nvmf_create_subsystem", 00:16:47.886 "req_id": 1 00:16:47.886 } 00:16:47.886 Got JSON-RPC error response 00:16:47.886 response: 00:16:47.886 { 00:16:47.886 "code": -32602, 00:16:47.886 "message": "Invalid cntlid range [1-65520]" 00:16:47.886 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:47.886 09:16:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode30566 -i 6 -I 5 00:16:48.144 [2024-11-17 09:16:53.113989] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30566: invalid cntlid range [6-5] 00:16:48.144 09:16:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:16:48.144 { 00:16:48.144 "nqn": "nqn.2016-06.io.spdk:cnode30566", 00:16:48.144 "min_cntlid": 6, 00:16:48.144 "max_cntlid": 5, 00:16:48.144 "method": "nvmf_create_subsystem", 00:16:48.144 "req_id": 1 00:16:48.144 } 00:16:48.144 Got JSON-RPC error response 00:16:48.144 response: 00:16:48.144 { 00:16:48.144 "code": -32602, 00:16:48.144 "message": "Invalid cntlid range [6-5]" 00:16:48.144 }' 00:16:48.144 09:16:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:16:48.144 { 00:16:48.144 "nqn": "nqn.2016-06.io.spdk:cnode30566", 00:16:48.144 "min_cntlid": 6, 00:16:48.144 "max_cntlid": 5, 00:16:48.144 "method": "nvmf_create_subsystem", 00:16:48.144 "req_id": 1 00:16:48.144 } 00:16:48.144 Got JSON-RPC error response 00:16:48.144 response: 00:16:48.144 { 00:16:48.144 "code": -32602, 00:16:48.144 "message": "Invalid cntlid range [6-5]" 00:16:48.144 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:48.144 09:16:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:16:48.402 09:16:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:16:48.402 { 00:16:48.402 "name": "foobar", 00:16:48.402 "method": "nvmf_delete_target", 00:16:48.402 "req_id": 1 00:16:48.402 } 00:16:48.402 Got JSON-RPC error response 00:16:48.402 response: 00:16:48.402 { 00:16:48.402 "code": -32602, 00:16:48.402 "message": "The specified target doesn'\''t exist, cannot delete it." 
00:16:48.402 }' 00:16:48.402 09:16:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:16:48.402 { 00:16:48.402 "name": "foobar", 00:16:48.402 "method": "nvmf_delete_target", 00:16:48.402 "req_id": 1 00:16:48.402 } 00:16:48.402 Got JSON-RPC error response 00:16:48.402 response: 00:16:48.402 { 00:16:48.402 "code": -32602, 00:16:48.402 "message": "The specified target doesn't exist, cannot delete it." 00:16:48.402 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:16:48.402 09:16:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:16:48.402 09:16:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:16:48.402 09:16:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:48.402 09:16:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:16:48.402 09:16:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:48.402 09:16:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:16:48.402 09:16:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:48.402 09:16:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:48.402 rmmod nvme_tcp 00:16:48.402 rmmod nvme_fabrics 00:16:48.402 rmmod nvme_keyring 00:16:48.402 09:16:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:48.402 09:16:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:16:48.402 09:16:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:16:48.402 09:16:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 2945239 ']' 00:16:48.402 09:16:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 2945239 00:16:48.402 09:16:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 2945239 ']' 00:16:48.402 09:16:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 2945239 00:16:48.402 09:16:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:16:48.402 09:16:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:48.402 09:16:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2945239 00:16:48.402 09:16:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:48.402 09:16:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:48.402 09:16:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2945239' 00:16:48.402 killing process with pid 2945239 00:16:48.402 09:16:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 2945239 00:16:48.402 09:16:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 2945239 00:16:49.774 09:16:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:49.774 09:16:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:49.774 09:16:54 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:49.774 09:16:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:16:49.774 09:16:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:16:49.774 09:16:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:49.774 09:16:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:16:49.774 09:16:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:49.774 09:16:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:49.774 09:16:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:49.774 09:16:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:49.774 09:16:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:51.726 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:51.726 00:16:51.726 real 0m10.765s 00:16:51.726 user 0m27.262s 00:16:51.726 sys 0m2.745s 00:16:51.726 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:51.726 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:51.726 ************************************ 00:16:51.726 END TEST nvmf_invalid 00:16:51.726 ************************************ 00:16:51.726 09:16:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:16:51.726 09:16:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:51.726 09:16:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:51.726 09:16:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:51.726 ************************************ 00:16:51.726 START TEST nvmf_connect_stress 00:16:51.726 ************************************ 00:16:51.726 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:16:51.726 * Looking for test storage... 
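The tail end of the nvmf_invalid run above is the standard teardown path: the host-side nvme-tcp and nvme-fabrics modules are unloaded (pulling nvme_keyring out with them), the nvmf_tgt process under test (pid 2945239 here) is killed and waited on, the SPDK-tagged iptables rules are stripped, and the cvl_0_* interfaces are flushed before run_test prints the timing summary and launches connect_stress.sh. A simplified sketch of that cleanup, with illustrative variable names rather than the exact test/nvmf/common.sh code:

  # Roughly what nvmftestfini does after a TCP test run (simplified, not the real common.sh).
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics                           # host kernel modules loaded for the test
  kill "$nvmfpid" && wait "$nvmfpid"                    # stop the nvmf_tgt app under test
  iptables-save | grep -v SPDK_NVMF | iptables-restore  # remove the SPDK-tagged ACCEPT rules
  ip -4 addr flush cvl_0_1                              # clear the initiator-side interface
  ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true   # assumed: drop the target namespace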
00:16:51.726 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:51.726 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:51.726 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:16:51.726 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:51.726 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:51.726 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:51.726 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:51.726 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:51.726 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:16:51.726 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:16:51.726 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:16:51.726 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:16:51.726 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:16:51.726 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:16:51.726 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:16:51.726 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:51.726 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:16:51.726 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:16:51.726 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:51.726 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:51.726 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:16:51.726 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:16:51.726 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:51.726 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:16:51.726 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:16:51.726 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:16:51.726 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:16:51.726 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:51.726 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:16:51.726 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:16:51.726 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:51.726 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:51.726 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:16:51.726 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:51.726 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:51.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:51.726 --rc genhtml_branch_coverage=1 00:16:51.726 --rc genhtml_function_coverage=1 00:16:51.726 --rc genhtml_legend=1 00:16:51.726 --rc geninfo_all_blocks=1 00:16:51.726 --rc geninfo_unexecuted_blocks=1 00:16:51.726 00:16:51.726 ' 00:16:51.726 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:51.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:51.727 --rc genhtml_branch_coverage=1 00:16:51.727 --rc genhtml_function_coverage=1 00:16:51.727 --rc genhtml_legend=1 00:16:51.727 --rc geninfo_all_blocks=1 00:16:51.727 --rc geninfo_unexecuted_blocks=1 00:16:51.727 00:16:51.727 ' 00:16:51.727 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:51.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:51.727 --rc genhtml_branch_coverage=1 00:16:51.727 --rc genhtml_function_coverage=1 00:16:51.727 --rc genhtml_legend=1 00:16:51.727 --rc geninfo_all_blocks=1 00:16:51.727 --rc geninfo_unexecuted_blocks=1 00:16:51.727 00:16:51.727 ' 00:16:51.727 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:51.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:51.727 --rc genhtml_branch_coverage=1 00:16:51.727 --rc genhtml_function_coverage=1 00:16:51.727 --rc genhtml_legend=1 00:16:51.727 --rc geninfo_all_blocks=1 00:16:51.727 --rc geninfo_unexecuted_blocks=1 00:16:51.727 00:16:51.727 ' 00:16:51.727 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:51.727 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:16:51.727 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:51.727 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:51.727 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:51.727 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:51.727 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:51.727 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:51.727 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:51.727 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:51.727 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:51.727 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:51.727 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:51.727 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:51.727 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:51.727 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:51.727 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:51.727 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:51.727 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:51.727 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:16:51.727 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:51.727 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:51.727 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:51.727 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.727 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.727 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.727 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:16:51.727 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.727 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:16:51.727 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:51.727 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:51.727 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:51.727 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:51.727 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:51.727 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:16:51.727 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:51.727 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:51.727 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:51.727 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:51.727 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:16:51.727 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:51.727 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:51.727 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:51.727 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:51.727 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:51.727 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:51.727 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:51.727 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:51.727 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:51.727 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:51.727 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:16:51.727 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:54.259 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:54.259 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:16:54.259 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:54.259 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:54.259 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:54.259 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:54.259 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:54.259 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:16:54.259 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:54.259 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:16:54.259 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:16:54.259 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:16:54.259 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:16:54.259 09:16:58 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:16:54.259 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:16:54.259 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:54.259 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:54.260 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:54.260 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:54.260 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:54.260 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:54.260 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:54.260 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:54.260 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:54.260 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:54.260 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:54.260 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:54.260 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:54.260 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:54.260 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:54.260 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:54.260 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:54.260 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:54.260 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:54.260 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:54.260 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:54.260 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:54.260 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:54.260 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:54.260 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:54.260 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:54.260 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:54.260 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:54.260 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:54.260 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:54.260 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:54.260 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:54.260 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:54.260 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:54.260 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:54.260 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:54.260 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:54.260 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:54.260 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:54.260 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:54.260 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:54.260 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:54.260 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:54.260 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:54.260 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:54.260 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:54.260 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:54.260 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:54.260 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:54.260 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:54.260 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:54.260 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:54.260 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:54.260 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:54.260 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:54.260 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:54.260 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:16:54.260 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:54.260 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:16:54.260 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:54.260 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:54.260 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:54.260 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:54.260 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:54.260 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:54.260 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:54.260 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:54.260 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:54.260 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:54.260 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:54.260 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:54.260 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:54.260 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:54.260 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:54.260 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:54.260 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:54.260 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:54.260 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:54.260 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:54.260 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:54.260 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:54.260 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:54.260 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:54.260 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:54.260 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:54.260 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:54.260 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.239 ms 00:16:54.260 00:16:54.260 --- 10.0.0.2 ping statistics --- 00:16:54.260 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:54.260 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:16:54.260 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:54.260 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:54.260 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.082 ms 00:16:54.260 00:16:54.260 --- 10.0.0.1 ping statistics --- 00:16:54.260 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:54.260 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:16:54.260 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:54.260 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:16:54.260 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:54.260 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:54.260 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:54.260 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:54.260 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:54.260 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:54.260 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:54.260 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:16:54.260 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:54.260 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:54.261 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:54.261 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=2948146 00:16:54.261 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:54.261 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 2948146 00:16:54.261 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 2948146 ']' 00:16:54.261 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:54.261 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:54.261 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:16:54.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:54.261 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:54.261 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:54.261 [2024-11-17 09:16:59.037329] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:16:54.261 [2024-11-17 09:16:59.037511] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:54.261 [2024-11-17 09:16:59.211849] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:54.519 [2024-11-17 09:16:59.356892] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:54.519 [2024-11-17 09:16:59.356983] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:54.519 [2024-11-17 09:16:59.357009] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:54.519 [2024-11-17 09:16:59.357034] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:54.519 [2024-11-17 09:16:59.357055] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:54.519 [2024-11-17 09:16:59.359853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:54.519 [2024-11-17 09:16:59.359908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:54.519 [2024-11-17 09:16:59.359914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:55.084 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:55.084 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:16:55.084 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:55.084 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:55.084 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:55.343 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:55.343 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:55.343 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.343 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:55.343 [2024-11-17 09:17:00.105099] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:55.343 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.343 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:55.343 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 
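With nvmf_tgt up inside the cvl_0_0_ns_spdk namespace and its reactors running, connect_stress.sh provisions the target over the RPC socket and then hammers it with the connect_stress tool: a TCP transport, a subsystem that allows any host with serial SPDK00000000000001, a listener on 10.0.0.2:4420, and a null bdev to serve I/O, after which the tool runs for 10 seconds while the script keeps checking that it is still alive. A condensed sketch of that sequence ($spdk standing in for the workspace checkout; the explicit nvmf_subsystem_add_ns step is an assumption, since the trace batches its namespace RPCs into rpc.txt rather than issuing them inline):

  rpc="$spdk/scripts/rpc.py"
  "$rpc" nvmf_create_transport -t tcp -o -u 8192
  "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  "$rpc" bdev_null_create NULL1 1000 512                         # null bdev backing the namespaces
  "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1  # assumed namespace attach
  "$spdk/test/nvme/connect_stress/connect_stress" -c 0x1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 &
  PERF_PID=$!
  # The repeated 'kill -0 $PERF_PID' lines that follow are this liveness poll:
  # kill -0 sends no signal, it only fails once the stress process has exited.
  while kill -0 "$PERF_PID" 2>/dev/null; do sleep 1; done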
00:16:55.343 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:55.343 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.343 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:55.343 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.343 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:55.343 [2024-11-17 09:17:00.125307] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:55.343 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.343 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:55.343 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.343 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:55.343 NULL1 00:16:55.343 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.343 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2948301 00:16:55.343 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:55.343 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:55.343 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:16:55.343 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:16:55.343 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:55.343 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:55.343 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:55.343 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:55.343 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:55.343 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:55.343 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:55.343 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:55.343 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:55.343 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:55.343 09:17:00 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:55.343 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:55.343 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:55.343 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:55.343 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:55.343 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:55.343 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:55.343 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:55.343 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:55.343 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:55.343 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:55.343 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:55.343 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:55.343 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:55.343 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:55.343 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:55.343 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:55.343 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:55.343 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:55.343 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:55.343 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:55.343 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:55.343 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:55.343 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:55.343 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:55.343 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:55.343 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:55.343 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:55.343 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:55.343 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:55.343 09:17:00 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2948301 00:16:55.343 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:55.343 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.343 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:55.601 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.601 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2948301 00:16:55.601 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:55.601 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.601 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:55.859 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.859 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2948301 00:16:55.859 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:55.859 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.859 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:56.425 09:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.425 09:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2948301 00:16:56.425 09:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:56.425 09:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.425 09:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:56.683 09:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.683 09:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2948301 00:16:56.683 09:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:56.683 09:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.683 09:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:56.941 09:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.941 09:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2948301 00:16:56.941 09:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:56.941 09:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.941 09:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:57.199 09:17:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.199 09:17:02 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2948301 00:16:57.199 09:17:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:57.199 09:17:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.199 09:17:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:57.764 09:17:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.764 09:17:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2948301 00:16:57.764 09:17:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:57.765 09:17:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.765 09:17:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:58.023 09:17:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.023 09:17:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2948301 00:16:58.023 09:17:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:58.023 09:17:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.023 09:17:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:58.281 09:17:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.281 09:17:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2948301 00:16:58.281 09:17:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:58.281 09:17:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.281 09:17:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:58.539 09:17:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.539 09:17:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2948301 00:16:58.539 09:17:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:58.539 09:17:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.539 09:17:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:58.797 09:17:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.797 09:17:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2948301 00:16:58.797 09:17:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:58.797 09:17:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.797 09:17:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:59.363 09:17:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.363 09:17:04 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2948301 00:16:59.363 09:17:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:59.363 09:17:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.363 09:17:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:59.621 09:17:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.621 09:17:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2948301 00:16:59.621 09:17:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:59.621 09:17:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.621 09:17:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:59.879 09:17:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.879 09:17:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2948301 00:16:59.879 09:17:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:59.879 09:17:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.879 09:17:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:00.137 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.137 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2948301 00:17:00.137 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:00.137 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.137 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:00.395 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.395 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2948301 00:17:00.395 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:00.395 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.395 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:00.961 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.961 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2948301 00:17:00.961 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:00.961 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.961 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:01.219 09:17:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.219 09:17:06 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2948301 00:17:01.219 09:17:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:01.219 09:17:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.219 09:17:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:01.477 09:17:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.477 09:17:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2948301 00:17:01.477 09:17:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:01.477 09:17:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.477 09:17:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:01.735 09:17:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.735 09:17:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2948301 00:17:01.735 09:17:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:01.735 09:17:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.735 09:17:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:02.301 09:17:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.301 09:17:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2948301 00:17:02.301 09:17:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:02.301 09:17:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.301 09:17:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:02.559 09:17:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.559 09:17:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2948301 00:17:02.559 09:17:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:02.559 09:17:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.559 09:17:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:02.816 09:17:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.816 09:17:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2948301 00:17:02.816 09:17:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:02.816 09:17:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.816 09:17:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:03.073 09:17:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.073 09:17:08 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2948301 00:17:03.073 09:17:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:03.073 09:17:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.073 09:17:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:03.329 09:17:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.329 09:17:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2948301 00:17:03.329 09:17:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:03.329 09:17:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.329 09:17:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:03.892 09:17:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.892 09:17:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2948301 00:17:03.892 09:17:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:03.892 09:17:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.892 09:17:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:04.149 09:17:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.149 09:17:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2948301 00:17:04.149 09:17:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:04.149 09:17:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.149 09:17:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:04.406 09:17:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.406 09:17:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2948301 00:17:04.406 09:17:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:04.406 09:17:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.406 09:17:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:04.664 09:17:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.664 09:17:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2948301 00:17:04.664 09:17:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:04.664 09:17:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.664 09:17:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:05.228 09:17:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.228 09:17:09 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2948301 00:17:05.228 09:17:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:05.228 09:17:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.228 09:17:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:05.486 09:17:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.486 09:17:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2948301 00:17:05.486 09:17:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:05.486 09:17:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.486 09:17:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:05.486 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:05.745 09:17:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.745 09:17:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2948301 00:17:05.745 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2948301) - No such process 00:17:05.745 09:17:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2948301 00:17:05.745 09:17:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:05.745 09:17:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:17:05.745 09:17:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:17:05.745 09:17:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:05.745 09:17:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:17:05.745 09:17:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:05.745 09:17:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:17:05.745 09:17:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:05.745 09:17:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:05.745 rmmod nvme_tcp 00:17:05.745 rmmod nvme_fabrics 00:17:05.745 rmmod nvme_keyring 00:17:05.745 09:17:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:05.745 09:17:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:17:05.745 09:17:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:17:05.745 09:17:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 2948146 ']' 00:17:05.745 09:17:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 2948146 00:17:05.745 09:17:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 2948146 ']' 00:17:05.745 09:17:10 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 2948146 00:17:05.745 09:17:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:17:05.745 09:17:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:05.745 09:17:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2948146 00:17:05.745 09:17:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:05.745 09:17:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:05.745 09:17:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2948146' 00:17:05.745 killing process with pid 2948146 00:17:05.745 09:17:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 2948146 00:17:05.745 09:17:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 2948146 00:17:07.119 09:17:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:07.119 09:17:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:07.119 09:17:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:07.119 09:17:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:17:07.119 09:17:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:17:07.119 09:17:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:07.119 09:17:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:17:07.119 09:17:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:07.119 09:17:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:07.119 09:17:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:07.119 09:17:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:07.119 09:17:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:09.022 09:17:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:09.022 00:17:09.022 real 0m17.312s 00:17:09.022 user 0m43.095s 00:17:09.022 sys 0m6.187s 00:17:09.022 09:17:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:09.022 09:17:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:09.022 ************************************ 00:17:09.022 END TEST nvmf_connect_stress 00:17:09.022 ************************************ 00:17:09.022 09:17:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:09.022 09:17:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:09.022 
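The long run of kill -0 / rpc_cmd entries above is connect_stress.sh polling the stress process (PID 2948301) and keeping the target busy with RPCs until that process exits, then tearing the target down. A minimal bash sketch of that pattern, reconstructed from the script line numbers shown in the trace; the rpc.txt redirect and the variable names are assumptions for illustration, not the script's literal code:

PID=2948301                     # hypothetical variable: the stress process launched earlier in this test
RPC_FILE=rpc.txt                # hypothetical variable: batched RPCs; the real file sits under test/nvmf/target/

while kill -0 "$PID"; do        # connect_stress.sh line 34: still running? (the final check prints "No such process", as seen above)
    rpc_cmd < "$RPC_FILE"       # line 35: keep hammering the target with RPCs while connections churn
done
wait "$PID"                     # line 38: reap the stress process
rm -f "$RPC_FILE"               # line 39: drop the RPC batch file
trap - SIGINT SIGTERM EXIT      # line 41: clear the error trap
nvmftestfini                    # line 43: rmmod nvme-tcp/nvme-fabrics/nvme-keyring, kill nvmf_tgt (PID 2948146 here), flush IPs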
09:17:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:09.022 09:17:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:09.022 ************************************ 00:17:09.022 START TEST nvmf_fused_ordering 00:17:09.022 ************************************ 00:17:09.022 09:17:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:09.022 * Looking for test storage... 00:17:09.022 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:09.022 09:17:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:09.022 09:17:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:17:09.022 09:17:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:09.281 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:09.281 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:09.281 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:09.281 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:09.281 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:17:09.281 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:17:09.281 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:17:09.281 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:17:09.281 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:17:09.281 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:17:09.281 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:17:09.281 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:09.281 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:17:09.281 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:17:09.281 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:09.281 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:09.281 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:17:09.281 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:17:09.281 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:09.281 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:17:09.281 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:17:09.281 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:17:09.281 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:17:09.281 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:09.281 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:17:09.281 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:17:09.281 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:09.281 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:09.281 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:17:09.281 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:09.281 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:09.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:09.281 --rc genhtml_branch_coverage=1 00:17:09.281 --rc genhtml_function_coverage=1 00:17:09.281 --rc genhtml_legend=1 00:17:09.281 --rc geninfo_all_blocks=1 00:17:09.281 --rc geninfo_unexecuted_blocks=1 00:17:09.281 00:17:09.281 ' 00:17:09.281 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:09.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:09.281 --rc genhtml_branch_coverage=1 00:17:09.281 --rc genhtml_function_coverage=1 00:17:09.281 --rc genhtml_legend=1 00:17:09.281 --rc geninfo_all_blocks=1 00:17:09.281 --rc geninfo_unexecuted_blocks=1 00:17:09.281 00:17:09.281 ' 00:17:09.281 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:09.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:09.281 --rc genhtml_branch_coverage=1 00:17:09.281 --rc genhtml_function_coverage=1 00:17:09.281 --rc genhtml_legend=1 00:17:09.281 --rc geninfo_all_blocks=1 00:17:09.281 --rc geninfo_unexecuted_blocks=1 00:17:09.281 00:17:09.281 ' 00:17:09.281 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:09.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:09.281 --rc genhtml_branch_coverage=1 00:17:09.281 --rc genhtml_function_coverage=1 00:17:09.281 --rc genhtml_legend=1 00:17:09.281 --rc geninfo_all_blocks=1 00:17:09.281 --rc geninfo_unexecuted_blocks=1 00:17:09.281 00:17:09.281 ' 00:17:09.281 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:09.281 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:17:09.281 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:09.281 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:09.281 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:09.281 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:09.281 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:09.281 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:09.281 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:09.281 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:09.281 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:09.281 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:09.281 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:09.281 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:09.281 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:09.281 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:09.281 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:09.281 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:09.281 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:09.281 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:17:09.281 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:09.281 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:09.281 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:09.281 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:09.281 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:09.282 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:09.282 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:17:09.282 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:09.282 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:17:09.282 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:09.282 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:09.282 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:09.282 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:09.282 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:09.282 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:17:09.282 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:09.282 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:09.282 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:09.282 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:09.282 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:17:09.282 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:09.282 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:09.282 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:09.282 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:09.282 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:09.282 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:09.282 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:09.282 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:09.282 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:09.282 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:09.282 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:17:09.282 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:11.187 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:11.187 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:17:11.187 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:11.187 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:11.187 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:11.187 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:11.187 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:11.187 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:17:11.187 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:11.187 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:17:11.187 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:17:11.187 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:17:11.187 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:17:11.187 09:17:16 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:17:11.187 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:17:11.187 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:11.187 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:11.187 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:11.187 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:11.187 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:11.187 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:11.187 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:11.187 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:11.187 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:11.187 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:11.187 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:11.187 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:11.187 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:11.187 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:11.187 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:11.187 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:11.187 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:11.187 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:11.187 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:11.187 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:11.187 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:11.187 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:11.187 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:11.187 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:11.187 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:11.187 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:11.187 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:11.187 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:11.187 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:11.187 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:11.187 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:11.187 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:11.187 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:11.187 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:11.187 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:11.187 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:11.187 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:11.187 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:11.187 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:11.187 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:11.187 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:11.187 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:11.187 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:11.187 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:11.187 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:11.187 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:11.187 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:11.187 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:11.187 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:11.187 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:11.187 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:11.187 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:11.187 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:11.187 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:11.187 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:11.187 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:11.187 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 
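The device-discovery block above amounts to matching the two E810 ports (Intel 0x8086, device 0x159b) and resolving each PCI address to its kernel net device through sysfs, which is how the harness arrives at cvl_0_0 and cvl_0_1. A stripped-down sketch of that lookup; the glob and the name-stripping mirror the nvmf/common.sh lines visible in the trace, and the hard-coded addresses are simply the two found on this host:

for pci in 0000:0a:00.0 0000:0a:00.1; do                   # the two E810 ports reported above
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)        # same glob as nvmf/common.sh line 411 in the trace
    pci_net_devs=("${pci_net_devs[@]##*/}")                  # strip the sysfs path, leaving the interface name (line 427)
    echo "Found net devices under $pci: ${pci_net_devs[*]}"  # prints cvl_0_0 / cvl_0_1 on this machine
done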
-- # net_devs+=("${pci_net_devs[@]}") 00:17:11.187 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:11.187 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:17:11.187 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:11.187 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:11.187 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:11.187 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:11.187 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:11.187 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:11.187 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:11.187 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:11.187 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:11.187 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:11.187 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:11.188 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:11.188 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:11.188 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:11.188 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:11.188 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:11.188 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:11.188 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:11.446 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:11.446 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:11.446 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:11.446 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:11.446 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:11.446 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:11.446 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:11.446 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:11.446 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:11.446 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.376 ms 00:17:11.446 00:17:11.446 --- 10.0.0.2 ping statistics --- 00:17:11.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:11.446 rtt min/avg/max/mdev = 0.376/0.376/0.376/0.000 ms 00:17:11.446 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:11.446 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:11.446 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:17:11.446 00:17:11.446 --- 10.0.0.1 ping statistics --- 00:17:11.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:11.446 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:17:11.446 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:11.446 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:17:11.446 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:11.446 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:11.446 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:11.446 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:11.446 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:11.446 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:11.446 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:11.446 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:17:11.446 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:11.446 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:11.446 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:11.446 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=2951583 00:17:11.446 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:11.446 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 2951583 00:17:11.446 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 2951583 ']' 00:17:11.446 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:11.446 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:11.447 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
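Condensed, the nvmf_tcp_init / nvmfappstart sequence above hands one E810 port (cvl_0_0, 10.0.0.2) to a network namespace where nvmf_tgt runs, keeps the other port (cvl_0_1, 10.0.0.1) on the host as the initiator side, opens port 4420 in iptables, and pings in both directions to confirm the path. A condensed sketch of those commands as they appear in the trace (run as root; paths shortened to be relative to the SPDK tree; this is a summary, not a replacement for nvmf/common.sh):

NS=cvl_0_0_ns_spdk                   # NVMF_TARGET_NAMESPACE from the trace
ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                           # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator IP stays on the host
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP inside the namespace
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                        # host -> namespaced target
ip netns exec "$NS" ping -c 1 10.0.0.1                    # namespace -> host

# The target app is then started inside the namespace (core mask 0x2) and the
# harness waits for /var/tmp/spdk.sock before issuing RPCs (nvmfpid=2951583 above):
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &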
/var/tmp/spdk.sock...' 00:17:11.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:11.447 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:11.447 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:11.705 [2024-11-17 09:17:16.529017] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:17:11.705 [2024-11-17 09:17:16.529163] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:11.705 [2024-11-17 09:17:16.674466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:11.964 [2024-11-17 09:17:16.806851] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:11.964 [2024-11-17 09:17:16.806945] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:11.964 [2024-11-17 09:17:16.806971] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:11.964 [2024-11-17 09:17:16.806996] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:11.964 [2024-11-17 09:17:16.807015] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:11.964 [2024-11-17 09:17:16.808667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:12.530 09:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:12.530 09:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:17:12.530 09:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:12.530 09:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:12.530 09:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:12.530 09:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:12.530 09:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:12.530 09:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.530 09:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:12.530 [2024-11-17 09:17:17.532412] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:12.530 09:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.530 09:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:12.530 09:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.530 09:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:12.789 09:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:17:12.789 09:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:12.789 09:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.789 09:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:12.789 [2024-11-17 09:17:17.548708] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:12.789 09:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.789 09:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:12.789 09:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.789 09:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:12.789 NULL1 00:17:12.789 09:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.789 09:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:17:12.789 09:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.789 09:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:12.789 09:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.789 09:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:17:12.789 09:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.789 09:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:12.789 09:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.789 09:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:17:12.789 [2024-11-17 09:17:17.623679] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
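The RPC sequence that fused_ordering.sh drives through rpc_cmd (the test wrapper around SPDK's RPC client, talking to the socket waited on above) is short enough to restate. The flags are copied from the trace; the comments and the relative path are the only additions, and the description of what the fused_ordering app does is inferred from its name and output rather than from its source:

rpc_cmd nvmf_create_transport -t tcp -o -u 8192                      # fused_ordering.sh line 15: TCP transport, options as logged
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10                               # line 16: allow any host, serial number, max 10 namespaces
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420                                   # line 17: listen on the namespaced target IP
rpc_cmd bdev_null_create NULL1 1000 512                              # line 18: ~1 GB null bdev, 512-byte blocks
rpc_cmd bdev_wait_for_examine                                        # line 19
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1       # line 20: reported below as "Namespace ID: 1 size: 1GB"

# line 22: the initiator-side app connects over NVMe/TCP and exercises fused command
# ordering; each fused_ordering(N) line that follows is one iteration it reports.
./test/nvme/fused_ordering/fused_ordering \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'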
00:17:12.789 [2024-11-17 09:17:17.623785] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2951732 ] 00:17:13.723 Attached to nqn.2016-06.io.spdk:cnode1 00:17:13.723 Namespace ID: 1 size: 1GB 00:17:13.723 fused_ordering(0) 00:17:13.723 fused_ordering(1) 00:17:13.723 fused_ordering(2) 00:17:13.723 fused_ordering(3) 00:17:13.723 fused_ordering(4) 00:17:13.723 fused_ordering(5) 00:17:13.723 fused_ordering(6) 00:17:13.723 fused_ordering(7) 00:17:13.723 fused_ordering(8) 00:17:13.723 fused_ordering(9) 00:17:13.723 fused_ordering(10) 00:17:13.723 fused_ordering(11) 00:17:13.723 fused_ordering(12) 00:17:13.723 fused_ordering(13) 00:17:13.723 fused_ordering(14) 00:17:13.723 fused_ordering(15) 00:17:13.723 fused_ordering(16) 00:17:13.723 fused_ordering(17) 00:17:13.723 fused_ordering(18) 00:17:13.723 fused_ordering(19) 00:17:13.723 fused_ordering(20) 00:17:13.723 fused_ordering(21) 00:17:13.723 fused_ordering(22) 00:17:13.723 fused_ordering(23) 00:17:13.723 fused_ordering(24) 00:17:13.723 fused_ordering(25) 00:17:13.723 fused_ordering(26) 00:17:13.723 fused_ordering(27) 00:17:13.723 fused_ordering(28) 00:17:13.723 fused_ordering(29) 00:17:13.723 fused_ordering(30) 00:17:13.723 fused_ordering(31) 00:17:13.723 fused_ordering(32) 00:17:13.723 fused_ordering(33) 00:17:13.723 fused_ordering(34) 00:17:13.723 fused_ordering(35) 00:17:13.723 fused_ordering(36) 00:17:13.723 fused_ordering(37) 00:17:13.723 fused_ordering(38) 00:17:13.723 fused_ordering(39) 00:17:13.723 fused_ordering(40) 00:17:13.723 fused_ordering(41) 00:17:13.723 fused_ordering(42) 00:17:13.723 fused_ordering(43) 00:17:13.723 fused_ordering(44) 00:17:13.723 fused_ordering(45) 00:17:13.723 fused_ordering(46) 00:17:13.723 fused_ordering(47) 00:17:13.723 fused_ordering(48) 00:17:13.723 fused_ordering(49) 00:17:13.723 fused_ordering(50) 00:17:13.723 fused_ordering(51) 00:17:13.723 fused_ordering(52) 00:17:13.723 fused_ordering(53) 00:17:13.723 fused_ordering(54) 00:17:13.723 fused_ordering(55) 00:17:13.723 fused_ordering(56) 00:17:13.723 fused_ordering(57) 00:17:13.723 fused_ordering(58) 00:17:13.723 fused_ordering(59) 00:17:13.723 fused_ordering(60) 00:17:13.723 fused_ordering(61) 00:17:13.723 fused_ordering(62) 00:17:13.723 fused_ordering(63) 00:17:13.723 fused_ordering(64) 00:17:13.723 fused_ordering(65) 00:17:13.723 fused_ordering(66) 00:17:13.723 fused_ordering(67) 00:17:13.723 fused_ordering(68) 00:17:13.723 fused_ordering(69) 00:17:13.723 fused_ordering(70) 00:17:13.723 fused_ordering(71) 00:17:13.723 fused_ordering(72) 00:17:13.723 fused_ordering(73) 00:17:13.723 fused_ordering(74) 00:17:13.723 fused_ordering(75) 00:17:13.723 fused_ordering(76) 00:17:13.723 fused_ordering(77) 00:17:13.723 fused_ordering(78) 00:17:13.723 fused_ordering(79) 00:17:13.723 fused_ordering(80) 00:17:13.723 fused_ordering(81) 00:17:13.723 fused_ordering(82) 00:17:13.723 fused_ordering(83) 00:17:13.723 fused_ordering(84) 00:17:13.723 fused_ordering(85) 00:17:13.723 fused_ordering(86) 00:17:13.723 fused_ordering(87) 00:17:13.723 fused_ordering(88) 00:17:13.723 fused_ordering(89) 00:17:13.723 fused_ordering(90) 00:17:13.723 fused_ordering(91) 00:17:13.723 fused_ordering(92) 00:17:13.723 fused_ordering(93) 00:17:13.723 fused_ordering(94) 00:17:13.723 fused_ordering(95) 00:17:13.723 fused_ordering(96) 00:17:13.723 fused_ordering(97) 00:17:13.723 fused_ordering(98) 
00:17:16.421 09:17:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT
00:17:16.421 09:17:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini
00:17:16.421 09:17:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup
00:17:16.421 09:17:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync
00:17:16.421 09:17:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:17:16.421 09:17:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e
00:17:16.421 09:17:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20}
00:17:16.421 09:17:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:17:16.421 rmmod nvme_tcp
00:17:16.421 rmmod nvme_fabrics
00:17:16.421 rmmod nvme_keyring
00:17:16.421 09:17:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:17:16.421 09:17:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e
00:17:16.421 09:17:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0
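The nvmfcleanup trace above tears the NVMe/TCP stack down by retrying the module unload up to 20 times before removing nvme-fabrics. A minimal bash sketch of that pattern follows; the retry bound and module names come from the trace, while the break-on-success and back-off are assumptions, since the log does not show how the loop exits:

# Sketch only, not the SPDK nvmf/common.sh implementation.
cleanup_nvme_modules() {
    set +e                                # unloading may fail while connections drain
    local i
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break  # assumed: stop retrying once the unload succeeds
        sleep 1                           # assumed back-off between attempts
    done
    modprobe -v -r nvme-fabrics
    set -e
    return 0
}

The rmmod nvme_tcp / nvme_fabrics / nvme_keyring notices above are modprobe's verbose output as it removes the dependent modules.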
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 2951583 ']' 00:17:16.421 09:17:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 2951583 00:17:16.421 09:17:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 2951583 ']' 00:17:16.421 09:17:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 2951583 00:17:16.421 09:17:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:17:16.421 09:17:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:16.421 09:17:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2951583 00:17:16.421 09:17:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:16.421 09:17:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:16.421 09:17:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2951583' 00:17:16.421 killing process with pid 2951583 00:17:16.421 09:17:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 2951583 00:17:16.421 09:17:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 2951583 00:17:17.796 09:17:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:17.796 09:17:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:17.796 09:17:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:17.796 09:17:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:17:17.796 09:17:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:17:17.796 09:17:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:17.796 09:17:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:17:17.796 09:17:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:17.796 09:17:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:17.796 09:17:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:17.796 09:17:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:17.796 09:17:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:19.702 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:19.702 00:17:19.702 real 0m10.593s 00:17:19.702 user 0m8.745s 00:17:19.702 sys 0m3.943s 00:17:19.703 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:19.703 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:19.703 ************************************ 00:17:19.703 END TEST nvmf_fused_ordering 00:17:19.703 
************************************ 00:17:19.703 09:17:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:17:19.703 09:17:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:19.703 09:17:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:19.703 09:17:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:19.703 ************************************ 00:17:19.703 START TEST nvmf_ns_masking 00:17:19.703 ************************************ 00:17:19.703 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:17:19.703 * Looking for test storage... 00:17:19.703 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:19.703 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:19.703 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 00:17:19.703 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:19.703 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:19.703 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:19.703 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:19.703 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:19.703 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:17:19.703 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:17:19.703 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:17:19.703 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:17:19.703 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:17:19.703 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:17:19.703 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:17:19.703 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:19.703 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:17:19.703 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:17:19.703 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:19.703 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:19.703 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:17:19.703 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:17:19.703 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:19.703 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:17:19.703 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:17:19.703 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:17:19.703 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:17:19.703 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:19.703 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:17:19.703 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:17:19.703 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:19.703 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:19.703 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:17:19.703 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:19.703 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:19.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:19.703 --rc genhtml_branch_coverage=1 00:17:19.703 --rc genhtml_function_coverage=1 00:17:19.703 --rc genhtml_legend=1 00:17:19.703 --rc geninfo_all_blocks=1 00:17:19.703 --rc geninfo_unexecuted_blocks=1 00:17:19.703 00:17:19.703 ' 00:17:19.703 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:19.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:19.703 --rc genhtml_branch_coverage=1 00:17:19.703 --rc genhtml_function_coverage=1 00:17:19.703 --rc genhtml_legend=1 00:17:19.703 --rc geninfo_all_blocks=1 00:17:19.703 --rc geninfo_unexecuted_blocks=1 00:17:19.703 00:17:19.703 ' 00:17:19.703 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:19.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:19.703 --rc genhtml_branch_coverage=1 00:17:19.703 --rc genhtml_function_coverage=1 00:17:19.703 --rc genhtml_legend=1 00:17:19.703 --rc geninfo_all_blocks=1 00:17:19.703 --rc geninfo_unexecuted_blocks=1 00:17:19.703 00:17:19.703 ' 00:17:19.703 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:19.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:19.703 --rc genhtml_branch_coverage=1 00:17:19.703 --rc genhtml_function_coverage=1 00:17:19.703 --rc genhtml_legend=1 00:17:19.703 --rc geninfo_all_blocks=1 00:17:19.703 --rc geninfo_unexecuted_blocks=1 00:17:19.703 00:17:19.703 ' 00:17:19.703 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:19.703 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:17:19.703 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:19.703 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:19.703 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:19.703 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:19.703 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:19.703 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:19.703 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:19.703 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:19.703 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:19.703 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:19.703 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:19.703 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:19.703 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:19.703 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:19.703 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:19.703 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:19.703 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:19.703 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:17:19.703 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:19.703 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:19.703 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:19.703 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.704 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.704 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.704 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:17:19.704 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.704 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:17:19.704 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:19.704 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:19.704 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:19.704 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:19.704 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:19.704 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:19.704 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:19.704 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:19.704 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:19.704 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:19.704 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:19.704 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:17:19.704 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:17:19.704 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:17:19.704 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=c0331d68-5187-436c-bd84-da2aac540b0a 00:17:19.973 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:17:19.973 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=d7b88c1c-8923-4ba9-95bb-a710cc4c04fa 00:17:19.973 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:17:19.973 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:17:19.973 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:17:19.973 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:17:19.973 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=3938ced0-9b25-44ea-b8e0-1ff1a6a8e318 00:17:19.973 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:17:19.973 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:19.973 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:19.973 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:19.973 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:19.973 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:19.973 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:19.973 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:19.973 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:19.973 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:19.973 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:19.973 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:17:19.973 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:21.877 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:21.877 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:17:21.877 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:21.877 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:21.877 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:21.877 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:21.877 09:17:26 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:21.877 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:17:21.877 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:21.877 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:17:21.877 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:17:21.877 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:17:21.877 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:17:21.877 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:17:21.877 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:17:21.878 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:21.878 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:21.878 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:21.878 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:21.878 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:21.878 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:21.878 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:21.878 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:21.878 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:21.878 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:21.878 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:21.878 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:21.878 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:21.878 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:21.878 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:21.878 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:21.878 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:21.878 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:21.878 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:21.878 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:21.878 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:21.878 09:17:26 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:21.878 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:21.878 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:21.878 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:21.878 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:21.878 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:21.878 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:21.878 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:21.878 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:21.878 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:21.878 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:21.878 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:21.878 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:21.878 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:21.878 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:21.878 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:21.878 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:21.878 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:21.878 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:21.878 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:21.878 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:21.878 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:21.878 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:21.878 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:21.878 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:21.878 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:21.878 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:21.878 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:21.878 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:21.878 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:21.878 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 
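The scan traced here iterates over the e810 functions it found (0000:0a:00.0 and 0000:0a:00.1), globs the kernel network interfaces under each PCI device in sysfs, and keeps the ones that are up. A rough bash equivalent of that discovery step; the helper name is made up, and reading operstate is an assumption about what the up == up comparison corresponds to:

# Illustrative sketch: list the net interfaces behind each PCI function via sysfs
# and report the ones whose operstate is "up".
find_up_netdevs() {
    local pci path dev
    for pci in "$@"; do
        for path in /sys/bus/pci/devices/"$pci"/net/*; do
            [[ -e $path ]] || continue        # no network interfaces under this function
            dev=${path##*/}                   # keep only the interface name
            if [[ $(cat "/sys/class/net/$dev/operstate") == up ]]; then
                echo "Found net devices under $pci: $dev"
            fi
        done
    done
}
find_up_netdevs 0000:0a:00.0 0000:0a:00.1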
00:17:21.878 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:21.878 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:21.878 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:21.878 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:21.878 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:21.878 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:21.878 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:17:21.878 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:21.878 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:21.878 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:21.878 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:21.878 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:21.878 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:21.878 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:21.878 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:21.878 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:21.878 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:21.878 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:21.878 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:21.878 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:21.878 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:21.878 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:21.878 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:21.878 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:21.878 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:21.878 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:21.878 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:21.878 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:21.878 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:22.137 09:17:26 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:22.137 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:22.137 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:22.137 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:22.137 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:22.137 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.213 ms 00:17:22.137 00:17:22.137 --- 10.0.0.2 ping statistics --- 00:17:22.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:22.137 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:17:22.137 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:22.137 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:22.138 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:17:22.138 00:17:22.138 --- 10.0.0.1 ping statistics --- 00:17:22.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:22.138 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:17:22.138 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:22.138 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:17:22.138 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:22.138 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:22.138 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:22.138 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:22.138 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:22.138 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:22.138 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:22.138 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:17:22.138 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:22.138 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:22.138 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:22.138 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=2954284 00:17:22.138 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:22.138 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 2954284 00:17:22.138 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 2954284 ']' 00:17:22.138 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:22.138 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:22.138 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:22.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:22.138 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:22.138 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:22.138 [2024-11-17 09:17:27.070751] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:17:22.138 [2024-11-17 09:17:27.070916] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:22.396 [2024-11-17 09:17:27.221505] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:22.396 [2024-11-17 09:17:27.357781] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:22.396 [2024-11-17 09:17:27.357880] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:22.396 [2024-11-17 09:17:27.357906] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:22.396 [2024-11-17 09:17:27.357930] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:22.396 [2024-11-17 09:17:27.357950] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
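
At this point nvmf_tcp_init has split the two E810 ports across network namespaces: cvl_0_0 is moved into cvl_0_0_ns_spdk and given 10.0.0.2, cvl_0_1 stays in the root namespace with 10.0.0.1, TCP port 4420 is opened in iptables, reachability is checked with ping, and nvmf_tgt is then launched inside the namespace. A minimal sketch of that topology using plain iproute2 commands (interface names taken from this run; a different setup would substitute its own NIC ports) is:

    # Sketch of the netns split performed by nvmf_tcp_init above.
    ip netns add cvl_0_0_ns_spdk                      # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move one port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator side (root ns)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    ping -c 1 10.0.0.2                                # root ns -> target ns
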
00:17:22.396 [2024-11-17 09:17:27.359646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:23.336 09:17:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:23.336 09:17:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:17:23.336 09:17:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:23.336 09:17:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:23.336 09:17:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:23.336 09:17:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:23.336 09:17:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:23.623 [2024-11-17 09:17:28.380234] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:23.623 09:17:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:17:23.623 09:17:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:17:23.623 09:17:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:23.882 Malloc1 00:17:23.882 09:17:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:24.140 Malloc2 00:17:24.397 09:17:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:24.653 09:17:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:17:24.912 09:17:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:25.171 [2024-11-17 09:17:29.995428] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:25.171 09:17:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:17:25.171 09:17:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 3938ced0-9b25-44ea-b8e0-1ff1a6a8e318 -a 10.0.0.2 -s 4420 -i 4 00:17:25.429 09:17:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:17:25.429 09:17:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:17:25.429 09:17:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:25.429 09:17:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:25.429 
09:17:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:17:27.329 09:17:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:27.329 09:17:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:27.329 09:17:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:27.329 09:17:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:27.329 09:17:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:27.329 09:17:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:17:27.329 09:17:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:27.329 09:17:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:27.329 09:17:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:27.329 09:17:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:27.329 09:17:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:17:27.329 09:17:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:27.329 09:17:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:27.329 [ 0]:0x1 00:17:27.329 09:17:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:27.329 09:17:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:27.329 09:17:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=107190ee64f14d8cb164522ed60e6c7a 00:17:27.329 09:17:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 107190ee64f14d8cb164522ed60e6c7a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:27.329 09:17:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:17:27.587 09:17:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:17:27.587 09:17:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:27.587 09:17:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:27.846 [ 0]:0x1 00:17:27.846 09:17:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:27.846 09:17:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:27.846 09:17:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=107190ee64f14d8cb164522ed60e6c7a 00:17:27.846 09:17:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 107190ee64f14d8cb164522ed60e6c7a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:27.846 09:17:32 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:17:27.846 09:17:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:27.846 09:17:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:27.846 [ 1]:0x2 00:17:27.846 09:17:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:27.846 09:17:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:27.846 09:17:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a47a45843f6e4ba89ba095478b869068 00:17:27.846 09:17:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a47a45843f6e4ba89ba095478b869068 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:27.846 09:17:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:17:27.846 09:17:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:27.846 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:27.846 09:17:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:28.105 09:17:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:17:28.671 09:17:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:17:28.671 09:17:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 3938ced0-9b25-44ea-b8e0-1ff1a6a8e318 -a 10.0.0.2 -s 4420 -i 4 00:17:28.671 09:17:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:17:28.671 09:17:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:17:28.671 09:17:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:28.671 09:17:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:17:28.671 09:17:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:17:28.671 09:17:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:17:31.201 09:17:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:31.201 09:17:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:31.201 09:17:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:31.201 09:17:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:31.201 09:17:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:31.201 09:17:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # 
return 0 00:17:31.201 09:17:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:31.201 09:17:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:31.201 09:17:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:31.201 09:17:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:31.201 09:17:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:17:31.201 09:17:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:31.201 09:17:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:17:31.201 09:17:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:31.201 09:17:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:31.201 09:17:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:17:31.201 09:17:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:31.201 09:17:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:31.201 09:17:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:31.201 09:17:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:31.201 09:17:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:31.201 09:17:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:31.201 09:17:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:31.201 09:17:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:31.201 09:17:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:31.201 09:17:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:31.201 09:17:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:31.201 09:17:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:31.201 09:17:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:17:31.201 09:17:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:31.201 09:17:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:31.201 [ 0]:0x2 00:17:31.201 09:17:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:31.201 09:17:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:31.201 09:17:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=a47a45843f6e4ba89ba095478b869068 00:17:31.202 09:17:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a47a45843f6e4ba89ba095478b869068 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:31.202 09:17:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:31.202 09:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:17:31.202 09:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:31.202 09:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:31.202 [ 0]:0x1 00:17:31.202 09:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:31.202 09:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:31.202 09:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=107190ee64f14d8cb164522ed60e6c7a 00:17:31.202 09:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 107190ee64f14d8cb164522ed60e6c7a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:31.202 09:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:17:31.202 09:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:31.202 09:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:31.202 [ 1]:0x2 00:17:31.202 09:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:31.202 09:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:31.202 09:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a47a45843f6e4ba89ba095478b869068 00:17:31.202 09:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a47a45843f6e4ba89ba095478b869068 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:31.202 09:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:31.768 09:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:17:31.768 09:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:31.768 09:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:17:31.768 09:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:31.768 09:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:31.768 09:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:17:31.768 09:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:31.768 09:17:36 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:31.768 09:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:31.768 09:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:31.768 09:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:31.768 09:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:31.768 09:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:31.768 09:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:31.769 09:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:31.769 09:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:31.769 09:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:31.769 09:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:31.769 09:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:17:31.769 09:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:31.769 09:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:31.769 [ 0]:0x2 00:17:31.769 09:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:31.769 09:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:31.769 09:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a47a45843f6e4ba89ba095478b869068 00:17:31.769 09:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a47a45843f6e4ba89ba095478b869068 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:31.769 09:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:17:31.769 09:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:31.769 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:31.769 09:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:32.027 09:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:17:32.027 09:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 3938ced0-9b25-44ea-b8e0-1ff1a6a8e318 -a 10.0.0.2 -s 4420 -i 4 00:17:32.286 09:17:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:17:32.286 09:17:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:17:32.286 09:17:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:32.286 09:17:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:17:32.286 09:17:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:17:32.286 09:17:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:17:34.186 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:34.186 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:34.186 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:34.186 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:17:34.186 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:34.186 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:17:34.186 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:34.186 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:34.186 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:34.186 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:34.186 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:17:34.186 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:34.186 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:34.186 [ 0]:0x1 00:17:34.186 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:34.186 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:34.186 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=107190ee64f14d8cb164522ed60e6c7a 00:17:34.186 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 107190ee64f14d8cb164522ed60e6c7a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:34.186 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:17:34.186 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:34.186 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:34.444 [ 1]:0x2 00:17:34.444 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:34.444 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:34.444 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a47a45843f6e4ba89ba095478b869068 00:17:34.444 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a47a45843f6e4ba89ba095478b869068 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:34.444 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:34.702 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:17:34.702 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:34.702 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:17:34.702 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:34.702 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:34.702 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:17:34.702 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:34.702 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:34.702 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:34.702 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:34.702 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:34.702 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:34.702 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:34.702 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:34.702 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:34.702 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:34.702 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:34.702 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:34.702 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:17:34.702 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:34.702 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:34.702 [ 0]:0x2 00:17:34.702 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:34.702 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:34.702 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a47a45843f6e4ba89ba095478b869068 00:17:34.702 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a47a45843f6e4ba89ba095478b869068 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:34.702 09:17:39 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:34.702 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:34.702 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:34.702 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:34.702 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:34.702 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:34.702 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:34.702 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:34.703 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:34.703 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:34.703 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:34.703 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:34.961 [2024-11-17 09:17:39.946279] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:17:34.961 request: 00:17:34.961 { 00:17:34.961 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:34.961 "nsid": 2, 00:17:34.961 "host": "nqn.2016-06.io.spdk:host1", 00:17:34.961 "method": "nvmf_ns_remove_host", 00:17:34.961 "req_id": 1 00:17:34.961 } 00:17:34.961 Got JSON-RPC error response 00:17:34.961 response: 00:17:34.961 { 00:17:34.961 "code": -32602, 00:17:34.961 "message": "Invalid parameters" 00:17:34.961 } 00:17:34.961 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:34.961 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:34.961 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:34.961 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:34.961 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:17:34.961 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:34.961 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:17:34.961 09:17:39 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:34.961 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:34.961 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:17:34.961 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:34.961 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:34.961 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:34.961 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:35.219 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:35.219 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:35.219 09:17:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:35.219 09:17:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:35.219 09:17:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:35.219 09:17:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:35.219 09:17:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:35.219 09:17:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:35.219 09:17:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:17:35.219 09:17:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:35.219 09:17:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:35.219 [ 0]:0x2 00:17:35.219 09:17:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:35.219 09:17:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:35.219 09:17:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a47a45843f6e4ba89ba095478b869068 00:17:35.219 09:17:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a47a45843f6e4ba89ba095478b869068 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:35.219 09:17:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:17:35.219 09:17:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:35.219 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:35.220 09:17:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2955963 00:17:35.220 09:17:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:17:35.220 09:17:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:17:35.220 09:17:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2955963 /var/tmp/host.sock 00:17:35.220 09:17:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 2955963 ']' 00:17:35.220 09:17:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:17:35.220 09:17:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:35.220 09:17:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:35.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:17:35.220 09:17:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:35.220 09:17:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:35.220 [2024-11-17 09:17:40.197322] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:17:35.220 [2024-11-17 09:17:40.197495] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2955963 ] 00:17:35.478 [2024-11-17 09:17:40.336262] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:35.478 [2024-11-17 09:17:40.460228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:36.412 09:17:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:36.412 09:17:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:17:36.412 09:17:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:36.670 09:17:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:37.237 09:17:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid c0331d68-5187-436c-bd84-da2aac540b0a 00:17:37.237 09:17:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:37.237 09:17:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g C0331D685187436CBD84DA2AAC540B0A -i 00:17:37.237 09:17:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid d7b88c1c-8923-4ba9-95bb-a710cc4c04fa 00:17:37.237 09:17:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:37.237 09:17:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g D7B88C1C89234BA995BBA710CC4C04FA -i 00:17:37.803 09:17:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:37.803 09:17:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:17:38.370 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:17:38.370 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:17:38.628 nvme0n1 00:17:38.628 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:17:38.628 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:17:38.887 nvme1n2 00:17:38.887 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:17:38.887 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:17:38.887 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:17:38.887 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:17:38.887 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:17:39.454 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:17:39.454 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:17:39.454 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:17:39.454 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:17:39.454 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ c0331d68-5187-436c-bd84-da2aac540b0a == \c\0\3\3\1\d\6\8\-\5\1\8\7\-\4\3\6\c\-\b\d\8\4\-\d\a\2\a\a\c\5\4\0\b\0\a ]] 00:17:39.454 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:17:39.454 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:17:39.454 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:17:39.712 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
d7b88c1c-8923-4ba9-95bb-a710cc4c04fa == \d\7\b\8\8\c\1\c\-\8\9\2\3\-\4\b\a\9\-\9\5\b\b\-\a\7\1\0\c\c\4\c\0\4\f\a ]] 00:17:39.712 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:40.279 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:40.279 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid c0331d68-5187-436c-bd84-da2aac540b0a 00:17:40.279 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:40.279 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g C0331D685187436CBD84DA2AAC540B0A 00:17:40.279 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:40.279 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g C0331D685187436CBD84DA2AAC540B0A 00:17:40.279 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:40.279 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:40.279 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:40.279 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:40.280 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:40.280 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:40.280 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:40.280 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:40.280 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g C0331D685187436CBD84DA2AAC540B0A 00:17:40.538 [2024-11-17 09:17:45.524061] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:17:40.538 [2024-11-17 09:17:45.524128] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:17:40.538 [2024-11-17 09:17:45.524156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.538 request: 00:17:40.538 { 00:17:40.538 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:40.538 "namespace": { 00:17:40.538 "bdev_name": 
"invalid", 00:17:40.538 "nsid": 1, 00:17:40.538 "nguid": "C0331D685187436CBD84DA2AAC540B0A", 00:17:40.538 "no_auto_visible": false 00:17:40.538 }, 00:17:40.538 "method": "nvmf_subsystem_add_ns", 00:17:40.538 "req_id": 1 00:17:40.538 } 00:17:40.538 Got JSON-RPC error response 00:17:40.538 response: 00:17:40.538 { 00:17:40.538 "code": -32602, 00:17:40.538 "message": "Invalid parameters" 00:17:40.538 } 00:17:40.538 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:40.538 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:40.538 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:40.538 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:40.538 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid c0331d68-5187-436c-bd84-da2aac540b0a 00:17:40.538 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:40.796 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g C0331D685187436CBD84DA2AAC540B0A -i 00:17:41.054 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:17:42.953 09:17:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:17:42.953 09:17:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:17:42.953 09:17:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:17:43.212 09:17:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:17:43.212 09:17:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 2955963 00:17:43.212 09:17:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 2955963 ']' 00:17:43.212 09:17:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 2955963 00:17:43.212 09:17:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:17:43.212 09:17:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:43.212 09:17:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2955963 00:17:43.212 09:17:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:43.212 09:17:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:43.212 09:17:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2955963' 00:17:43.212 killing process with pid 2955963 00:17:43.212 09:17:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 2955963 00:17:43.212 09:17:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 2955963 00:17:45.741 09:17:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:45.741 09:17:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:17:45.741 09:17:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:17:45.741 09:17:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:45.741 09:17:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:17:45.741 09:17:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:45.741 09:17:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:17:45.741 09:17:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:45.741 09:17:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:45.741 rmmod nvme_tcp 00:17:45.741 rmmod nvme_fabrics 00:17:45.741 rmmod nvme_keyring 00:17:45.999 09:17:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:45.999 09:17:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:17:45.999 09:17:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:17:45.999 09:17:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 2954284 ']' 00:17:45.999 09:17:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 2954284 00:17:45.999 09:17:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 2954284 ']' 00:17:45.999 09:17:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 2954284 00:17:45.999 09:17:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:17:45.999 09:17:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:45.999 09:17:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2954284 00:17:45.999 09:17:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:45.999 09:17:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:45.999 09:17:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2954284' 00:17:45.999 killing process with pid 2954284 00:17:45.999 09:17:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 2954284 00:17:45.999 09:17:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 2954284 00:17:47.375 09:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:47.375 09:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:47.375 09:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:47.375 09:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:17:47.375 09:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:17:47.375 09:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:17:47.375 09:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:17:47.375 09:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:47.375 09:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:47.375 09:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:47.375 09:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:47.375 09:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:49.914 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:49.914 00:17:49.914 real 0m29.864s 00:17:49.914 user 0m44.327s 00:17:49.914 sys 0m4.928s 00:17:49.914 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:49.914 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:49.914 ************************************ 00:17:49.914 END TEST nvmf_ns_masking 00:17:49.914 ************************************ 00:17:49.914 09:17:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:17:49.914 09:17:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:17:49.914 09:17:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:49.914 09:17:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:49.914 09:17:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:49.914 ************************************ 00:17:49.914 START TEST nvmf_nvme_cli 00:17:49.914 ************************************ 00:17:49.914 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:17:49.914 * Looking for test storage... 
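For reference, a minimal standalone sketch of the namespace-masking RPC sequence exercised in the nvmf_ns_masking run above, condensed from the trace rather than taken from the test script itself. It assumes the SPDK target from the log is already running and listening on the default /var/tmp/spdk.sock, and the strip-dashes/uppercase conversion stands in for the test's uuid2nguid helper.

  #!/usr/bin/env bash
  # Sketch only: condensed from the nvmf_ns_masking trace above.
  set -euo pipefail
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  uuid=c0331d68-5187-436c-bd84-da2aac540b0a

  # The test's uuid2nguid appears to amount to "drop dashes, uppercase".
  nguid=$(tr -d - <<< "$uuid" | tr '[:lower:]' '[:upper:]')

  # Adding a namespace backed by a bdev that does not exist is expected to
  # fail with JSON-RPC error -32602, exactly as captured in the log above.
  if ! "$rpc" nvmf_subsystem_add_ns "$nqn" invalid -n 1 -g "$nguid"; then
    echo "add_ns with a missing bdev failed as expected"
  fi

  # Re-add namespace 1 backed by Malloc1 under the same NGUID, then clean up
  # the way the tail of the test does.
  "$rpc" nvmf_subsystem_add_ns "$nqn" Malloc1 -n 1 -g "$nguid"
  "$rpc" nvmf_subsystem_remove_ns "$nqn" 1
  "$rpc" nvmf_delete_subsystem "$nqn"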
00:17:49.914 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:49.914 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:49.914 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lcov --version 00:17:49.914 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:49.914 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:49.914 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:49.914 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:49.914 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:49.914 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:17:49.914 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:17:49.914 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:17:49.914 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:17:49.914 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:17:49.914 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:17:49.914 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:17:49.914 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:49.914 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:17:49.914 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:17:49.914 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:49.914 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:49.914 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:17:49.914 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:17:49.914 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:49.914 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:17:49.914 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:17:49.914 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:17:49.914 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:17:49.914 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:49.914 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:17:49.914 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:17:49.914 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:49.914 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:49.914 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:17:49.914 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:49.914 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:49.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:49.914 --rc genhtml_branch_coverage=1 00:17:49.914 --rc genhtml_function_coverage=1 00:17:49.914 --rc genhtml_legend=1 00:17:49.914 --rc geninfo_all_blocks=1 00:17:49.914 --rc geninfo_unexecuted_blocks=1 00:17:49.914 00:17:49.914 ' 00:17:49.914 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:49.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:49.914 --rc genhtml_branch_coverage=1 00:17:49.914 --rc genhtml_function_coverage=1 00:17:49.914 --rc genhtml_legend=1 00:17:49.914 --rc geninfo_all_blocks=1 00:17:49.914 --rc geninfo_unexecuted_blocks=1 00:17:49.914 00:17:49.914 ' 00:17:49.914 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:49.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:49.914 --rc genhtml_branch_coverage=1 00:17:49.914 --rc genhtml_function_coverage=1 00:17:49.914 --rc genhtml_legend=1 00:17:49.914 --rc geninfo_all_blocks=1 00:17:49.914 --rc geninfo_unexecuted_blocks=1 00:17:49.914 00:17:49.914 ' 00:17:49.914 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:49.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:49.914 --rc genhtml_branch_coverage=1 00:17:49.914 --rc genhtml_function_coverage=1 00:17:49.914 --rc genhtml_legend=1 00:17:49.914 --rc geninfo_all_blocks=1 00:17:49.914 --rc geninfo_unexecuted_blocks=1 00:17:49.914 00:17:49.914 ' 00:17:49.914 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:49.914 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
00:17:49.914 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:49.914 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:49.914 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:49.914 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:49.914 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:49.914 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:49.914 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:49.914 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:49.914 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:49.914 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:49.914 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:49.915 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:49.915 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:49.915 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:49.915 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:49.915 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:49.915 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:49.915 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:17:49.915 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:49.915 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:49.915 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:49.915 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.915 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.915 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.915 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:17:49.915 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.915 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:17:49.915 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:49.915 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:49.915 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:49.915 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:49.915 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:49.915 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:49.915 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:49.915 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:49.915 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:49.915 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:49.915 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:49.915 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:49.915 09:17:54 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:17:49.915 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:17:49.915 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:49.915 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:49.915 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:49.915 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:49.915 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:49.915 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:49.915 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:49.915 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:49.915 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:49.915 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:49.915 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:17:49.915 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:51.818 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:51.818 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:17:51.818 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:51.818 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:51.818 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:51.818 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:51.818 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:51.818 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:17:51.818 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:51.818 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:17:51.818 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:17:51.818 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:17:51.818 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:17:51.818 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:17:51.818 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:17:51.818 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:51.818 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:51.818 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:51.818 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:51.818 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:51.818 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:51.818 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:51.818 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:51.818 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:51.818 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:51.819 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:51.819 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:51.819 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:51.819 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:51.819 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:51.819 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:51.819 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:51.819 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:51.819 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:51.819 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:51.819 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:51.819 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:51.819 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:51.819 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:51.819 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:51.819 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:51.819 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:51.819 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:51.819 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:51.819 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:51.819 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:51.819 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:51.819 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:51.819 
09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:51.819 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:51.819 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:51.819 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:51.819 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:51.819 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:51.819 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:51.819 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:51.819 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:51.819 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:51.819 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:51.819 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:51.819 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:51.819 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:51.819 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:51.819 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:51.819 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:51.819 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:51.819 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:51.819 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:51.819 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:51.819 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:51.819 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:51.819 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:51.819 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:51.819 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:17:51.819 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:51.819 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:51.819 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:51.819 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:51.819 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:51.819 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:51.819 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:51.819 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:51.819 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:51.819 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:51.819 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:51.819 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:51.819 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:51.819 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:51.819 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:51.819 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:51.819 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:51.819 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:51.819 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:51.819 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:51.819 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:51.819 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:51.819 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:51.819 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:51.819 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:51.819 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:51.819 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:51.819 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:17:51.819 00:17:51.819 --- 10.0.0.2 ping statistics --- 00:17:51.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:51.819 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:17:51.819 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:51.819 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:51.819 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms 00:17:51.819 00:17:51.819 --- 10.0.0.1 ping statistics --- 00:17:51.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:51.819 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:17:51.819 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:51.819 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:17:51.819 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:51.819 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:51.819 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:51.819 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:51.819 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:51.819 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:51.819 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:52.079 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:17:52.079 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:52.079 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:52.079 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:52.079 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=2959389 00:17:52.079 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:52.079 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 2959389 00:17:52.079 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 2959389 ']' 00:17:52.079 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:52.079 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:52.079 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:52.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:52.079 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:52.079 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:52.079 [2024-11-17 09:17:56.939637] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:17:52.079 [2024-11-17 09:17:56.939807] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:52.338 [2024-11-17 09:17:57.090440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:52.338 [2024-11-17 09:17:57.235982] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:52.338 [2024-11-17 09:17:57.236072] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:52.338 [2024-11-17 09:17:57.236099] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:52.338 [2024-11-17 09:17:57.236123] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:52.338 [2024-11-17 09:17:57.236142] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:52.338 [2024-11-17 09:17:57.239031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:52.338 [2024-11-17 09:17:57.239087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:52.338 [2024-11-17 09:17:57.239153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:52.338 [2024-11-17 09:17:57.239158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:53.273 09:17:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:53.273 09:17:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:17:53.273 09:17:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:53.273 09:17:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:53.273 09:17:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:53.273 09:17:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:53.273 09:17:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:53.273 09:17:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.273 09:17:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:53.273 [2024-11-17 09:17:57.966907] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:53.273 09:17:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.273 09:17:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:53.273 09:17:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.273 09:17:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:53.273 Malloc0 00:17:53.273 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.273 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:53.273 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:53.273 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:53.273 Malloc1 00:17:53.273 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.273 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:17:53.273 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.273 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:53.273 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.273 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:53.273 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.273 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:53.273 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.273 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:53.273 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.273 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:53.273 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.273 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:53.273 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.273 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:53.273 [2024-11-17 09:17:58.165703] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:53.273 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.273 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:53.273 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.273 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:53.273 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.273 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:17:53.532 00:17:53.532 Discovery Log Number of Records 2, Generation counter 2 00:17:53.532 =====Discovery Log Entry 0====== 00:17:53.532 trtype: tcp 00:17:53.532 adrfam: ipv4 00:17:53.532 subtype: current discovery subsystem 00:17:53.532 treq: not required 00:17:53.532 portid: 0 00:17:53.532 trsvcid: 4420 00:17:53.532 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:17:53.532 traddr: 10.0.0.2 00:17:53.532 eflags: explicit discovery connections, duplicate discovery information 00:17:53.532 sectype: none 00:17:53.532 =====Discovery Log Entry 1====== 00:17:53.532 trtype: tcp 00:17:53.532 adrfam: ipv4 00:17:53.532 subtype: nvme subsystem 00:17:53.532 treq: not required 00:17:53.532 portid: 0 00:17:53.532 trsvcid: 4420 00:17:53.532 subnqn: nqn.2016-06.io.spdk:cnode1 00:17:53.532 traddr: 10.0.0.2 00:17:53.532 eflags: none 00:17:53.532 sectype: none 00:17:53.532 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:17:53.532 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:17:53.532 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:17:53.532 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:53.532 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:17:53.532 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:17:53.532 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:53.532 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:17:53.532 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:53.532 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:17:53.532 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:54.099 09:17:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:17:54.099 09:17:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:17:54.099 09:17:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:54.099 09:17:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:17:54.099 09:17:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:17:54.099 09:17:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:17:56.627 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:56.627 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:56.627 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:56.627 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:17:56.627 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:56.627 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:17:56.627 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:17:56.627 09:18:01 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:17:56.627 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:56.627 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:17:56.627 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:17:56.627 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:56.627 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:17:56.627 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:56.627 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:17:56.627 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:17:56.627 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:56.627 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:17:56.627 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:17:56.627 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:56.627 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:17:56.627 /dev/nvme0n2 ]] 00:17:56.627 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:17:56.627 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:17:56.627 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:17:56.627 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:56.627 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:17:56.627 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:17:56.627 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:56.627 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:17:56.627 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:56.627 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:17:56.627 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:17:56.627 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:56.627 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:17:56.627 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:17:56.627 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:56.627 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:17:56.627 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:56.886 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:56.886 09:18:01 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:56.886 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:17:56.886 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:56.886 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:56.886 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:56.886 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:56.886 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0 00:17:56.886 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:17:56.886 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:56.886 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.886 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:56.886 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.886 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:17:56.886 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:17:56.886 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:56.886 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:17:56.886 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:56.886 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:17:56.886 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:56.886 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:56.886 rmmod nvme_tcp 00:17:56.886 rmmod nvme_fabrics 00:17:56.886 rmmod nvme_keyring 00:17:56.886 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:56.886 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:17:56.886 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:17:56.886 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 2959389 ']' 00:17:56.886 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 2959389 00:17:56.886 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 2959389 ']' 00:17:56.886 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 2959389 00:17:56.886 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:17:56.886 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:56.886 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
2959389 00:17:56.886 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:56.886 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:56.886 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2959389' 00:17:56.886 killing process with pid 2959389 00:17:56.886 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 2959389 00:17:56.886 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 2959389 00:17:58.787 09:18:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:58.787 09:18:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:58.787 09:18:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:58.787 09:18:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:17:58.787 09:18:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:17:58.787 09:18:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:58.787 09:18:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:17:58.787 09:18:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:58.787 09:18:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:58.787 09:18:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:58.787 09:18:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:58.787 09:18:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:00.746 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:00.746 00:18:00.746 real 0m10.866s 00:18:00.746 user 0m23.804s 00:18:00.746 sys 0m2.520s 00:18:00.746 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:00.746 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:00.746 ************************************ 00:18:00.746 END TEST nvmf_nvme_cli 00:18:00.746 ************************************ 00:18:00.746 09:18:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 0 -eq 1 ]] 00:18:00.746 09:18:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:00.746 09:18:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:00.746 09:18:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:00.746 09:18:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:00.746 ************************************ 00:18:00.746 START TEST nvmf_auth_target 00:18:00.746 ************************************ 00:18:00.746 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 
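For reference, the nvmf_nvme_cli run that finishes above drives the target entirely through nvme-cli on the host side. Below is a condensed sketch of that flow, with the address, port, subsystem NQN, host NQN and serial taken from the log; it assumes the target and the cvl_0_* interface/namespace setup recorded earlier are already in place, so it is an illustration of the captured sequence rather than a replacement for nvme_cli.sh.

  #!/usr/bin/env bash
  # Sketch only: host-side commands condensed from the nvmf_nvme_cli trace above.
  set -euo pipefail
  traddr=10.0.0.2
  trsvcid=4420
  subnqn=nqn.2016-06.io.spdk:cnode1
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

  # Fetch the discovery log page; two records are expected, the discovery
  # subsystem itself and cnode1, matching the output captured above.
  nvme discover --hostnqn="$hostnqn" -t tcp -a "$traddr" -s "$trsvcid"

  # Connect; the two Malloc namespaces surface as /dev/nvme0n1 and /dev/nvme0n2
  # with serial SPDKISFASTANDAWESOME, which the test counts via lsblk.
  nvme connect --hostnqn="$hostnqn" -t tcp -n "$subnqn" -a "$traddr" -s "$trsvcid"
  sleep 2
  lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME

  # Tear down the host-side connection again, as the end of the test does.
  nvme disconnect -n "$subnqn"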
00:18:00.746 * Looking for test storage... 00:18:00.746 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:00.747 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:00.747 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:18:00.747 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:00.747 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:00.747 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:00.747 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:00.747 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:00.747 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:18:00.747 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:18:00.747 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:18:00.747 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:18:00.747 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:18:00.747 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:18:00.747 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:18:00.747 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:00.747 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:18:00.747 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:18:00.747 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:00.747 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:00.747 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:18:00.747 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:18:00.747 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:00.747 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:18:00.747 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:18:00.747 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:18:00.747 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:18:00.747 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:00.747 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:18:00.747 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:18:00.747 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:00.747 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:00.747 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:18:00.747 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:00.747 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:00.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.747 --rc genhtml_branch_coverage=1 00:18:00.747 --rc genhtml_function_coverage=1 00:18:00.747 --rc genhtml_legend=1 00:18:00.747 --rc geninfo_all_blocks=1 00:18:00.747 --rc geninfo_unexecuted_blocks=1 00:18:00.747 00:18:00.747 ' 00:18:00.747 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:00.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.747 --rc genhtml_branch_coverage=1 00:18:00.747 --rc genhtml_function_coverage=1 00:18:00.747 --rc genhtml_legend=1 00:18:00.747 --rc geninfo_all_blocks=1 00:18:00.747 --rc geninfo_unexecuted_blocks=1 00:18:00.747 00:18:00.747 ' 00:18:00.747 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:00.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.747 --rc genhtml_branch_coverage=1 00:18:00.747 --rc genhtml_function_coverage=1 00:18:00.747 --rc genhtml_legend=1 00:18:00.747 --rc geninfo_all_blocks=1 00:18:00.747 --rc geninfo_unexecuted_blocks=1 00:18:00.747 00:18:00.747 ' 00:18:00.747 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:00.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.747 --rc genhtml_branch_coverage=1 00:18:00.747 --rc genhtml_function_coverage=1 00:18:00.747 --rc genhtml_legend=1 00:18:00.747 --rc geninfo_all_blocks=1 00:18:00.747 --rc geninfo_unexecuted_blocks=1 00:18:00.747 00:18:00.747 ' 00:18:00.747 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:00.747 09:18:05 
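Note: the trace above is scripts/common.sh deciding whether the installed lcov is older than 2 before choosing LCOV_OPTS; versions are split on '.', '-' and ':' and compared element by element. A standalone sketch of that comparison idea, with illustrative names (version_lt is not the script's own helper) and assuming purely numeric components:

    version_lt() {              # returns 0 (true) when $1 sorts before $2
        local -a a b
        IFS=.-: read -ra a <<< "$1"
        IFS=.-: read -ra b <<< "$2"
        local i x y
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            x=${a[i]:-0} y=${b[i]:-0}
            (( x < y )) && return 0
            (( x > y )) && return 1
        done
        return 1                # equal is not "less than"
    }

    version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo 'lcov 1.x detected'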
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:18:00.747 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:00.747 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:00.747 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:00.747 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:00.747 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:00.747 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:00.747 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:00.747 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:00.747 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:00.747 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:00.747 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:00.747 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:00.747 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:00.747 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:00.747 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:00.747 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:00.747 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:00.747 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:18:00.747 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:00.747 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:00.747 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:00.747 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.747 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.747 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.747 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:18:00.747 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.747 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:18:00.747 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:00.747 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:00.747 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:00.747 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:00.747 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:00.747 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:00.747 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:00.747 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:00.747 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:00.747 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:00.747 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:18:00.748 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:18:00.748 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:18:00.748 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:00.748 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:18:00.748 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:18:00.748 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:18:00.748 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:18:00.748 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:00.748 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:00.748 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:00.748 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:00.748 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:00.748 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:00.748 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:00.748 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:00.748 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:00.748 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:00.748 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:18:00.748 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.650 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:02.650 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:18:02.650 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:02.650 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:02.650 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:02.650 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:02.650 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:02.650 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:18:02.650 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:02.650 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:18:02.650 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:18:02.650 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:18:02.650 
09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:18:02.650 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:18:02.650 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:18:02.650 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:02.650 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:02.650 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:02.650 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:02.650 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:02.650 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:02.650 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:02.650 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:02.650 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:02.650 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:02.650 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:02.650 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:02.650 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:02.650 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:02.650 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:02.650 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:02.650 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:02.650 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:02.650 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:02.650 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:02.650 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:02.650 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:02.650 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:02.650 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:02.650 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:02.650 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:02.650 09:18:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:02.650 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:02.650 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:02.650 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:02.651 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:02.651 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:02.651 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:02.651 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:02.651 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:02.651 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:02.651 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:02.651 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:02.651 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:02.651 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:02.651 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:02.651 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:02.651 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:02.651 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:02.651 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:02.651 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:02.651 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:02.651 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:02.651 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:02.651 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:02.651 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:02.651 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:02.651 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:02.651 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:02.651 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:02.651 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:02.651 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:18:02.651 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:02.651 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:18:02.651 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:02.651 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:02.651 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:02.651 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:02.651 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:02.651 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:02.651 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:02.651 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:02.651 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:02.651 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:02.651 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:02.651 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:02.651 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:02.651 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:02.651 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:02.651 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:02.651 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:02.651 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:02.651 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:02.651 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:02.651 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:02.651 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:02.651 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:02.651 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:02.651 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:02.651 09:18:07 
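Note: the block above detects the two E810 ports and then builds the test topology: cvl_0_0 is moved into a fresh network namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), and TCP port 4420 is opened for it. A condensed sketch of that sequence with the same names as this run (run as root; interface names differ per machine):

    NS=cvl_0_0_ns_spdk
    TGT_IF=cvl_0_0                      # target-side port, ends up inside the namespace
    INI_IF=cvl_0_1                      # initiator-side port, stays in the root namespace

    ip -4 addr flush "$TGT_IF"
    ip -4 addr flush "$INI_IF"
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"
    ip addr add 10.0.0.1/24 dev "$INI_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'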
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:02.651 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:02.651 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:18:02.651 00:18:02.651 --- 10.0.0.2 ping statistics --- 00:18:02.651 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:02.651 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:18:02.651 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:02.651 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:02.651 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms 00:18:02.651 00:18:02.651 --- 10.0.0.1 ping statistics --- 00:18:02.651 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:02.651 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:18:02.651 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:02.651 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:18:02.651 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:02.651 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:02.651 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:02.651 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:02.651 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:02.651 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:02.651 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:02.651 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:18:02.651 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:02.651 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:02.651 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.651 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2962308 00:18:02.651 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:18:02.651 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2962308 00:18:02.651 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2962308 ']' 00:18:02.651 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:02.651 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:02.651 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
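Note: once the two-way ping above succeeds, the target application is started inside the namespace with auth-layer logging and the harness waits for its RPC socket. A sketch of that launch using the binary path and flags shown in the trace; the poll loop is an illustrative stand-in for the harness's waitforlisten helper:

    NS=cvl_0_0_ns_spdk
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -L nvmf_auth &
    nvmfpid=$!

    # rpc.py talks to /var/tmp/spdk.sock by default; wait for the socket to appear
    for _ in $(seq 1 100); do
        [ -S /var/tmp/spdk.sock ] && break
        sleep 0.1
    done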
00:18:02.651 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:02.651 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.027 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:04.027 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:04.027 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:04.027 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:04.027 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.027 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:04.027 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=2962711 00:18:04.027 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:18:04.027 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:04.027 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:18:04.027 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:04.027 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:04.027 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:04.027 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:18:04.027 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:18:04.027 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:04.027 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=43ad910e53997bf61a1dd708f030a8fa1f48c812d0e7eb82 00:18:04.027 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:18:04.027 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.QNM 00:18:04.027 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 43ad910e53997bf61a1dd708f030a8fa1f48c812d0e7eb82 0 00:18:04.027 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 43ad910e53997bf61a1dd708f030a8fa1f48c812d0e7eb82 0 00:18:04.027 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:04.027 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:04.027 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=43ad910e53997bf61a1dd708f030a8fa1f48c812d0e7eb82 00:18:04.027 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:18:04.027 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 
00:18:04.027 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.QNM 00:18:04.027 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.QNM 00:18:04.027 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.QNM 00:18:04.027 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:18:04.027 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:04.027 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:04.027 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:04.027 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:18:04.027 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:18:04.027 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:04.027 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=6d06ea3f2b6d5d76d3a26968c1de159e4d8d56efd1861dd181227e93f4cb3f05 00:18:04.027 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:18:04.027 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.3ps 00:18:04.027 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 6d06ea3f2b6d5d76d3a26968c1de159e4d8d56efd1861dd181227e93f4cb3f05 3 00:18:04.027 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 6d06ea3f2b6d5d76d3a26968c1de159e4d8d56efd1861dd181227e93f4cb3f05 3 00:18:04.027 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:04.027 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:04.027 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=6d06ea3f2b6d5d76d3a26968c1de159e4d8d56efd1861dd181227e93f4cb3f05 00:18:04.027 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:18:04.027 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:04.027 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.3ps 00:18:04.027 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.3ps 00:18:04.027 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.3ps 00:18:04.027 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:18:04.027 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:04.027 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:04.027 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:04.027 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 
00:18:04.027 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:18:04.027 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:04.027 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=946428840e0ab1b88dd6253672e10dd8 00:18:04.027 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:18:04.027 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.UkQ 00:18:04.028 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 946428840e0ab1b88dd6253672e10dd8 1 00:18:04.028 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 946428840e0ab1b88dd6253672e10dd8 1 00:18:04.028 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:04.028 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:04.028 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=946428840e0ab1b88dd6253672e10dd8 00:18:04.028 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:18:04.028 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:04.028 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.UkQ 00:18:04.028 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.UkQ 00:18:04.028 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.UkQ 00:18:04.028 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:18:04.028 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:04.028 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:04.028 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:04.028 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:18:04.028 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:18:04.028 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:04.028 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=fca747362dee2039c0e1a8be7b272681046c159c7ba20a9c 00:18:04.028 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:18:04.028 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.ykB 00:18:04.028 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key fca747362dee2039c0e1a8be7b272681046c159c7ba20a9c 2 00:18:04.028 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 fca747362dee2039c0e1a8be7b272681046c159c7ba20a9c 2 00:18:04.028 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:04.028 09:18:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:04.028 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=fca747362dee2039c0e1a8be7b272681046c159c7ba20a9c 00:18:04.028 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:18:04.028 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:04.028 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.ykB 00:18:04.028 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.ykB 00:18:04.028 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.ykB 00:18:04.028 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:18:04.028 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:04.028 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:04.028 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:04.028 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:18:04.028 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:18:04.028 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:04.028 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=ae4430d2b95594991053c485e75af470410698dc0e553dd9 00:18:04.028 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:18:04.028 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Xj6 00:18:04.028 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key ae4430d2b95594991053c485e75af470410698dc0e553dd9 2 00:18:04.028 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 ae4430d2b95594991053c485e75af470410698dc0e553dd9 2 00:18:04.028 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:04.028 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:04.028 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=ae4430d2b95594991053c485e75af470410698dc0e553dd9 00:18:04.028 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:18:04.028 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:04.028 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Xj6 00:18:04.028 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Xj6 00:18:04.028 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.Xj6 00:18:04.028 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:18:04.028 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 
00:18:04.028 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:04.028 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:04.028 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:18:04.028 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:18:04.028 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:04.028 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=1002a5717242c0166ee8422ec24f5286 00:18:04.028 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:18:04.028 09:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Eh3 00:18:04.028 09:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 1002a5717242c0166ee8422ec24f5286 1 00:18:04.028 09:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 1002a5717242c0166ee8422ec24f5286 1 00:18:04.028 09:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:04.028 09:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:04.028 09:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=1002a5717242c0166ee8422ec24f5286 00:18:04.028 09:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:18:04.028 09:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:04.286 09:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Eh3 00:18:04.286 09:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Eh3 00:18:04.286 09:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.Eh3 00:18:04.286 09:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:18:04.286 09:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:04.286 09:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:04.286 09:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:04.286 09:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:18:04.286 09:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:18:04.286 09:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:04.286 09:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=9f7c1f84d8b8b1ed06e66eb7cc5ff4ba6108b04e62f4b9d9d8c2156fcd73b415 00:18:04.286 09:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:18:04.286 09:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Fig 00:18:04.286 09:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # 
format_dhchap_key 9f7c1f84d8b8b1ed06e66eb7cc5ff4ba6108b04e62f4b9d9d8c2156fcd73b415 3 00:18:04.286 09:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 9f7c1f84d8b8b1ed06e66eb7cc5ff4ba6108b04e62f4b9d9d8c2156fcd73b415 3 00:18:04.286 09:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:04.286 09:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:04.286 09:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=9f7c1f84d8b8b1ed06e66eb7cc5ff4ba6108b04e62f4b9d9d8c2156fcd73b415 00:18:04.286 09:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:18:04.286 09:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:04.286 09:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Fig 00:18:04.286 09:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Fig 00:18:04.286 09:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.Fig 00:18:04.286 09:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:18:04.286 09:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 2962308 00:18:04.286 09:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2962308 ']' 00:18:04.286 09:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:04.286 09:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:04.286 09:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:04.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:04.286 09:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:04.286 09:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.545 09:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:04.545 09:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:04.545 09:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 2962711 /var/tmp/host.sock 00:18:04.545 09:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2962711 ']' 00:18:04.545 09:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:18:04.545 09:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:04.545 09:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:04.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
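Note: each DHCHAP secret generated above comes from the gen_dhchap_key helper: random bytes are read from /dev/urandom with xxd and wrapped into the DHHC-1:<digest-id>:<base64 payload>: form (0 = no hash, 1 = sha256, 2 = sha384, 3 = sha512), then stored 0600 under /tmp. A sketch of one such key; treating the base64 payload as the hex key plus a 4-byte CRC32 is an assumption about the helper's inline python step, not something this log states:

    key=$(xxd -p -c0 -l 24 /dev/urandom)            # 24 random bytes -> 48 hex chars, like keys[0] above
    file=$(mktemp -t spdk.key-null.XXX)
    # digest id 0 here; the CRC32 suffix and its byte order are assumptions
    python3 -c 'import base64, struct, sys, zlib; k = sys.argv[1].encode(); print("DHHC-1:%02d:%s:" % (int(sys.argv[2]), base64.b64encode(k + struct.pack("<I", zlib.crc32(k))).decode()))' "$key" 0 > "$file"
    chmod 0600 "$file"
    echo "$file"                                    # e.g. /tmp/spdk.key-null.QNM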
00:18:04.545 09:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:04.545 09:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.113 09:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:05.113 09:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:05.113 09:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:18:05.113 09:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.113 09:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.113 09:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.113 09:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:05.113 09:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.QNM 00:18:05.113 09:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.113 09:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.113 09:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.113 09:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.QNM 00:18:05.113 09:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.QNM 00:18:05.680 09:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.3ps ]] 00:18:05.680 09:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.3ps 00:18:05.680 09:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.680 09:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.680 09:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.680 09:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.3ps 00:18:05.680 09:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.3ps 00:18:05.680 09:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:05.680 09:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.UkQ 00:18:05.680 09:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.680 09:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.680 09:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.680 09:18:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.UkQ 00:18:05.680 09:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.UkQ 00:18:05.938 09:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.ykB ]] 00:18:05.938 09:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ykB 00:18:05.938 09:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.938 09:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.938 09:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.939 09:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ykB 00:18:05.939 09:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ykB 00:18:06.197 09:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:06.197 09:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Xj6 00:18:06.197 09:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.197 09:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.455 09:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.455 09:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.Xj6 00:18:06.455 09:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.Xj6 00:18:06.713 09:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.Eh3 ]] 00:18:06.713 09:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Eh3 00:18:06.713 09:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.713 09:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.713 09:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.713 09:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Eh3 00:18:06.713 09:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Eh3 00:18:06.971 09:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:06.971 09:18:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Fig 00:18:06.971 09:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.971 09:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.971 09:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.971 09:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.Fig 00:18:06.971 09:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.Fig 00:18:07.229 09:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:18:07.229 09:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:18:07.229 09:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:07.229 09:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:07.229 09:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:07.229 09:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:07.487 09:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:18:07.487 09:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:07.487 09:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:07.487 09:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:07.487 09:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:07.487 09:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:07.487 09:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:07.487 09:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.487 09:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.487 09:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.487 09:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:07.487 09:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:07.487 
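Note: with both daemons up, every key file is registered twice through keyring_file_add_key — against the target's default /var/tmp/spdk.sock and, via the hostrpc wrapper, against /var/tmp/host.sock — and the host NQN is then admitted to the subsystem with a specific key pair. A condensed sketch of one round for key0/ckey0, using the rpc.py calls shown in the trace (rpc_cmd and hostrpc are thin wrappers around rpc.py); the authenticated bdev_nvme_attach_controller call follows in the trace right after this note:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    HOSTSOCK=/var/tmp/host.sock
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

    # register the key files with the target keyring and the host keyring
    "$SPDK/scripts/rpc.py" keyring_file_add_key key0 /tmp/spdk.key-null.QNM
    "$SPDK/scripts/rpc.py" keyring_file_add_key ckey0 /tmp/spdk.key-sha512.3ps
    "$SPDK/scripts/rpc.py" -s "$HOSTSOCK" keyring_file_add_key key0 /tmp/spdk.key-null.QNM
    "$SPDK/scripts/rpc.py" -s "$HOSTSOCK" keyring_file_add_key ckey0 /tmp/spdk.key-sha512.3ps

    # pin the host side to one digest/dhgroup combination, then allow the host on the subsystem
    "$SPDK/scripts/rpc.py" -s "$HOSTSOCK" bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
    "$SPDK/scripts/rpc.py" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0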
09:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:07.745 00:18:07.745 09:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:07.745 09:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:07.745 09:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:08.004 09:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.004 09:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:08.004 09:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.004 09:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.004 09:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.004 09:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:08.004 { 00:18:08.004 "cntlid": 1, 00:18:08.004 "qid": 0, 00:18:08.004 "state": "enabled", 00:18:08.004 "thread": "nvmf_tgt_poll_group_000", 00:18:08.004 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:08.004 "listen_address": { 00:18:08.004 "trtype": "TCP", 00:18:08.004 "adrfam": "IPv4", 00:18:08.004 "traddr": "10.0.0.2", 00:18:08.004 "trsvcid": "4420" 00:18:08.004 }, 00:18:08.004 "peer_address": { 00:18:08.004 "trtype": "TCP", 00:18:08.004 "adrfam": "IPv4", 00:18:08.004 "traddr": "10.0.0.1", 00:18:08.004 "trsvcid": "52610" 00:18:08.004 }, 00:18:08.004 "auth": { 00:18:08.004 "state": "completed", 00:18:08.004 "digest": "sha256", 00:18:08.004 "dhgroup": "null" 00:18:08.004 } 00:18:08.004 } 00:18:08.004 ]' 00:18:08.004 09:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:08.004 09:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:08.004 09:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:08.262 09:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:08.262 09:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:08.262 09:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:08.262 09:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:08.262 09:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.520 09:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:NDNhZDkxMGU1Mzk5N2JmNjFhMWRkNzA4ZjAzMGE4ZmExZjQ4YzgxMmQwZTdlYjgylIkWIw==: --dhchap-ctrl-secret DHHC-1:03:NmQwNmVhM2YyYjZkNWQ3NmQzYTI2OTY4YzFkZTE1OWU0ZDhkNTZlZmQxODYxZGQxODEyMjdlOTNmNGNiM2YwNX5NDuE=: 00:18:08.520 09:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NDNhZDkxMGU1Mzk5N2JmNjFhMWRkNzA4ZjAzMGE4ZmExZjQ4YzgxMmQwZTdlYjgylIkWIw==: --dhchap-ctrl-secret DHHC-1:03:NmQwNmVhM2YyYjZkNWQ3NmQzYTI2OTY4YzFkZTE1OWU0ZDhkNTZlZmQxODYxZGQxODEyMjdlOTNmNGNiM2YwNX5NDuE=: 00:18:09.454 09:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:09.454 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:09.454 09:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:09.454 09:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.454 09:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.454 09:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.454 09:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:09.454 09:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:09.454 09:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:09.713 09:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:18:09.713 09:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:09.713 09:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:09.713 09:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:09.713 09:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:09.713 09:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:09.713 09:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:09.713 09:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.713 09:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.713 09:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.713 09:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:09.713 09:18:14 
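Every connect_authenticate pass in this section repeats the same shape, so it is worth seeing one pass end to end. The following is a condensed sketch of the sha256/null/key0 pass, assembled only from the RPCs and nvme-cli flags visible in the trace (it is not the literal body of target/auth.sh); NQNs, addresses, and socket paths are copied from the log, and the DHHC-1 secrets are elided placeholders whose full values appear in the log:

    RPC="scripts/rpc.py"
    HOSTRPC="scripts/rpc.py -s /var/tmp/host.sock"
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
    SUBNQN=nqn.2024-03.io.spdk:cnode0

    # Pin the host to one digest/dhgroup combination for this pass.
    $HOSTRPC bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null

    # Target side: allow the host on the subsystem with this pass's key (and controller key).
    $RPC nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Host side (SPDK initiator): attaching performs the DH-HMAC-CHAP handshake.
    $HOSTRPC bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q $HOSTNQN -n $SUBNQN -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # ...auth state checks on the resulting qpair (sketched further below)...

    # Tear down, then repeat with the kernel initiator, which takes the raw secrets.
    $HOSTRPC bdev_nvme_detach_controller nvme0
    nvme connect -t tcp -a 10.0.0.2 -n $SUBNQN -i 1 -q $HOSTNQN \
        --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 \
        --dhchap-secret "DHHC-1:00:..." --dhchap-ctrl-secret "DHHC-1:03:..."
    nvme disconnect -n $SUBNQN
    $RPC nvmf_subsystem_remove_host $SUBNQN $HOSTNQN
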
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:09.713 09:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:10.280 00:18:10.280 09:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:10.280 09:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:10.280 09:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:10.280 09:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.280 09:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:10.280 09:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.280 09:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.538 09:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.538 09:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:10.538 { 00:18:10.538 "cntlid": 3, 00:18:10.538 "qid": 0, 00:18:10.538 "state": "enabled", 00:18:10.538 "thread": "nvmf_tgt_poll_group_000", 00:18:10.538 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:10.538 "listen_address": { 00:18:10.538 "trtype": "TCP", 00:18:10.538 "adrfam": "IPv4", 00:18:10.538 "traddr": "10.0.0.2", 00:18:10.538 "trsvcid": "4420" 00:18:10.538 }, 00:18:10.538 "peer_address": { 00:18:10.538 "trtype": "TCP", 00:18:10.538 "adrfam": "IPv4", 00:18:10.538 "traddr": "10.0.0.1", 00:18:10.538 "trsvcid": "52644" 00:18:10.538 }, 00:18:10.538 "auth": { 00:18:10.538 "state": "completed", 00:18:10.538 "digest": "sha256", 00:18:10.538 "dhgroup": "null" 00:18:10.538 } 00:18:10.538 } 00:18:10.538 ]' 00:18:10.538 09:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:10.538 09:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:10.538 09:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:10.538 09:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:10.538 09:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:10.538 09:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:10.538 09:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:10.538 09:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.796 09:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTQ2NDI4ODQwZTBhYjFiODhkZDYyNTM2NzJlMTBkZDid1XeT: --dhchap-ctrl-secret DHHC-1:02:ZmNhNzQ3MzYyZGVlMjAzOWMwZTFhOGJlN2IyNzI2ODEwNDZjMTU5YzdiYTIwYTljuRKexg==: 00:18:10.796 09:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:OTQ2NDI4ODQwZTBhYjFiODhkZDYyNTM2NzJlMTBkZDid1XeT: --dhchap-ctrl-secret DHHC-1:02:ZmNhNzQ3MzYyZGVlMjAzOWMwZTFhOGJlN2IyNzI2ODEwNDZjMTU5YzdiYTIwYTljuRKexg==: 00:18:11.731 09:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.731 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:11.731 09:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:11.731 09:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.731 09:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.731 09:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.731 09:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:11.731 09:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:11.731 09:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:12.297 09:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:18:12.297 09:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:12.297 09:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:12.297 09:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:12.297 09:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:12.297 09:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:12.297 09:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:12.297 09:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.297 09:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.297 09:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.297 09:18:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:12.297 09:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:12.297 09:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:12.555 00:18:12.555 09:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:12.555 09:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:12.555 09:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.813 09:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.813 09:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:12.813 09:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.813 09:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.813 09:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.813 09:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:12.813 { 00:18:12.813 "cntlid": 5, 00:18:12.813 "qid": 0, 00:18:12.813 "state": "enabled", 00:18:12.813 "thread": "nvmf_tgt_poll_group_000", 00:18:12.813 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:12.813 "listen_address": { 00:18:12.813 "trtype": "TCP", 00:18:12.813 "adrfam": "IPv4", 00:18:12.813 "traddr": "10.0.0.2", 00:18:12.813 "trsvcid": "4420" 00:18:12.813 }, 00:18:12.813 "peer_address": { 00:18:12.813 "trtype": "TCP", 00:18:12.813 "adrfam": "IPv4", 00:18:12.813 "traddr": "10.0.0.1", 00:18:12.813 "trsvcid": "44256" 00:18:12.813 }, 00:18:12.813 "auth": { 00:18:12.813 "state": "completed", 00:18:12.813 "digest": "sha256", 00:18:12.813 "dhgroup": "null" 00:18:12.813 } 00:18:12.813 } 00:18:12.813 ]' 00:18:12.813 09:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:12.813 09:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:12.813 09:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:12.813 09:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:12.813 09:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:13.071 09:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:13.071 09:18:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:13.071 09:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:13.329 09:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWU0NDMwZDJiOTU1OTQ5OTEwNTNjNDg1ZTc1YWY0NzA0MTA2OThkYzBlNTUzZGQ5V5MaRg==: --dhchap-ctrl-secret DHHC-1:01:MTAwMmE1NzE3MjQyYzAxNjZlZTg0MjJlYzI0ZjUyODYKYUfB: 00:18:13.329 09:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YWU0NDMwZDJiOTU1OTQ5OTEwNTNjNDg1ZTc1YWY0NzA0MTA2OThkYzBlNTUzZGQ5V5MaRg==: --dhchap-ctrl-secret DHHC-1:01:MTAwMmE1NzE3MjQyYzAxNjZlZTg0MjJlYzI0ZjUyODYKYUfB: 00:18:14.262 09:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:14.262 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:14.262 09:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:14.262 09:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.262 09:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.262 09:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.262 09:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:14.262 09:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:14.262 09:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:14.520 09:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:18:14.520 09:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:14.520 09:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:14.520 09:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:14.520 09:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:14.520 09:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:14.520 09:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:18:14.520 09:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.520 09:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:14.520 09:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.520 09:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:14.520 09:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:14.520 09:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:14.778 00:18:14.778 09:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:14.778 09:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.778 09:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:15.036 09:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.036 09:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:15.036 09:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.036 09:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.036 09:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.036 09:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:15.036 { 00:18:15.036 "cntlid": 7, 00:18:15.036 "qid": 0, 00:18:15.036 "state": "enabled", 00:18:15.036 "thread": "nvmf_tgt_poll_group_000", 00:18:15.036 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:15.036 "listen_address": { 00:18:15.036 "trtype": "TCP", 00:18:15.036 "adrfam": "IPv4", 00:18:15.036 "traddr": "10.0.0.2", 00:18:15.036 "trsvcid": "4420" 00:18:15.036 }, 00:18:15.036 "peer_address": { 00:18:15.036 "trtype": "TCP", 00:18:15.036 "adrfam": "IPv4", 00:18:15.036 "traddr": "10.0.0.1", 00:18:15.036 "trsvcid": "44278" 00:18:15.036 }, 00:18:15.036 "auth": { 00:18:15.036 "state": "completed", 00:18:15.036 "digest": "sha256", 00:18:15.036 "dhgroup": "null" 00:18:15.036 } 00:18:15.036 } 00:18:15.036 ]' 00:18:15.036 09:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:15.295 09:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:15.295 09:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:15.295 09:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:15.295 09:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:15.295 09:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:15.295 09:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:15.295 09:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:15.553 09:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWY3YzFmODRkOGI4YjFlZDA2ZTY2ZWI3Y2M1ZmY0YmE2MTA4YjA0ZTYyZjRiOWQ5ZDhjMjE1NmZjZDczYjQxNQ5mL7E=: 00:18:15.553 09:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OWY3YzFmODRkOGI4YjFlZDA2ZTY2ZWI3Y2M1ZmY0YmE2MTA4YjA0ZTYyZjRiOWQ5ZDhjMjE1NmZjZDczYjQxNQ5mL7E=: 00:18:16.486 09:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:16.486 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:16.486 09:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:16.486 09:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.486 09:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.486 09:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.486 09:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:16.486 09:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:16.486 09:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:16.486 09:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:16.745 09:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:18:16.745 09:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:16.745 09:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:16.745 09:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:16.745 09:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:16.745 09:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:16.745 09:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:16.745 09:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
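From here the outer loops move past the null group onto finite-field DH groups; the only knob that changes between passes is the host-side bdev_nvme_set_options call. A sketch of how this part of the trace steps through its combinations (sha256 with null, ffdhe2048, then ffdhe3072, each over key0..key3) under the loop structure visible at auth.sh@118-121 — the full digest/dhgroup lists used by the script are not visible in this excerpt, so only the combinations shown here are assumed:

    # The host is reconfigured before every pass; the target keeps its configuration
    # and negotiates whatever single digest/dhgroup the host offers.
    for dhgroup in null ffdhe2048 ffdhe3072; do
        for keyid in 0 1 2 3; do
            scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
                --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
            # ...connect_authenticate sha256 $dhgroup $keyid, as sketched earlier...
        done
    done
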
common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.745 09:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.745 09:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.745 09:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:16.745 09:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:16.745 09:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:17.310 00:18:17.310 09:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:17.310 09:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:17.310 09:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:17.568 09:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.568 09:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:17.568 09:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.568 09:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.568 09:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.568 09:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:17.568 { 00:18:17.568 "cntlid": 9, 00:18:17.568 "qid": 0, 00:18:17.568 "state": "enabled", 00:18:17.568 "thread": "nvmf_tgt_poll_group_000", 00:18:17.568 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:17.568 "listen_address": { 00:18:17.568 "trtype": "TCP", 00:18:17.568 "adrfam": "IPv4", 00:18:17.568 "traddr": "10.0.0.2", 00:18:17.568 "trsvcid": "4420" 00:18:17.568 }, 00:18:17.568 "peer_address": { 00:18:17.568 "trtype": "TCP", 00:18:17.568 "adrfam": "IPv4", 00:18:17.568 "traddr": "10.0.0.1", 00:18:17.568 "trsvcid": "44298" 00:18:17.568 }, 00:18:17.568 "auth": { 00:18:17.568 "state": "completed", 00:18:17.568 "digest": "sha256", 00:18:17.568 "dhgroup": "ffdhe2048" 00:18:17.568 } 00:18:17.568 } 00:18:17.568 ]' 00:18:17.568 09:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:17.568 09:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:17.568 09:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:17.568 09:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:18:17.568 09:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:17.568 09:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:17.568 09:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:17.568 09:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:17.826 09:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDNhZDkxMGU1Mzk5N2JmNjFhMWRkNzA4ZjAzMGE4ZmExZjQ4YzgxMmQwZTdlYjgylIkWIw==: --dhchap-ctrl-secret DHHC-1:03:NmQwNmVhM2YyYjZkNWQ3NmQzYTI2OTY4YzFkZTE1OWU0ZDhkNTZlZmQxODYxZGQxODEyMjdlOTNmNGNiM2YwNX5NDuE=: 00:18:17.826 09:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NDNhZDkxMGU1Mzk5N2JmNjFhMWRkNzA4ZjAzMGE4ZmExZjQ4YzgxMmQwZTdlYjgylIkWIw==: --dhchap-ctrl-secret DHHC-1:03:NmQwNmVhM2YyYjZkNWQ3NmQzYTI2OTY4YzFkZTE1OWU0ZDhkNTZlZmQxODYxZGQxODEyMjdlOTNmNGNiM2YwNX5NDuE=: 00:18:18.760 09:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:18.760 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:18.760 09:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:18.760 09:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.760 09:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.760 09:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.760 09:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:18.760 09:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:18.760 09:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:19.018 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:18:19.018 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:19.018 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:19.018 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:19.018 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:19.018 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:19.018 09:18:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:19.018 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.018 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.276 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.276 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:19.277 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:19.277 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:19.535 00:18:19.535 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:19.535 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:19.535 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:19.793 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.793 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:19.793 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.793 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.793 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.793 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:19.793 { 00:18:19.793 "cntlid": 11, 00:18:19.793 "qid": 0, 00:18:19.793 "state": "enabled", 00:18:19.793 "thread": "nvmf_tgt_poll_group_000", 00:18:19.793 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:19.793 "listen_address": { 00:18:19.793 "trtype": "TCP", 00:18:19.793 "adrfam": "IPv4", 00:18:19.793 "traddr": "10.0.0.2", 00:18:19.793 "trsvcid": "4420" 00:18:19.793 }, 00:18:19.793 "peer_address": { 00:18:19.793 "trtype": "TCP", 00:18:19.793 "adrfam": "IPv4", 00:18:19.793 "traddr": "10.0.0.1", 00:18:19.793 "trsvcid": "44324" 00:18:19.793 }, 00:18:19.793 "auth": { 00:18:19.793 "state": "completed", 00:18:19.793 "digest": "sha256", 00:18:19.793 "dhgroup": "ffdhe2048" 00:18:19.793 } 00:18:19.793 } 00:18:19.793 ]' 00:18:19.793 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:19.793 09:18:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:19.793 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:19.793 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:20.051 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:20.051 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:20.051 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:20.051 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:20.310 09:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTQ2NDI4ODQwZTBhYjFiODhkZDYyNTM2NzJlMTBkZDid1XeT: --dhchap-ctrl-secret DHHC-1:02:ZmNhNzQ3MzYyZGVlMjAzOWMwZTFhOGJlN2IyNzI2ODEwNDZjMTU5YzdiYTIwYTljuRKexg==: 00:18:20.310 09:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:OTQ2NDI4ODQwZTBhYjFiODhkZDYyNTM2NzJlMTBkZDid1XeT: --dhchap-ctrl-secret DHHC-1:02:ZmNhNzQ3MzYyZGVlMjAzOWMwZTFhOGJlN2IyNzI2ODEwNDZjMTU5YzdiYTIwYTljuRKexg==: 00:18:21.243 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:21.243 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:21.243 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:21.243 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.243 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.243 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.243 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:21.243 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:21.243 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:21.501 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:18:21.501 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:21.501 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:21.501 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:21.501 09:18:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:21.502 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:21.502 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:21.502 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.502 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.502 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.502 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:21.502 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:21.502 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:21.760 00:18:21.760 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:21.760 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:21.760 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:22.018 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.018 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:22.018 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.018 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.018 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.018 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:22.018 { 00:18:22.018 "cntlid": 13, 00:18:22.018 "qid": 0, 00:18:22.018 "state": "enabled", 00:18:22.018 "thread": "nvmf_tgt_poll_group_000", 00:18:22.018 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:22.018 "listen_address": { 00:18:22.018 "trtype": "TCP", 00:18:22.018 "adrfam": "IPv4", 00:18:22.018 "traddr": "10.0.0.2", 00:18:22.018 "trsvcid": "4420" 00:18:22.018 }, 00:18:22.018 "peer_address": { 00:18:22.018 "trtype": "TCP", 00:18:22.018 "adrfam": "IPv4", 00:18:22.018 "traddr": "10.0.0.1", 00:18:22.018 "trsvcid": "46748" 00:18:22.018 }, 00:18:22.018 "auth": { 00:18:22.018 "state": "completed", 00:18:22.018 "digest": 
"sha256", 00:18:22.018 "dhgroup": "ffdhe2048" 00:18:22.018 } 00:18:22.018 } 00:18:22.018 ]' 00:18:22.018 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:22.275 09:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:22.275 09:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:22.275 09:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:22.275 09:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:22.275 09:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:22.275 09:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:22.275 09:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:22.532 09:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWU0NDMwZDJiOTU1OTQ5OTEwNTNjNDg1ZTc1YWY0NzA0MTA2OThkYzBlNTUzZGQ5V5MaRg==: --dhchap-ctrl-secret DHHC-1:01:MTAwMmE1NzE3MjQyYzAxNjZlZTg0MjJlYzI0ZjUyODYKYUfB: 00:18:22.532 09:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YWU0NDMwZDJiOTU1OTQ5OTEwNTNjNDg1ZTc1YWY0NzA0MTA2OThkYzBlNTUzZGQ5V5MaRg==: --dhchap-ctrl-secret DHHC-1:01:MTAwMmE1NzE3MjQyYzAxNjZlZTg0MjJlYzI0ZjUyODYKYUfB: 00:18:23.466 09:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:23.466 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:23.466 09:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:23.466 09:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.466 09:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.466 09:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.466 09:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:23.466 09:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:23.466 09:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:23.724 09:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:18:23.724 09:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:23.724 09:18:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:23.724 09:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:23.724 09:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:23.724 09:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:23.724 09:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:18:23.724 09:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.724 09:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.724 09:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.724 09:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:23.724 09:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:23.724 09:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:24.290 00:18:24.290 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:24.290 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:24.290 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:24.290 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.290 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:24.290 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.290 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.290 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.290 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:24.290 { 00:18:24.290 "cntlid": 15, 00:18:24.290 "qid": 0, 00:18:24.290 "state": "enabled", 00:18:24.290 "thread": "nvmf_tgt_poll_group_000", 00:18:24.290 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:24.290 "listen_address": { 00:18:24.290 "trtype": "TCP", 00:18:24.290 "adrfam": "IPv4", 00:18:24.290 "traddr": "10.0.0.2", 00:18:24.290 "trsvcid": "4420" 00:18:24.290 }, 00:18:24.290 "peer_address": { 00:18:24.290 "trtype": "TCP", 00:18:24.290 "adrfam": "IPv4", 00:18:24.290 "traddr": "10.0.0.1", 00:18:24.290 
"trsvcid": "46786" 00:18:24.290 }, 00:18:24.290 "auth": { 00:18:24.290 "state": "completed", 00:18:24.290 "digest": "sha256", 00:18:24.290 "dhgroup": "ffdhe2048" 00:18:24.290 } 00:18:24.290 } 00:18:24.290 ]' 00:18:24.290 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:24.548 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:24.548 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:24.548 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:24.548 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:24.548 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:24.548 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:24.548 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:24.806 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWY3YzFmODRkOGI4YjFlZDA2ZTY2ZWI3Y2M1ZmY0YmE2MTA4YjA0ZTYyZjRiOWQ5ZDhjMjE1NmZjZDczYjQxNQ5mL7E=: 00:18:24.806 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OWY3YzFmODRkOGI4YjFlZDA2ZTY2ZWI3Y2M1ZmY0YmE2MTA4YjA0ZTYyZjRiOWQ5ZDhjMjE1NmZjZDczYjQxNQ5mL7E=: 00:18:25.740 09:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:25.740 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:25.740 09:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:25.740 09:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.740 09:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.740 09:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.740 09:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:25.740 09:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:25.740 09:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:25.740 09:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:25.998 09:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:18:25.998 09:18:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:25.998 09:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:25.998 09:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:25.998 09:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:25.998 09:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:25.998 09:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:25.999 09:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.999 09:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.999 09:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.999 09:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:25.999 09:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:25.999 09:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:26.565 00:18:26.565 09:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:26.565 09:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:26.565 09:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:26.823 09:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.823 09:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:26.823 09:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.823 09:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.823 09:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.823 09:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:26.823 { 00:18:26.823 "cntlid": 17, 00:18:26.823 "qid": 0, 00:18:26.823 "state": "enabled", 00:18:26.823 "thread": "nvmf_tgt_poll_group_000", 00:18:26.823 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:26.823 "listen_address": { 00:18:26.823 "trtype": "TCP", 00:18:26.823 "adrfam": "IPv4", 
00:18:26.823 "traddr": "10.0.0.2", 00:18:26.823 "trsvcid": "4420" 00:18:26.823 }, 00:18:26.823 "peer_address": { 00:18:26.823 "trtype": "TCP", 00:18:26.823 "adrfam": "IPv4", 00:18:26.823 "traddr": "10.0.0.1", 00:18:26.823 "trsvcid": "46810" 00:18:26.823 }, 00:18:26.823 "auth": { 00:18:26.823 "state": "completed", 00:18:26.823 "digest": "sha256", 00:18:26.823 "dhgroup": "ffdhe3072" 00:18:26.823 } 00:18:26.823 } 00:18:26.823 ]' 00:18:26.823 09:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:26.823 09:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:26.823 09:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:26.823 09:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:26.823 09:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:26.823 09:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:26.823 09:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:26.823 09:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:27.082 09:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDNhZDkxMGU1Mzk5N2JmNjFhMWRkNzA4ZjAzMGE4ZmExZjQ4YzgxMmQwZTdlYjgylIkWIw==: --dhchap-ctrl-secret DHHC-1:03:NmQwNmVhM2YyYjZkNWQ3NmQzYTI2OTY4YzFkZTE1OWU0ZDhkNTZlZmQxODYxZGQxODEyMjdlOTNmNGNiM2YwNX5NDuE=: 00:18:27.082 09:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NDNhZDkxMGU1Mzk5N2JmNjFhMWRkNzA4ZjAzMGE4ZmExZjQ4YzgxMmQwZTdlYjgylIkWIw==: --dhchap-ctrl-secret DHHC-1:03:NmQwNmVhM2YyYjZkNWQ3NmQzYTI2OTY4YzFkZTE1OWU0ZDhkNTZlZmQxODYxZGQxODEyMjdlOTNmNGNiM2YwNX5NDuE=: 00:18:28.016 09:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:28.016 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:28.016 09:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:28.016 09:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.016 09:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.016 09:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.016 09:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:28.016 09:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:28.016 09:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:28.274 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:18:28.274 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:28.274 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:28.274 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:28.274 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:28.274 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:28.274 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:28.274 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.274 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.532 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.532 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:28.532 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:28.532 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:28.790 00:18:28.790 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:28.790 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:28.790 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:29.048 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.048 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:29.048 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.048 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.048 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.048 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:29.048 { 
00:18:29.048 "cntlid": 19, 00:18:29.048 "qid": 0, 00:18:29.048 "state": "enabled", 00:18:29.048 "thread": "nvmf_tgt_poll_group_000", 00:18:29.048 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:29.048 "listen_address": { 00:18:29.048 "trtype": "TCP", 00:18:29.048 "adrfam": "IPv4", 00:18:29.048 "traddr": "10.0.0.2", 00:18:29.048 "trsvcid": "4420" 00:18:29.048 }, 00:18:29.048 "peer_address": { 00:18:29.048 "trtype": "TCP", 00:18:29.048 "adrfam": "IPv4", 00:18:29.048 "traddr": "10.0.0.1", 00:18:29.048 "trsvcid": "46840" 00:18:29.048 }, 00:18:29.048 "auth": { 00:18:29.048 "state": "completed", 00:18:29.048 "digest": "sha256", 00:18:29.048 "dhgroup": "ffdhe3072" 00:18:29.048 } 00:18:29.048 } 00:18:29.048 ]' 00:18:29.048 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:29.048 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:29.048 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:29.048 09:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:29.048 09:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:29.313 09:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:29.313 09:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:29.313 09:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:29.639 09:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTQ2NDI4ODQwZTBhYjFiODhkZDYyNTM2NzJlMTBkZDid1XeT: --dhchap-ctrl-secret DHHC-1:02:ZmNhNzQ3MzYyZGVlMjAzOWMwZTFhOGJlN2IyNzI2ODEwNDZjMTU5YzdiYTIwYTljuRKexg==: 00:18:29.639 09:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:OTQ2NDI4ODQwZTBhYjFiODhkZDYyNTM2NzJlMTBkZDid1XeT: --dhchap-ctrl-secret DHHC-1:02:ZmNhNzQ3MzYyZGVlMjAzOWMwZTFhOGJlN2IyNzI2ODEwNDZjMTU5YzdiYTIwYTljuRKexg==: 00:18:30.597 09:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:30.597 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:30.597 09:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:30.597 09:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.597 09:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.597 09:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.597 09:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:30.597 09:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:30.597 09:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:30.855 09:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:18:30.855 09:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:30.855 09:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:30.855 09:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:30.855 09:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:30.855 09:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:30.855 09:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:30.855 09:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.855 09:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.855 09:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.855 09:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:30.855 09:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:30.855 09:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:31.113 00:18:31.113 09:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:31.113 09:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:31.113 09:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:31.371 09:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.371 09:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:31.371 09:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.371 09:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.371 09:18:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.371 09:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:31.371 { 00:18:31.371 "cntlid": 21, 00:18:31.371 "qid": 0, 00:18:31.371 "state": "enabled", 00:18:31.371 "thread": "nvmf_tgt_poll_group_000", 00:18:31.371 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:31.371 "listen_address": { 00:18:31.371 "trtype": "TCP", 00:18:31.371 "adrfam": "IPv4", 00:18:31.371 "traddr": "10.0.0.2", 00:18:31.371 "trsvcid": "4420" 00:18:31.371 }, 00:18:31.371 "peer_address": { 00:18:31.371 "trtype": "TCP", 00:18:31.371 "adrfam": "IPv4", 00:18:31.371 "traddr": "10.0.0.1", 00:18:31.371 "trsvcid": "33692" 00:18:31.371 }, 00:18:31.371 "auth": { 00:18:31.371 "state": "completed", 00:18:31.371 "digest": "sha256", 00:18:31.371 "dhgroup": "ffdhe3072" 00:18:31.371 } 00:18:31.371 } 00:18:31.371 ]' 00:18:31.371 09:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:31.371 09:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:31.371 09:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:31.371 09:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:31.371 09:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:31.629 09:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:31.629 09:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:31.629 09:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:31.888 09:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWU0NDMwZDJiOTU1OTQ5OTEwNTNjNDg1ZTc1YWY0NzA0MTA2OThkYzBlNTUzZGQ5V5MaRg==: --dhchap-ctrl-secret DHHC-1:01:MTAwMmE1NzE3MjQyYzAxNjZlZTg0MjJlYzI0ZjUyODYKYUfB: 00:18:31.888 09:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YWU0NDMwZDJiOTU1OTQ5OTEwNTNjNDg1ZTc1YWY0NzA0MTA2OThkYzBlNTUzZGQ5V5MaRg==: --dhchap-ctrl-secret DHHC-1:01:MTAwMmE1NzE3MjQyYzAxNjZlZTg0MjJlYzI0ZjUyODYKYUfB: 00:18:32.821 09:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:32.821 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:32.821 09:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:32.821 09:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.821 09:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.821 09:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:18:32.821 09:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:32.821 09:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:32.821 09:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:33.079 09:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:18:33.079 09:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:33.079 09:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:33.079 09:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:33.079 09:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:33.079 09:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:33.079 09:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:18:33.079 09:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.079 09:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.079 09:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.079 09:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:33.079 09:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:33.079 09:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:33.336 00:18:33.337 09:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:33.337 09:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:33.337 09:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:33.594 09:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.594 09:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:33.594 09:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.594 09:18:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.594 09:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.594 09:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:33.594 { 00:18:33.594 "cntlid": 23, 00:18:33.594 "qid": 0, 00:18:33.594 "state": "enabled", 00:18:33.594 "thread": "nvmf_tgt_poll_group_000", 00:18:33.594 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:33.594 "listen_address": { 00:18:33.594 "trtype": "TCP", 00:18:33.594 "adrfam": "IPv4", 00:18:33.594 "traddr": "10.0.0.2", 00:18:33.594 "trsvcid": "4420" 00:18:33.594 }, 00:18:33.594 "peer_address": { 00:18:33.594 "trtype": "TCP", 00:18:33.594 "adrfam": "IPv4", 00:18:33.594 "traddr": "10.0.0.1", 00:18:33.594 "trsvcid": "33724" 00:18:33.594 }, 00:18:33.594 "auth": { 00:18:33.594 "state": "completed", 00:18:33.594 "digest": "sha256", 00:18:33.594 "dhgroup": "ffdhe3072" 00:18:33.594 } 00:18:33.594 } 00:18:33.594 ]' 00:18:33.594 09:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:33.852 09:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:33.852 09:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:33.852 09:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:33.852 09:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:33.852 09:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:33.852 09:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:33.852 09:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:34.110 09:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWY3YzFmODRkOGI4YjFlZDA2ZTY2ZWI3Y2M1ZmY0YmE2MTA4YjA0ZTYyZjRiOWQ5ZDhjMjE1NmZjZDczYjQxNQ5mL7E=: 00:18:34.110 09:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OWY3YzFmODRkOGI4YjFlZDA2ZTY2ZWI3Y2M1ZmY0YmE2MTA4YjA0ZTYyZjRiOWQ5ZDhjMjE1NmZjZDczYjQxNQ5mL7E=: 00:18:35.043 09:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:35.043 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:35.043 09:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:35.043 09:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.043 09:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.043 09:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:18:35.043 09:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:35.043 09:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:35.043 09:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:35.043 09:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:35.301 09:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:18:35.301 09:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:35.301 09:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:35.301 09:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:35.301 09:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:35.301 09:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:35.301 09:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:35.301 09:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.301 09:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.301 09:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.301 09:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:35.301 09:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:35.301 09:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:35.867 00:18:35.867 09:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:35.867 09:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:35.867 09:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.125 09:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.125 09:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:36.125 09:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.125 09:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.125 09:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.125 09:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:36.125 { 00:18:36.125 "cntlid": 25, 00:18:36.125 "qid": 0, 00:18:36.125 "state": "enabled", 00:18:36.125 "thread": "nvmf_tgt_poll_group_000", 00:18:36.125 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:36.125 "listen_address": { 00:18:36.125 "trtype": "TCP", 00:18:36.125 "adrfam": "IPv4", 00:18:36.125 "traddr": "10.0.0.2", 00:18:36.125 "trsvcid": "4420" 00:18:36.125 }, 00:18:36.125 "peer_address": { 00:18:36.125 "trtype": "TCP", 00:18:36.125 "adrfam": "IPv4", 00:18:36.125 "traddr": "10.0.0.1", 00:18:36.125 "trsvcid": "33764" 00:18:36.125 }, 00:18:36.125 "auth": { 00:18:36.125 "state": "completed", 00:18:36.125 "digest": "sha256", 00:18:36.125 "dhgroup": "ffdhe4096" 00:18:36.125 } 00:18:36.125 } 00:18:36.125 ]' 00:18:36.125 09:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:36.125 09:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:36.125 09:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:36.125 09:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:36.125 09:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:36.125 09:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:36.125 09:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:36.125 09:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:36.384 09:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDNhZDkxMGU1Mzk5N2JmNjFhMWRkNzA4ZjAzMGE4ZmExZjQ4YzgxMmQwZTdlYjgylIkWIw==: --dhchap-ctrl-secret DHHC-1:03:NmQwNmVhM2YyYjZkNWQ3NmQzYTI2OTY4YzFkZTE1OWU0ZDhkNTZlZmQxODYxZGQxODEyMjdlOTNmNGNiM2YwNX5NDuE=: 00:18:36.384 09:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NDNhZDkxMGU1Mzk5N2JmNjFhMWRkNzA4ZjAzMGE4ZmExZjQ4YzgxMmQwZTdlYjgylIkWIw==: --dhchap-ctrl-secret DHHC-1:03:NmQwNmVhM2YyYjZkNWQ3NmQzYTI2OTY4YzFkZTE1OWU0ZDhkNTZlZmQxODYxZGQxODEyMjdlOTNmNGNiM2YwNX5NDuE=: 00:18:37.756 09:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:37.756 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:37.756 09:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:37.756 09:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.756 09:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.756 09:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.756 09:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:37.756 09:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:37.756 09:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:37.756 09:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:18:37.756 09:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:37.756 09:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:37.756 09:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:37.756 09:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:37.756 09:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:37.756 09:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:37.756 09:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.756 09:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.756 09:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.756 09:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:37.756 09:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:37.756 09:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:38.320 00:18:38.320 09:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:38.320 09:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:38.320 09:18:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:38.578 09:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.578 09:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:38.578 09:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.579 09:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.579 09:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.579 09:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:38.579 { 00:18:38.579 "cntlid": 27, 00:18:38.579 "qid": 0, 00:18:38.579 "state": "enabled", 00:18:38.579 "thread": "nvmf_tgt_poll_group_000", 00:18:38.579 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:38.579 "listen_address": { 00:18:38.579 "trtype": "TCP", 00:18:38.579 "adrfam": "IPv4", 00:18:38.579 "traddr": "10.0.0.2", 00:18:38.579 "trsvcid": "4420" 00:18:38.579 }, 00:18:38.579 "peer_address": { 00:18:38.579 "trtype": "TCP", 00:18:38.579 "adrfam": "IPv4", 00:18:38.579 "traddr": "10.0.0.1", 00:18:38.579 "trsvcid": "33792" 00:18:38.579 }, 00:18:38.579 "auth": { 00:18:38.579 "state": "completed", 00:18:38.579 "digest": "sha256", 00:18:38.579 "dhgroup": "ffdhe4096" 00:18:38.579 } 00:18:38.579 } 00:18:38.579 ]' 00:18:38.579 09:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:38.579 09:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:38.579 09:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:38.579 09:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:38.579 09:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:38.579 09:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:38.579 09:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:38.579 09:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:38.837 09:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTQ2NDI4ODQwZTBhYjFiODhkZDYyNTM2NzJlMTBkZDid1XeT: --dhchap-ctrl-secret DHHC-1:02:ZmNhNzQ3MzYyZGVlMjAzOWMwZTFhOGJlN2IyNzI2ODEwNDZjMTU5YzdiYTIwYTljuRKexg==: 00:18:38.837 09:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:OTQ2NDI4ODQwZTBhYjFiODhkZDYyNTM2NzJlMTBkZDid1XeT: --dhchap-ctrl-secret DHHC-1:02:ZmNhNzQ3MzYyZGVlMjAzOWMwZTFhOGJlN2IyNzI2ODEwNDZjMTU5YzdiYTIwYTljuRKexg==: 00:18:39.769 09:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:39.769 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:39.769 09:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:39.769 09:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.769 09:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.769 09:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.769 09:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:39.769 09:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:39.769 09:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:40.335 09:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:18:40.335 09:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:40.335 09:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:40.335 09:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:40.335 09:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:40.335 09:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:40.335 09:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:40.335 09:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.335 09:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.335 09:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.335 09:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:40.335 09:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:40.335 09:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:40.593 00:18:40.593 09:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:40.593 09:18:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:40.593 09:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:40.852 09:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.852 09:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:40.852 09:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.852 09:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.852 09:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.852 09:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:40.852 { 00:18:40.852 "cntlid": 29, 00:18:40.852 "qid": 0, 00:18:40.852 "state": "enabled", 00:18:40.852 "thread": "nvmf_tgt_poll_group_000", 00:18:40.852 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:40.852 "listen_address": { 00:18:40.852 "trtype": "TCP", 00:18:40.852 "adrfam": "IPv4", 00:18:40.852 "traddr": "10.0.0.2", 00:18:40.852 "trsvcid": "4420" 00:18:40.852 }, 00:18:40.852 "peer_address": { 00:18:40.852 "trtype": "TCP", 00:18:40.852 "adrfam": "IPv4", 00:18:40.852 "traddr": "10.0.0.1", 00:18:40.852 "trsvcid": "56314" 00:18:40.852 }, 00:18:40.852 "auth": { 00:18:40.852 "state": "completed", 00:18:40.852 "digest": "sha256", 00:18:40.852 "dhgroup": "ffdhe4096" 00:18:40.852 } 00:18:40.852 } 00:18:40.852 ]' 00:18:40.852 09:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:40.852 09:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:40.852 09:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:40.852 09:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:40.852 09:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:40.852 09:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:40.852 09:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:40.852 09:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:41.418 09:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWU0NDMwZDJiOTU1OTQ5OTEwNTNjNDg1ZTc1YWY0NzA0MTA2OThkYzBlNTUzZGQ5V5MaRg==: --dhchap-ctrl-secret DHHC-1:01:MTAwMmE1NzE3MjQyYzAxNjZlZTg0MjJlYzI0ZjUyODYKYUfB: 00:18:41.418 09:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YWU0NDMwZDJiOTU1OTQ5OTEwNTNjNDg1ZTc1YWY0NzA0MTA2OThkYzBlNTUzZGQ5V5MaRg==: --dhchap-ctrl-secret 
DHHC-1:01:MTAwMmE1NzE3MjQyYzAxNjZlZTg0MjJlYzI0ZjUyODYKYUfB: 00:18:42.351 09:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:42.351 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:42.351 09:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:42.351 09:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.351 09:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.351 09:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.351 09:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:42.351 09:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:42.351 09:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:42.608 09:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:18:42.608 09:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:42.608 09:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:42.608 09:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:42.608 09:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:42.608 09:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:42.608 09:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:18:42.608 09:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.609 09:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.609 09:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.609 09:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:42.609 09:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:42.609 09:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:42.866 00:18:42.866 09:18:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:42.866 09:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:42.866 09:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:43.431 09:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.431 09:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:43.431 09:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.431 09:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.431 09:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.431 09:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:43.431 { 00:18:43.431 "cntlid": 31, 00:18:43.431 "qid": 0, 00:18:43.431 "state": "enabled", 00:18:43.431 "thread": "nvmf_tgt_poll_group_000", 00:18:43.431 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:43.431 "listen_address": { 00:18:43.431 "trtype": "TCP", 00:18:43.431 "adrfam": "IPv4", 00:18:43.431 "traddr": "10.0.0.2", 00:18:43.431 "trsvcid": "4420" 00:18:43.431 }, 00:18:43.431 "peer_address": { 00:18:43.431 "trtype": "TCP", 00:18:43.431 "adrfam": "IPv4", 00:18:43.431 "traddr": "10.0.0.1", 00:18:43.431 "trsvcid": "56342" 00:18:43.431 }, 00:18:43.431 "auth": { 00:18:43.431 "state": "completed", 00:18:43.431 "digest": "sha256", 00:18:43.431 "dhgroup": "ffdhe4096" 00:18:43.431 } 00:18:43.431 } 00:18:43.431 ]' 00:18:43.431 09:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:43.431 09:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:43.431 09:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:43.431 09:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:43.431 09:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:43.431 09:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:43.431 09:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:43.431 09:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:43.689 09:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWY3YzFmODRkOGI4YjFlZDA2ZTY2ZWI3Y2M1ZmY0YmE2MTA4YjA0ZTYyZjRiOWQ5ZDhjMjE1NmZjZDczYjQxNQ5mL7E=: 00:18:43.689 09:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret 
DHHC-1:03:OWY3YzFmODRkOGI4YjFlZDA2ZTY2ZWI3Y2M1ZmY0YmE2MTA4YjA0ZTYyZjRiOWQ5ZDhjMjE1NmZjZDczYjQxNQ5mL7E=: 00:18:44.622 09:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:44.622 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:44.622 09:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:44.622 09:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.622 09:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.622 09:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.622 09:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:44.622 09:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:44.622 09:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:44.622 09:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:44.880 09:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:18:44.880 09:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:44.880 09:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:44.880 09:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:44.880 09:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:44.880 09:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:44.880 09:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:44.880 09:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.880 09:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.880 09:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.880 09:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:44.880 09:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:44.880 09:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:45.814 00:18:45.814 09:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:45.814 09:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:45.814 09:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:45.814 09:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.814 09:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:45.814 09:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.814 09:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.814 09:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.814 09:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:45.814 { 00:18:45.814 "cntlid": 33, 00:18:45.814 "qid": 0, 00:18:45.814 "state": "enabled", 00:18:45.814 "thread": "nvmf_tgt_poll_group_000", 00:18:45.814 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:45.814 "listen_address": { 00:18:45.814 "trtype": "TCP", 00:18:45.814 "adrfam": "IPv4", 00:18:45.814 "traddr": "10.0.0.2", 00:18:45.814 "trsvcid": "4420" 00:18:45.814 }, 00:18:45.814 "peer_address": { 00:18:45.814 "trtype": "TCP", 00:18:45.814 "adrfam": "IPv4", 00:18:45.814 "traddr": "10.0.0.1", 00:18:45.814 "trsvcid": "56366" 00:18:45.814 }, 00:18:45.814 "auth": { 00:18:45.814 "state": "completed", 00:18:45.814 "digest": "sha256", 00:18:45.814 "dhgroup": "ffdhe6144" 00:18:45.814 } 00:18:45.814 } 00:18:45.814 ]' 00:18:45.814 09:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:45.814 09:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:45.814 09:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:46.072 09:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:46.072 09:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:46.072 09:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:46.072 09:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:46.072 09:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:46.330 09:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDNhZDkxMGU1Mzk5N2JmNjFhMWRkNzA4ZjAzMGE4ZmExZjQ4YzgxMmQwZTdlYjgylIkWIw==: --dhchap-ctrl-secret 
DHHC-1:03:NmQwNmVhM2YyYjZkNWQ3NmQzYTI2OTY4YzFkZTE1OWU0ZDhkNTZlZmQxODYxZGQxODEyMjdlOTNmNGNiM2YwNX5NDuE=: 00:18:46.330 09:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NDNhZDkxMGU1Mzk5N2JmNjFhMWRkNzA4ZjAzMGE4ZmExZjQ4YzgxMmQwZTdlYjgylIkWIw==: --dhchap-ctrl-secret DHHC-1:03:NmQwNmVhM2YyYjZkNWQ3NmQzYTI2OTY4YzFkZTE1OWU0ZDhkNTZlZmQxODYxZGQxODEyMjdlOTNmNGNiM2YwNX5NDuE=: 00:18:47.263 09:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:47.263 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:47.263 09:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:47.263 09:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.263 09:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.263 09:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.263 09:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:47.263 09:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:47.263 09:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:47.521 09:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:18:47.521 09:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:47.521 09:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:47.521 09:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:47.521 09:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:47.521 09:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:47.521 09:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:47.521 09:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.521 09:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.521 09:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.521 09:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:47.521 09:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:47.521 09:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:48.087 00:18:48.087 09:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:48.087 09:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:48.087 09:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:48.345 09:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.345 09:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:48.345 09:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.345 09:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.345 09:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.345 09:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:48.345 { 00:18:48.345 "cntlid": 35, 00:18:48.345 "qid": 0, 00:18:48.345 "state": "enabled", 00:18:48.345 "thread": "nvmf_tgt_poll_group_000", 00:18:48.345 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:48.345 "listen_address": { 00:18:48.345 "trtype": "TCP", 00:18:48.345 "adrfam": "IPv4", 00:18:48.345 "traddr": "10.0.0.2", 00:18:48.345 "trsvcid": "4420" 00:18:48.345 }, 00:18:48.345 "peer_address": { 00:18:48.345 "trtype": "TCP", 00:18:48.345 "adrfam": "IPv4", 00:18:48.345 "traddr": "10.0.0.1", 00:18:48.345 "trsvcid": "56392" 00:18:48.345 }, 00:18:48.345 "auth": { 00:18:48.345 "state": "completed", 00:18:48.345 "digest": "sha256", 00:18:48.345 "dhgroup": "ffdhe6144" 00:18:48.345 } 00:18:48.345 } 00:18:48.345 ]' 00:18:48.345 09:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:48.603 09:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:48.603 09:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:48.603 09:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:48.603 09:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:48.603 09:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:48.603 09:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:48.603 09:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:48.861 09:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTQ2NDI4ODQwZTBhYjFiODhkZDYyNTM2NzJlMTBkZDid1XeT: --dhchap-ctrl-secret DHHC-1:02:ZmNhNzQ3MzYyZGVlMjAzOWMwZTFhOGJlN2IyNzI2ODEwNDZjMTU5YzdiYTIwYTljuRKexg==: 00:18:48.861 09:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:OTQ2NDI4ODQwZTBhYjFiODhkZDYyNTM2NzJlMTBkZDid1XeT: --dhchap-ctrl-secret DHHC-1:02:ZmNhNzQ3MzYyZGVlMjAzOWMwZTFhOGJlN2IyNzI2ODEwNDZjMTU5YzdiYTIwYTljuRKexg==: 00:18:49.796 09:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:49.796 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:49.796 09:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:49.796 09:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.796 09:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.796 09:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.796 09:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:49.796 09:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:49.797 09:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:50.055 09:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:18:50.055 09:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:50.055 09:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:50.055 09:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:50.055 09:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:50.055 09:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:50.055 09:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:50.055 09:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.055 09:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.055 09:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.055 09:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:50.055 09:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:50.055 09:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:50.620 00:18:50.620 09:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:50.620 09:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:50.621 09:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:50.879 09:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.879 09:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:50.879 09:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.879 09:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.879 09:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.879 09:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:50.879 { 00:18:50.879 "cntlid": 37, 00:18:50.879 "qid": 0, 00:18:50.879 "state": "enabled", 00:18:50.879 "thread": "nvmf_tgt_poll_group_000", 00:18:50.879 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:50.879 "listen_address": { 00:18:50.879 "trtype": "TCP", 00:18:50.879 "adrfam": "IPv4", 00:18:50.879 "traddr": "10.0.0.2", 00:18:50.879 "trsvcid": "4420" 00:18:50.879 }, 00:18:50.879 "peer_address": { 00:18:50.879 "trtype": "TCP", 00:18:50.879 "adrfam": "IPv4", 00:18:50.879 "traddr": "10.0.0.1", 00:18:50.879 "trsvcid": "45376" 00:18:50.879 }, 00:18:50.879 "auth": { 00:18:50.879 "state": "completed", 00:18:50.879 "digest": "sha256", 00:18:50.879 "dhgroup": "ffdhe6144" 00:18:50.879 } 00:18:50.879 } 00:18:50.879 ]' 00:18:50.879 09:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:51.138 09:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:51.138 09:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:51.138 09:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:51.138 09:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:51.138 09:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:51.138 09:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:18:51.138 09:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:51.396 09:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWU0NDMwZDJiOTU1OTQ5OTEwNTNjNDg1ZTc1YWY0NzA0MTA2OThkYzBlNTUzZGQ5V5MaRg==: --dhchap-ctrl-secret DHHC-1:01:MTAwMmE1NzE3MjQyYzAxNjZlZTg0MjJlYzI0ZjUyODYKYUfB: 00:18:51.396 09:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YWU0NDMwZDJiOTU1OTQ5OTEwNTNjNDg1ZTc1YWY0NzA0MTA2OThkYzBlNTUzZGQ5V5MaRg==: --dhchap-ctrl-secret DHHC-1:01:MTAwMmE1NzE3MjQyYzAxNjZlZTg0MjJlYzI0ZjUyODYKYUfB: 00:18:52.330 09:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:52.330 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:52.330 09:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:52.330 09:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.330 09:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.330 09:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.330 09:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:52.330 09:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:52.330 09:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:52.588 09:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:18:52.588 09:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:52.588 09:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:52.588 09:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:52.588 09:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:52.588 09:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:52.588 09:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:18:52.588 09:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.588 09:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.588 09:18:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.588 09:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:52.588 09:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:52.588 09:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:53.153 00:18:53.411 09:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:53.411 09:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:53.411 09:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:53.669 09:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.669 09:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:53.669 09:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.669 09:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.669 09:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.669 09:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:53.669 { 00:18:53.669 "cntlid": 39, 00:18:53.669 "qid": 0, 00:18:53.669 "state": "enabled", 00:18:53.669 "thread": "nvmf_tgt_poll_group_000", 00:18:53.669 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:53.669 "listen_address": { 00:18:53.669 "trtype": "TCP", 00:18:53.669 "adrfam": "IPv4", 00:18:53.669 "traddr": "10.0.0.2", 00:18:53.669 "trsvcid": "4420" 00:18:53.669 }, 00:18:53.669 "peer_address": { 00:18:53.669 "trtype": "TCP", 00:18:53.669 "adrfam": "IPv4", 00:18:53.669 "traddr": "10.0.0.1", 00:18:53.669 "trsvcid": "45408" 00:18:53.669 }, 00:18:53.669 "auth": { 00:18:53.669 "state": "completed", 00:18:53.669 "digest": "sha256", 00:18:53.669 "dhgroup": "ffdhe6144" 00:18:53.669 } 00:18:53.669 } 00:18:53.669 ]' 00:18:53.669 09:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:53.669 09:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:53.669 09:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:53.669 09:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:53.669 09:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:53.669 09:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:18:53.669 09:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:53.669 09:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:53.927 09:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWY3YzFmODRkOGI4YjFlZDA2ZTY2ZWI3Y2M1ZmY0YmE2MTA4YjA0ZTYyZjRiOWQ5ZDhjMjE1NmZjZDczYjQxNQ5mL7E=: 00:18:53.927 09:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OWY3YzFmODRkOGI4YjFlZDA2ZTY2ZWI3Y2M1ZmY0YmE2MTA4YjA0ZTYyZjRiOWQ5ZDhjMjE1NmZjZDczYjQxNQ5mL7E=: 00:18:54.860 09:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:54.860 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:54.860 09:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:54.860 09:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.860 09:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.860 09:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.860 09:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:54.860 09:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:54.860 09:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:54.860 09:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:55.119 09:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:18:55.119 09:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:55.119 09:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:55.119 09:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:55.119 09:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:55.119 09:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:55.119 09:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:55.119 09:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
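Every pass in this stretch of the log repeats the same DH-HMAC-CHAP round trip; only the digest, DH group and key index change. The condensed shell sketch below reconstructs that per-pass flow from the commands visible above. It is a reading aid rather than the actual target/auth.sh: the host NQN, subsystem NQN, addresses, key names (key0/ckey0) and the /var/tmp/host.sock socket are copied from the log, the target-side RPC socket used by rpc_cmd is not shown in this excerpt and is assumed to be the SPDK default, and the DHHC-1 secrets are elided.

    # Sketch of one digest/dhgroup/key pass, as exercised repeatedly above.
    # Assumption: key0/ckey0 were loaded into the keyring earlier in the run
    # (that setup is outside this excerpt).
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
    subnqn=nqn.2024-03.io.spdk:cnode0

    # Pin the host to a single digest/DH-group pair (target/auth.sh@121).
    "$rpc_py" -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192

    # Allow the host on the subsystem with the key under test (target/auth.sh@70).
    "$rpc_py" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Attach a controller through the host bdev layer and check it shows up (auth.sh@60, @73).
    "$rpc_py" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    [[ $("$rpc_py" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

    # Verify on the target that the qpair actually completed authentication (auth.sh@74-@77).
    qpairs=$("$rpc_py" nvmf_subsystem_get_qpairs "$subnqn")
    [[ $(jq -r '.[0].auth.state' <<< "$qpairs") == completed ]]
    [[ $(jq -r '.[0].auth.digest' <<< "$qpairs") == sha256 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]

    # Tear the bdev path down and repeat the handshake through nvme-cli (auth.sh@78-@82).
    "$rpc_py" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
        --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 \
        --dhchap-secret "DHHC-1:00:<elided>" --dhchap-ctrl-secret "DHHC-1:03:<elided>"
    nvme disconnect -n "$subnqn"

    # Remove the host so the next key/DH-group combination starts clean (auth.sh@83).
    "$rpc_py" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

The nvme-cli half of each pass passes the raw DHHC-1 secrets on the command line, while the RPC path refers to keyring names (key0/ckey0), which is why both forms of the key material show up in the log for every combination.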
00:18:55.119 09:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.119 09:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.119 09:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:55.119 09:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:55.119 09:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:56.053 00:18:56.053 09:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:56.053 09:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:56.053 09:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:56.311 09:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.311 09:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:56.311 09:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.311 09:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.311 09:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.311 09:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:56.311 { 00:18:56.311 "cntlid": 41, 00:18:56.311 "qid": 0, 00:18:56.311 "state": "enabled", 00:18:56.311 "thread": "nvmf_tgt_poll_group_000", 00:18:56.311 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:56.311 "listen_address": { 00:18:56.311 "trtype": "TCP", 00:18:56.311 "adrfam": "IPv4", 00:18:56.311 "traddr": "10.0.0.2", 00:18:56.311 "trsvcid": "4420" 00:18:56.311 }, 00:18:56.311 "peer_address": { 00:18:56.311 "trtype": "TCP", 00:18:56.311 "adrfam": "IPv4", 00:18:56.311 "traddr": "10.0.0.1", 00:18:56.311 "trsvcid": "45438" 00:18:56.311 }, 00:18:56.311 "auth": { 00:18:56.311 "state": "completed", 00:18:56.311 "digest": "sha256", 00:18:56.311 "dhgroup": "ffdhe8192" 00:18:56.311 } 00:18:56.311 } 00:18:56.311 ]' 00:18:56.311 09:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:56.569 09:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:56.569 09:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:56.569 09:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:56.569 09:19:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:56.569 09:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:56.569 09:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:56.569 09:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:56.826 09:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDNhZDkxMGU1Mzk5N2JmNjFhMWRkNzA4ZjAzMGE4ZmExZjQ4YzgxMmQwZTdlYjgylIkWIw==: --dhchap-ctrl-secret DHHC-1:03:NmQwNmVhM2YyYjZkNWQ3NmQzYTI2OTY4YzFkZTE1OWU0ZDhkNTZlZmQxODYxZGQxODEyMjdlOTNmNGNiM2YwNX5NDuE=: 00:18:56.826 09:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NDNhZDkxMGU1Mzk5N2JmNjFhMWRkNzA4ZjAzMGE4ZmExZjQ4YzgxMmQwZTdlYjgylIkWIw==: --dhchap-ctrl-secret DHHC-1:03:NmQwNmVhM2YyYjZkNWQ3NmQzYTI2OTY4YzFkZTE1OWU0ZDhkNTZlZmQxODYxZGQxODEyMjdlOTNmNGNiM2YwNX5NDuE=: 00:18:57.760 09:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:57.760 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:57.760 09:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:57.760 09:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.760 09:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.760 09:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.760 09:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:57.760 09:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:57.761 09:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:58.019 09:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:18:58.019 09:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:58.019 09:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:58.019 09:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:58.019 09:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:58.019 09:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:58.019 09:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:58.019 09:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.019 09:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.019 09:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.019 09:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:58.019 09:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:58.019 09:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:58.953 00:18:58.953 09:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:58.953 09:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:58.953 09:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:59.211 09:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.211 09:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:59.211 09:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.211 09:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.211 09:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.211 09:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:59.211 { 00:18:59.211 "cntlid": 43, 00:18:59.211 "qid": 0, 00:18:59.211 "state": "enabled", 00:18:59.211 "thread": "nvmf_tgt_poll_group_000", 00:18:59.211 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:59.211 "listen_address": { 00:18:59.211 "trtype": "TCP", 00:18:59.211 "adrfam": "IPv4", 00:18:59.211 "traddr": "10.0.0.2", 00:18:59.211 "trsvcid": "4420" 00:18:59.211 }, 00:18:59.211 "peer_address": { 00:18:59.211 "trtype": "TCP", 00:18:59.211 "adrfam": "IPv4", 00:18:59.211 "traddr": "10.0.0.1", 00:18:59.211 "trsvcid": "45462" 00:18:59.211 }, 00:18:59.211 "auth": { 00:18:59.211 "state": "completed", 00:18:59.211 "digest": "sha256", 00:18:59.211 "dhgroup": "ffdhe8192" 00:18:59.211 } 00:18:59.211 } 00:18:59.211 ]' 00:18:59.211 09:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:59.469 09:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:18:59.469 09:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:59.469 09:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:59.469 09:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:59.469 09:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:59.469 09:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:59.469 09:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:59.729 09:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTQ2NDI4ODQwZTBhYjFiODhkZDYyNTM2NzJlMTBkZDid1XeT: --dhchap-ctrl-secret DHHC-1:02:ZmNhNzQ3MzYyZGVlMjAzOWMwZTFhOGJlN2IyNzI2ODEwNDZjMTU5YzdiYTIwYTljuRKexg==: 00:18:59.729 09:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:OTQ2NDI4ODQwZTBhYjFiODhkZDYyNTM2NzJlMTBkZDid1XeT: --dhchap-ctrl-secret DHHC-1:02:ZmNhNzQ3MzYyZGVlMjAzOWMwZTFhOGJlN2IyNzI2ODEwNDZjMTU5YzdiYTIwYTljuRKexg==: 00:19:00.717 09:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:00.717 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:00.717 09:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:00.717 09:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.717 09:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.717 09:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.717 09:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:00.717 09:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:00.717 09:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:00.975 09:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:19:00.975 09:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:00.975 09:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:00.975 09:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:00.975 09:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:00.975 09:19:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:00.975 09:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:00.975 09:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.975 09:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.975 09:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.975 09:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:00.975 09:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:00.975 09:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:01.908 00:19:01.908 09:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:01.908 09:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:01.908 09:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:02.166 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:02.166 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:02.166 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.166 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.166 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.166 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:02.166 { 00:19:02.166 "cntlid": 45, 00:19:02.166 "qid": 0, 00:19:02.166 "state": "enabled", 00:19:02.166 "thread": "nvmf_tgt_poll_group_000", 00:19:02.166 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:02.166 "listen_address": { 00:19:02.166 "trtype": "TCP", 00:19:02.166 "adrfam": "IPv4", 00:19:02.166 "traddr": "10.0.0.2", 00:19:02.166 "trsvcid": "4420" 00:19:02.166 }, 00:19:02.166 "peer_address": { 00:19:02.166 "trtype": "TCP", 00:19:02.166 "adrfam": "IPv4", 00:19:02.166 "traddr": "10.0.0.1", 00:19:02.166 "trsvcid": "56908" 00:19:02.166 }, 00:19:02.166 "auth": { 00:19:02.166 "state": "completed", 00:19:02.166 "digest": "sha256", 00:19:02.166 "dhgroup": "ffdhe8192" 00:19:02.166 } 00:19:02.166 } 00:19:02.166 ]' 00:19:02.166 
09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:02.424 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:02.424 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:02.424 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:02.424 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:02.424 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:02.424 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:02.424 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:02.681 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWU0NDMwZDJiOTU1OTQ5OTEwNTNjNDg1ZTc1YWY0NzA0MTA2OThkYzBlNTUzZGQ5V5MaRg==: --dhchap-ctrl-secret DHHC-1:01:MTAwMmE1NzE3MjQyYzAxNjZlZTg0MjJlYzI0ZjUyODYKYUfB: 00:19:02.681 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YWU0NDMwZDJiOTU1OTQ5OTEwNTNjNDg1ZTc1YWY0NzA0MTA2OThkYzBlNTUzZGQ5V5MaRg==: --dhchap-ctrl-secret DHHC-1:01:MTAwMmE1NzE3MjQyYzAxNjZlZTg0MjJlYzI0ZjUyODYKYUfB: 00:19:03.615 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:03.615 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:03.615 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:03.615 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.615 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.615 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.615 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:03.615 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:03.615 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:03.873 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:19:03.873 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:03.873 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:03.873 09:19:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:03.873 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:03.873 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:03.873 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:03.873 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.873 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.873 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.873 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:03.873 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:03.873 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:04.806 00:19:04.806 09:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:04.806 09:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:04.806 09:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:05.064 09:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.064 09:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:05.064 09:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.064 09:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.064 09:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.064 09:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:05.064 { 00:19:05.064 "cntlid": 47, 00:19:05.064 "qid": 0, 00:19:05.064 "state": "enabled", 00:19:05.064 "thread": "nvmf_tgt_poll_group_000", 00:19:05.064 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:05.064 "listen_address": { 00:19:05.064 "trtype": "TCP", 00:19:05.064 "adrfam": "IPv4", 00:19:05.064 "traddr": "10.0.0.2", 00:19:05.064 "trsvcid": "4420" 00:19:05.064 }, 00:19:05.064 "peer_address": { 00:19:05.064 "trtype": "TCP", 00:19:05.064 "adrfam": "IPv4", 00:19:05.064 "traddr": "10.0.0.1", 00:19:05.064 "trsvcid": "56930" 00:19:05.064 }, 00:19:05.064 "auth": { 00:19:05.064 "state": "completed", 00:19:05.064 
"digest": "sha256", 00:19:05.064 "dhgroup": "ffdhe8192" 00:19:05.064 } 00:19:05.064 } 00:19:05.064 ]' 00:19:05.064 09:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:05.064 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:05.064 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:05.064 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:05.064 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:05.322 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:05.322 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:05.322 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:05.580 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWY3YzFmODRkOGI4YjFlZDA2ZTY2ZWI3Y2M1ZmY0YmE2MTA4YjA0ZTYyZjRiOWQ5ZDhjMjE1NmZjZDczYjQxNQ5mL7E=: 00:19:05.580 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OWY3YzFmODRkOGI4YjFlZDA2ZTY2ZWI3Y2M1ZmY0YmE2MTA4YjA0ZTYyZjRiOWQ5ZDhjMjE1NmZjZDczYjQxNQ5mL7E=: 00:19:06.514 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:06.514 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:06.514 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:06.514 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.514 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.514 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.514 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:19:06.514 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:06.514 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:06.514 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:06.514 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:06.772 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:19:06.772 09:19:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:06.772 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:06.772 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:06.772 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:06.772 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:06.772 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:06.772 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.772 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.772 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.772 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:06.772 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:06.772 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:07.339 00:19:07.339 09:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:07.339 09:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:07.339 09:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:07.597 09:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.597 09:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:07.597 09:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.597 09:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.597 09:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.597 09:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:07.597 { 00:19:07.597 "cntlid": 49, 00:19:07.597 "qid": 0, 00:19:07.597 "state": "enabled", 00:19:07.597 "thread": "nvmf_tgt_poll_group_000", 00:19:07.597 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:07.597 "listen_address": { 00:19:07.597 "trtype": "TCP", 00:19:07.597 "adrfam": "IPv4", 
00:19:07.597 "traddr": "10.0.0.2", 00:19:07.597 "trsvcid": "4420" 00:19:07.597 }, 00:19:07.597 "peer_address": { 00:19:07.597 "trtype": "TCP", 00:19:07.597 "adrfam": "IPv4", 00:19:07.597 "traddr": "10.0.0.1", 00:19:07.597 "trsvcid": "56956" 00:19:07.597 }, 00:19:07.597 "auth": { 00:19:07.597 "state": "completed", 00:19:07.597 "digest": "sha384", 00:19:07.597 "dhgroup": "null" 00:19:07.597 } 00:19:07.597 } 00:19:07.597 ]' 00:19:07.597 09:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:07.597 09:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:07.597 09:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:07.597 09:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:07.597 09:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:07.597 09:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:07.597 09:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:07.597 09:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:07.856 09:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDNhZDkxMGU1Mzk5N2JmNjFhMWRkNzA4ZjAzMGE4ZmExZjQ4YzgxMmQwZTdlYjgylIkWIw==: --dhchap-ctrl-secret DHHC-1:03:NmQwNmVhM2YyYjZkNWQ3NmQzYTI2OTY4YzFkZTE1OWU0ZDhkNTZlZmQxODYxZGQxODEyMjdlOTNmNGNiM2YwNX5NDuE=: 00:19:07.856 09:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NDNhZDkxMGU1Mzk5N2JmNjFhMWRkNzA4ZjAzMGE4ZmExZjQ4YzgxMmQwZTdlYjgylIkWIw==: --dhchap-ctrl-secret DHHC-1:03:NmQwNmVhM2YyYjZkNWQ3NmQzYTI2OTY4YzFkZTE1OWU0ZDhkNTZlZmQxODYxZGQxODEyMjdlOTNmNGNiM2YwNX5NDuE=: 00:19:08.789 09:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:08.789 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:08.789 09:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:08.789 09:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.789 09:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.789 09:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.789 09:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:08.790 09:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:08.790 09:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:09.047 09:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:19:09.047 09:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:09.047 09:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:09.047 09:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:09.047 09:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:09.047 09:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:09.047 09:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:09.047 09:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.047 09:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.047 09:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.047 09:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:09.047 09:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:09.047 09:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:09.613 00:19:09.613 09:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:09.613 09:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:09.613 09:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:09.871 09:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:09.871 09:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:09.871 09:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.871 09:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.871 09:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.871 09:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:09.871 { 00:19:09.871 "cntlid": 51, 00:19:09.871 "qid": 0, 00:19:09.871 "state": "enabled", 
00:19:09.871 "thread": "nvmf_tgt_poll_group_000", 00:19:09.871 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:09.871 "listen_address": { 00:19:09.871 "trtype": "TCP", 00:19:09.871 "adrfam": "IPv4", 00:19:09.871 "traddr": "10.0.0.2", 00:19:09.871 "trsvcid": "4420" 00:19:09.871 }, 00:19:09.871 "peer_address": { 00:19:09.871 "trtype": "TCP", 00:19:09.871 "adrfam": "IPv4", 00:19:09.871 "traddr": "10.0.0.1", 00:19:09.871 "trsvcid": "56980" 00:19:09.871 }, 00:19:09.871 "auth": { 00:19:09.871 "state": "completed", 00:19:09.871 "digest": "sha384", 00:19:09.871 "dhgroup": "null" 00:19:09.871 } 00:19:09.871 } 00:19:09.871 ]' 00:19:09.871 09:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:09.871 09:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:09.871 09:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:09.871 09:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:09.871 09:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:09.871 09:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:09.871 09:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:09.871 09:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:10.129 09:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTQ2NDI4ODQwZTBhYjFiODhkZDYyNTM2NzJlMTBkZDid1XeT: --dhchap-ctrl-secret DHHC-1:02:ZmNhNzQ3MzYyZGVlMjAzOWMwZTFhOGJlN2IyNzI2ODEwNDZjMTU5YzdiYTIwYTljuRKexg==: 00:19:10.129 09:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:OTQ2NDI4ODQwZTBhYjFiODhkZDYyNTM2NzJlMTBkZDid1XeT: --dhchap-ctrl-secret DHHC-1:02:ZmNhNzQ3MzYyZGVlMjAzOWMwZTFhOGJlN2IyNzI2ODEwNDZjMTU5YzdiYTIwYTljuRKexg==: 00:19:11.062 09:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:11.062 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:11.062 09:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:11.062 09:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.062 09:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.062 09:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.062 09:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:11.062 09:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:19:11.062 09:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:11.628 09:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:19:11.628 09:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:11.628 09:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:11.628 09:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:11.628 09:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:11.628 09:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:11.628 09:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:11.628 09:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.628 09:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.628 09:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.628 09:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:11.628 09:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:11.628 09:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:11.886 00:19:11.886 09:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:11.886 09:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:11.886 09:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:12.144 09:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.144 09:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:12.144 09:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.144 09:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.144 09:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.144 09:19:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:12.144 { 00:19:12.144 "cntlid": 53, 00:19:12.144 "qid": 0, 00:19:12.144 "state": "enabled", 00:19:12.144 "thread": "nvmf_tgt_poll_group_000", 00:19:12.144 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:12.144 "listen_address": { 00:19:12.144 "trtype": "TCP", 00:19:12.144 "adrfam": "IPv4", 00:19:12.144 "traddr": "10.0.0.2", 00:19:12.144 "trsvcid": "4420" 00:19:12.144 }, 00:19:12.144 "peer_address": { 00:19:12.144 "trtype": "TCP", 00:19:12.144 "adrfam": "IPv4", 00:19:12.144 "traddr": "10.0.0.1", 00:19:12.144 "trsvcid": "32978" 00:19:12.144 }, 00:19:12.144 "auth": { 00:19:12.144 "state": "completed", 00:19:12.144 "digest": "sha384", 00:19:12.144 "dhgroup": "null" 00:19:12.144 } 00:19:12.144 } 00:19:12.144 ]' 00:19:12.144 09:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:12.144 09:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:12.144 09:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:12.144 09:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:12.144 09:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:12.402 09:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:12.402 09:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:12.402 09:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:12.660 09:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWU0NDMwZDJiOTU1OTQ5OTEwNTNjNDg1ZTc1YWY0NzA0MTA2OThkYzBlNTUzZGQ5V5MaRg==: --dhchap-ctrl-secret DHHC-1:01:MTAwMmE1NzE3MjQyYzAxNjZlZTg0MjJlYzI0ZjUyODYKYUfB: 00:19:12.660 09:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YWU0NDMwZDJiOTU1OTQ5OTEwNTNjNDg1ZTc1YWY0NzA0MTA2OThkYzBlNTUzZGQ5V5MaRg==: --dhchap-ctrl-secret DHHC-1:01:MTAwMmE1NzE3MjQyYzAxNjZlZTg0MjJlYzI0ZjUyODYKYUfB: 00:19:13.591 09:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:13.591 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:13.591 09:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:13.591 09:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.591 09:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.591 09:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.591 09:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:19:13.591 09:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:13.591 09:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:13.849 09:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:19:13.849 09:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:13.849 09:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:13.849 09:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:13.849 09:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:13.849 09:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:13.849 09:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:13.849 09:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.849 09:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.849 09:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.849 09:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:13.849 09:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:13.849 09:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:14.107 00:19:14.107 09:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:14.107 09:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:14.107 09:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:14.369 09:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.369 09:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:14.369 09:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.369 09:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.369 09:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.369 09:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:14.369 { 00:19:14.369 "cntlid": 55, 00:19:14.369 "qid": 0, 00:19:14.369 "state": "enabled", 00:19:14.369 "thread": "nvmf_tgt_poll_group_000", 00:19:14.369 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:14.369 "listen_address": { 00:19:14.369 "trtype": "TCP", 00:19:14.369 "adrfam": "IPv4", 00:19:14.369 "traddr": "10.0.0.2", 00:19:14.369 "trsvcid": "4420" 00:19:14.369 }, 00:19:14.369 "peer_address": { 00:19:14.369 "trtype": "TCP", 00:19:14.369 "adrfam": "IPv4", 00:19:14.369 "traddr": "10.0.0.1", 00:19:14.369 "trsvcid": "33012" 00:19:14.369 }, 00:19:14.369 "auth": { 00:19:14.369 "state": "completed", 00:19:14.369 "digest": "sha384", 00:19:14.369 "dhgroup": "null" 00:19:14.369 } 00:19:14.369 } 00:19:14.369 ]' 00:19:14.369 09:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:14.631 09:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:14.631 09:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:14.631 09:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:14.631 09:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:14.631 09:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:14.631 09:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:14.631 09:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:14.889 09:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWY3YzFmODRkOGI4YjFlZDA2ZTY2ZWI3Y2M1ZmY0YmE2MTA4YjA0ZTYyZjRiOWQ5ZDhjMjE1NmZjZDczYjQxNQ5mL7E=: 00:19:14.889 09:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OWY3YzFmODRkOGI4YjFlZDA2ZTY2ZWI3Y2M1ZmY0YmE2MTA4YjA0ZTYyZjRiOWQ5ZDhjMjE1NmZjZDczYjQxNQ5mL7E=: 00:19:15.822 09:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:15.823 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:15.823 09:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:15.823 09:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.823 09:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.823 09:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.823 09:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:15.823 09:19:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:15.823 09:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:15.823 09:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:16.389 09:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:19:16.389 09:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:16.389 09:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:16.389 09:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:16.389 09:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:16.389 09:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:16.389 09:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:16.389 09:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.389 09:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.389 09:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.389 09:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:16.389 09:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:16.389 09:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:16.647 00:19:16.647 09:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:16.647 09:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:16.647 09:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:16.904 09:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.904 09:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:16.904 09:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:16.905 09:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.905 09:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.905 09:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:16.905 { 00:19:16.905 "cntlid": 57, 00:19:16.905 "qid": 0, 00:19:16.905 "state": "enabled", 00:19:16.905 "thread": "nvmf_tgt_poll_group_000", 00:19:16.905 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:16.905 "listen_address": { 00:19:16.905 "trtype": "TCP", 00:19:16.905 "adrfam": "IPv4", 00:19:16.905 "traddr": "10.0.0.2", 00:19:16.905 "trsvcid": "4420" 00:19:16.905 }, 00:19:16.905 "peer_address": { 00:19:16.905 "trtype": "TCP", 00:19:16.905 "adrfam": "IPv4", 00:19:16.905 "traddr": "10.0.0.1", 00:19:16.905 "trsvcid": "33046" 00:19:16.905 }, 00:19:16.905 "auth": { 00:19:16.905 "state": "completed", 00:19:16.905 "digest": "sha384", 00:19:16.905 "dhgroup": "ffdhe2048" 00:19:16.905 } 00:19:16.905 } 00:19:16.905 ]' 00:19:16.905 09:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:16.905 09:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:16.905 09:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:16.905 09:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:16.905 09:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:16.905 09:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:16.905 09:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:16.905 09:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:17.163 09:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDNhZDkxMGU1Mzk5N2JmNjFhMWRkNzA4ZjAzMGE4ZmExZjQ4YzgxMmQwZTdlYjgylIkWIw==: --dhchap-ctrl-secret DHHC-1:03:NmQwNmVhM2YyYjZkNWQ3NmQzYTI2OTY4YzFkZTE1OWU0ZDhkNTZlZmQxODYxZGQxODEyMjdlOTNmNGNiM2YwNX5NDuE=: 00:19:17.163 09:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NDNhZDkxMGU1Mzk5N2JmNjFhMWRkNzA4ZjAzMGE4ZmExZjQ4YzgxMmQwZTdlYjgylIkWIw==: --dhchap-ctrl-secret DHHC-1:03:NmQwNmVhM2YyYjZkNWQ3NmQzYTI2OTY4YzFkZTE1OWU0ZDhkNTZlZmQxODYxZGQxODEyMjdlOTNmNGNiM2YwNX5NDuE=: 00:19:18.096 09:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:18.096 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:18.096 09:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:18.096 09:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.096 09:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.353 09:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.353 09:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:18.353 09:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:18.353 09:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:18.610 09:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:19:18.610 09:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:18.610 09:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:18.610 09:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:18.610 09:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:18.610 09:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:18.610 09:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:18.610 09:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.610 09:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.610 09:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.610 09:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:18.610 09:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:18.610 09:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:18.867 00:19:18.867 09:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:18.867 09:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:18.867 09:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:19.125 09:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.125 09:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:19.125 09:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.125 09:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.125 09:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.125 09:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:19.125 { 00:19:19.125 "cntlid": 59, 00:19:19.125 "qid": 0, 00:19:19.125 "state": "enabled", 00:19:19.125 "thread": "nvmf_tgt_poll_group_000", 00:19:19.125 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:19.125 "listen_address": { 00:19:19.125 "trtype": "TCP", 00:19:19.125 "adrfam": "IPv4", 00:19:19.125 "traddr": "10.0.0.2", 00:19:19.125 "trsvcid": "4420" 00:19:19.125 }, 00:19:19.125 "peer_address": { 00:19:19.125 "trtype": "TCP", 00:19:19.125 "adrfam": "IPv4", 00:19:19.125 "traddr": "10.0.0.1", 00:19:19.125 "trsvcid": "33088" 00:19:19.125 }, 00:19:19.125 "auth": { 00:19:19.125 "state": "completed", 00:19:19.125 "digest": "sha384", 00:19:19.125 "dhgroup": "ffdhe2048" 00:19:19.125 } 00:19:19.125 } 00:19:19.125 ]' 00:19:19.125 09:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:19.125 09:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:19.125 09:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:19.383 09:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:19.383 09:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:19.383 09:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:19.383 09:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:19.383 09:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:19.641 09:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTQ2NDI4ODQwZTBhYjFiODhkZDYyNTM2NzJlMTBkZDid1XeT: --dhchap-ctrl-secret DHHC-1:02:ZmNhNzQ3MzYyZGVlMjAzOWMwZTFhOGJlN2IyNzI2ODEwNDZjMTU5YzdiYTIwYTljuRKexg==: 00:19:19.641 09:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:OTQ2NDI4ODQwZTBhYjFiODhkZDYyNTM2NzJlMTBkZDid1XeT: --dhchap-ctrl-secret DHHC-1:02:ZmNhNzQ3MzYyZGVlMjAzOWMwZTFhOGJlN2IyNzI2ODEwNDZjMTU5YzdiYTIwYTljuRKexg==: 00:19:20.574 09:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:20.574 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:20.574 09:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:20.574 09:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.574 09:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.574 09:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.574 09:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:20.574 09:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:20.574 09:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:20.832 09:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:19:20.832 09:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:20.832 09:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:20.832 09:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:20.832 09:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:20.832 09:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:20.832 09:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:20.832 09:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.832 09:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.832 09:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.832 09:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:20.832 09:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:20.832 09:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:21.090 00:19:21.090 09:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:21.090 09:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:19:21.090 09:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:21.348 09:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.348 09:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:21.348 09:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.348 09:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.348 09:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.348 09:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:21.348 { 00:19:21.348 "cntlid": 61, 00:19:21.348 "qid": 0, 00:19:21.348 "state": "enabled", 00:19:21.348 "thread": "nvmf_tgt_poll_group_000", 00:19:21.348 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:21.348 "listen_address": { 00:19:21.348 "trtype": "TCP", 00:19:21.348 "adrfam": "IPv4", 00:19:21.348 "traddr": "10.0.0.2", 00:19:21.348 "trsvcid": "4420" 00:19:21.348 }, 00:19:21.348 "peer_address": { 00:19:21.348 "trtype": "TCP", 00:19:21.348 "adrfam": "IPv4", 00:19:21.348 "traddr": "10.0.0.1", 00:19:21.348 "trsvcid": "34654" 00:19:21.348 }, 00:19:21.348 "auth": { 00:19:21.348 "state": "completed", 00:19:21.348 "digest": "sha384", 00:19:21.348 "dhgroup": "ffdhe2048" 00:19:21.348 } 00:19:21.348 } 00:19:21.348 ]' 00:19:21.348 09:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:21.606 09:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:21.607 09:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:21.607 09:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:21.607 09:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:21.607 09:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:21.607 09:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:21.607 09:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:21.865 09:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWU0NDMwZDJiOTU1OTQ5OTEwNTNjNDg1ZTc1YWY0NzA0MTA2OThkYzBlNTUzZGQ5V5MaRg==: --dhchap-ctrl-secret DHHC-1:01:MTAwMmE1NzE3MjQyYzAxNjZlZTg0MjJlYzI0ZjUyODYKYUfB: 00:19:21.865 09:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YWU0NDMwZDJiOTU1OTQ5OTEwNTNjNDg1ZTc1YWY0NzA0MTA2OThkYzBlNTUzZGQ5V5MaRg==: --dhchap-ctrl-secret DHHC-1:01:MTAwMmE1NzE3MjQyYzAxNjZlZTg0MjJlYzI0ZjUyODYKYUfB: 00:19:22.798 09:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:22.798 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:22.798 09:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:22.798 09:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.798 09:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.798 09:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.798 09:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:22.798 09:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:22.798 09:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:23.056 09:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:19:23.056 09:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:23.056 09:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:23.056 09:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:23.056 09:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:23.056 09:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:23.056 09:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:23.056 09:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.056 09:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.056 09:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.056 09:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:23.056 09:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:23.056 09:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:23.621 00:19:23.621 09:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:23.621 09:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:19:23.621 09:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:23.880 09:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.880 09:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:23.880 09:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.880 09:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.880 09:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.880 09:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:23.880 { 00:19:23.880 "cntlid": 63, 00:19:23.880 "qid": 0, 00:19:23.880 "state": "enabled", 00:19:23.880 "thread": "nvmf_tgt_poll_group_000", 00:19:23.880 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:23.880 "listen_address": { 00:19:23.880 "trtype": "TCP", 00:19:23.880 "adrfam": "IPv4", 00:19:23.880 "traddr": "10.0.0.2", 00:19:23.880 "trsvcid": "4420" 00:19:23.880 }, 00:19:23.880 "peer_address": { 00:19:23.880 "trtype": "TCP", 00:19:23.880 "adrfam": "IPv4", 00:19:23.880 "traddr": "10.0.0.1", 00:19:23.880 "trsvcid": "34668" 00:19:23.880 }, 00:19:23.880 "auth": { 00:19:23.880 "state": "completed", 00:19:23.880 "digest": "sha384", 00:19:23.880 "dhgroup": "ffdhe2048" 00:19:23.880 } 00:19:23.880 } 00:19:23.880 ]' 00:19:23.880 09:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:23.880 09:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:23.880 09:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:23.880 09:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:23.880 09:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:23.880 09:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:23.880 09:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:23.880 09:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:24.138 09:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWY3YzFmODRkOGI4YjFlZDA2ZTY2ZWI3Y2M1ZmY0YmE2MTA4YjA0ZTYyZjRiOWQ5ZDhjMjE1NmZjZDczYjQxNQ5mL7E=: 00:19:24.139 09:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OWY3YzFmODRkOGI4YjFlZDA2ZTY2ZWI3Y2M1ZmY0YmE2MTA4YjA0ZTYyZjRiOWQ5ZDhjMjE1NmZjZDczYjQxNQ5mL7E=: 00:19:25.079 09:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:19:25.079 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:25.079 09:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:25.079 09:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.079 09:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.079 09:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.079 09:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:25.079 09:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:25.079 09:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:25.079 09:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:25.645 09:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:19:25.645 09:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:25.645 09:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:25.645 09:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:25.645 09:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:25.645 09:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:25.645 09:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:25.645 09:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.645 09:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.645 09:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.645 09:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:25.645 09:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:25.645 09:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:25.903 
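After each successful bdev_nvme attach, the trace also validates the negotiated auth parameters on the target and then repeats the handshake through the in-kernel initiator before dropping the host. In terms of the commands recorded in this log (SECRET and CTRL_SECRET stand in for the DHHC-1:xx:... strings printed above, HOSTID for the 5b23e107-... host UUID, HOSTNQN as before):

    # Target-side check of the freshly authenticated qpair
    # (expected: completed / sha384 / ffdhe3072 at this point in the run).
    rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
        | jq -r '.[0].auth.state, .[0].auth.digest, .[0].auth.dhgroup'

    # Kernel initiator path: connect with the secrets passed directly on the command line, then disconnect.
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$HOSTNQN" --hostid "$HOSTID" -l 0 \
        --dhchap-secret "$SECRET" --dhchap-ctrl-secret "$CTRL_SECRET"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0   # prints "disconnected 1 controller(s)" on success
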
00:19:25.903 09:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:25.903 09:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:25.903 09:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:26.161 09:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.161 09:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:26.161 09:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.161 09:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.161 09:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.161 09:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:26.161 { 00:19:26.161 "cntlid": 65, 00:19:26.161 "qid": 0, 00:19:26.161 "state": "enabled", 00:19:26.161 "thread": "nvmf_tgt_poll_group_000", 00:19:26.161 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:26.161 "listen_address": { 00:19:26.161 "trtype": "TCP", 00:19:26.161 "adrfam": "IPv4", 00:19:26.161 "traddr": "10.0.0.2", 00:19:26.161 "trsvcid": "4420" 00:19:26.161 }, 00:19:26.161 "peer_address": { 00:19:26.161 "trtype": "TCP", 00:19:26.161 "adrfam": "IPv4", 00:19:26.161 "traddr": "10.0.0.1", 00:19:26.161 "trsvcid": "34706" 00:19:26.161 }, 00:19:26.161 "auth": { 00:19:26.161 "state": "completed", 00:19:26.161 "digest": "sha384", 00:19:26.161 "dhgroup": "ffdhe3072" 00:19:26.161 } 00:19:26.161 } 00:19:26.161 ]' 00:19:26.161 09:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:26.161 09:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:26.161 09:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:26.161 09:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:26.161 09:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:26.161 09:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:26.161 09:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:26.161 09:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:26.728 09:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDNhZDkxMGU1Mzk5N2JmNjFhMWRkNzA4ZjAzMGE4ZmExZjQ4YzgxMmQwZTdlYjgylIkWIw==: --dhchap-ctrl-secret DHHC-1:03:NmQwNmVhM2YyYjZkNWQ3NmQzYTI2OTY4YzFkZTE1OWU0ZDhkNTZlZmQxODYxZGQxODEyMjdlOTNmNGNiM2YwNX5NDuE=: 00:19:26.728 09:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NDNhZDkxMGU1Mzk5N2JmNjFhMWRkNzA4ZjAzMGE4ZmExZjQ4YzgxMmQwZTdlYjgylIkWIw==: --dhchap-ctrl-secret DHHC-1:03:NmQwNmVhM2YyYjZkNWQ3NmQzYTI2OTY4YzFkZTE1OWU0ZDhkNTZlZmQxODYxZGQxODEyMjdlOTNmNGNiM2YwNX5NDuE=: 00:19:27.661 09:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:27.661 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:27.661 09:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:27.662 09:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.662 09:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.662 09:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.662 09:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:27.662 09:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:27.662 09:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:27.920 09:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:19:27.920 09:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:27.920 09:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:27.920 09:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:27.920 09:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:27.920 09:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:27.920 09:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:27.920 09:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.920 09:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.920 09:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.920 09:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:27.920 09:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:27.920 09:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:28.178 00:19:28.178 09:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:28.178 09:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:28.178 09:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:28.436 09:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.436 09:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:28.436 09:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.436 09:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.436 09:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.436 09:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:28.436 { 00:19:28.436 "cntlid": 67, 00:19:28.436 "qid": 0, 00:19:28.436 "state": "enabled", 00:19:28.436 "thread": "nvmf_tgt_poll_group_000", 00:19:28.436 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:28.436 "listen_address": { 00:19:28.436 "trtype": "TCP", 00:19:28.436 "adrfam": "IPv4", 00:19:28.436 "traddr": "10.0.0.2", 00:19:28.436 "trsvcid": "4420" 00:19:28.436 }, 00:19:28.436 "peer_address": { 00:19:28.436 "trtype": "TCP", 00:19:28.436 "adrfam": "IPv4", 00:19:28.436 "traddr": "10.0.0.1", 00:19:28.436 "trsvcid": "34720" 00:19:28.436 }, 00:19:28.436 "auth": { 00:19:28.436 "state": "completed", 00:19:28.436 "digest": "sha384", 00:19:28.436 "dhgroup": "ffdhe3072" 00:19:28.436 } 00:19:28.436 } 00:19:28.436 ]' 00:19:28.436 09:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:28.694 09:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:28.694 09:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:28.694 09:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:28.694 09:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:28.694 09:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:28.694 09:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:28.694 09:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:28.952 09:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTQ2NDI4ODQwZTBhYjFiODhkZDYyNTM2NzJlMTBkZDid1XeT: --dhchap-ctrl-secret 
DHHC-1:02:ZmNhNzQ3MzYyZGVlMjAzOWMwZTFhOGJlN2IyNzI2ODEwNDZjMTU5YzdiYTIwYTljuRKexg==: 00:19:28.952 09:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:OTQ2NDI4ODQwZTBhYjFiODhkZDYyNTM2NzJlMTBkZDid1XeT: --dhchap-ctrl-secret DHHC-1:02:ZmNhNzQ3MzYyZGVlMjAzOWMwZTFhOGJlN2IyNzI2ODEwNDZjMTU5YzdiYTIwYTljuRKexg==: 00:19:29.916 09:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:29.916 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:29.916 09:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:29.916 09:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.916 09:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.916 09:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.916 09:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:29.916 09:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:29.916 09:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:30.215 09:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:19:30.215 09:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:30.215 09:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:30.215 09:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:30.215 09:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:30.215 09:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:30.215 09:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:30.215 09:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.215 09:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.215 09:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.215 09:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:30.215 09:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:30.215 09:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:30.498 00:19:30.756 09:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:30.756 09:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:30.756 09:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.014 09:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.014 09:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:31.014 09:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.014 09:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.014 09:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.014 09:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:31.014 { 00:19:31.014 "cntlid": 69, 00:19:31.014 "qid": 0, 00:19:31.014 "state": "enabled", 00:19:31.014 "thread": "nvmf_tgt_poll_group_000", 00:19:31.014 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:31.014 "listen_address": { 00:19:31.014 "trtype": "TCP", 00:19:31.014 "adrfam": "IPv4", 00:19:31.014 "traddr": "10.0.0.2", 00:19:31.014 "trsvcid": "4420" 00:19:31.014 }, 00:19:31.014 "peer_address": { 00:19:31.014 "trtype": "TCP", 00:19:31.014 "adrfam": "IPv4", 00:19:31.014 "traddr": "10.0.0.1", 00:19:31.014 "trsvcid": "37170" 00:19:31.014 }, 00:19:31.014 "auth": { 00:19:31.014 "state": "completed", 00:19:31.014 "digest": "sha384", 00:19:31.014 "dhgroup": "ffdhe3072" 00:19:31.014 } 00:19:31.014 } 00:19:31.014 ]' 00:19:31.014 09:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:31.014 09:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:31.014 09:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:31.014 09:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:31.014 09:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:31.014 09:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:31.014 09:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.014 09:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:19:31.273 09:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWU0NDMwZDJiOTU1OTQ5OTEwNTNjNDg1ZTc1YWY0NzA0MTA2OThkYzBlNTUzZGQ5V5MaRg==: --dhchap-ctrl-secret DHHC-1:01:MTAwMmE1NzE3MjQyYzAxNjZlZTg0MjJlYzI0ZjUyODYKYUfB: 00:19:31.273 09:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YWU0NDMwZDJiOTU1OTQ5OTEwNTNjNDg1ZTc1YWY0NzA0MTA2OThkYzBlNTUzZGQ5V5MaRg==: --dhchap-ctrl-secret DHHC-1:01:MTAwMmE1NzE3MjQyYzAxNjZlZTg0MjJlYzI0ZjUyODYKYUfB: 00:19:32.207 09:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:32.207 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:32.207 09:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:32.207 09:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.207 09:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.207 09:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.207 09:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:32.207 09:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:32.207 09:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:32.465 09:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:19:32.465 09:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:32.465 09:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:32.465 09:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:32.465 09:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:32.465 09:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:32.465 09:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:32.465 09:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.465 09:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.465 09:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.466 09:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
00:19:32.466 09:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:32.466 09:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:33.032 00:19:33.032 09:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:33.032 09:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:33.032 09:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:33.290 09:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.290 09:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:33.290 09:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.290 09:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.290 09:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.291 09:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:33.291 { 00:19:33.291 "cntlid": 71, 00:19:33.291 "qid": 0, 00:19:33.291 "state": "enabled", 00:19:33.291 "thread": "nvmf_tgt_poll_group_000", 00:19:33.291 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:33.291 "listen_address": { 00:19:33.291 "trtype": "TCP", 00:19:33.291 "adrfam": "IPv4", 00:19:33.291 "traddr": "10.0.0.2", 00:19:33.291 "trsvcid": "4420" 00:19:33.291 }, 00:19:33.291 "peer_address": { 00:19:33.291 "trtype": "TCP", 00:19:33.291 "adrfam": "IPv4", 00:19:33.291 "traddr": "10.0.0.1", 00:19:33.291 "trsvcid": "37200" 00:19:33.291 }, 00:19:33.291 "auth": { 00:19:33.291 "state": "completed", 00:19:33.291 "digest": "sha384", 00:19:33.291 "dhgroup": "ffdhe3072" 00:19:33.291 } 00:19:33.291 } 00:19:33.291 ]' 00:19:33.291 09:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:33.291 09:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:33.291 09:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:33.291 09:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:33.291 09:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:33.291 09:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:33.291 09:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:33.291 09:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:33.549 09:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWY3YzFmODRkOGI4YjFlZDA2ZTY2ZWI3Y2M1ZmY0YmE2MTA4YjA0ZTYyZjRiOWQ5ZDhjMjE1NmZjZDczYjQxNQ5mL7E=: 00:19:33.549 09:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OWY3YzFmODRkOGI4YjFlZDA2ZTY2ZWI3Y2M1ZmY0YmE2MTA4YjA0ZTYyZjRiOWQ5ZDhjMjE1NmZjZDczYjQxNQ5mL7E=: 00:19:34.484 09:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:34.484 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:34.484 09:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:34.484 09:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.484 09:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.484 09:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.484 09:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:34.484 09:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:34.484 09:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:34.484 09:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:34.742 09:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:19:34.742 09:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:34.742 09:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:34.742 09:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:34.742 09:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:34.742 09:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:34.742 09:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:34.742 09:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.742 09:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.742 09:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:19:34.742 09:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:34.742 09:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:34.742 09:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:35.308 00:19:35.309 09:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:35.309 09:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:35.309 09:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:35.567 09:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.567 09:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:35.567 09:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.567 09:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.567 09:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.567 09:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:35.567 { 00:19:35.567 "cntlid": 73, 00:19:35.567 "qid": 0, 00:19:35.567 "state": "enabled", 00:19:35.567 "thread": "nvmf_tgt_poll_group_000", 00:19:35.567 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:35.567 "listen_address": { 00:19:35.567 "trtype": "TCP", 00:19:35.567 "adrfam": "IPv4", 00:19:35.567 "traddr": "10.0.0.2", 00:19:35.567 "trsvcid": "4420" 00:19:35.567 }, 00:19:35.567 "peer_address": { 00:19:35.567 "trtype": "TCP", 00:19:35.567 "adrfam": "IPv4", 00:19:35.567 "traddr": "10.0.0.1", 00:19:35.567 "trsvcid": "37220" 00:19:35.567 }, 00:19:35.567 "auth": { 00:19:35.567 "state": "completed", 00:19:35.567 "digest": "sha384", 00:19:35.567 "dhgroup": "ffdhe4096" 00:19:35.567 } 00:19:35.567 } 00:19:35.567 ]' 00:19:35.567 09:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:35.567 09:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:35.567 09:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:35.567 09:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:35.567 09:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:35.567 09:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:35.567 
09:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:35.567 09:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:36.135 09:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDNhZDkxMGU1Mzk5N2JmNjFhMWRkNzA4ZjAzMGE4ZmExZjQ4YzgxMmQwZTdlYjgylIkWIw==: --dhchap-ctrl-secret DHHC-1:03:NmQwNmVhM2YyYjZkNWQ3NmQzYTI2OTY4YzFkZTE1OWU0ZDhkNTZlZmQxODYxZGQxODEyMjdlOTNmNGNiM2YwNX5NDuE=: 00:19:36.135 09:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NDNhZDkxMGU1Mzk5N2JmNjFhMWRkNzA4ZjAzMGE4ZmExZjQ4YzgxMmQwZTdlYjgylIkWIw==: --dhchap-ctrl-secret DHHC-1:03:NmQwNmVhM2YyYjZkNWQ3NmQzYTI2OTY4YzFkZTE1OWU0ZDhkNTZlZmQxODYxZGQxODEyMjdlOTNmNGNiM2YwNX5NDuE=: 00:19:37.069 09:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:37.069 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:37.069 09:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:37.069 09:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.069 09:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.069 09:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.069 09:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:37.069 09:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:37.069 09:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:37.327 09:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:19:37.327 09:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:37.327 09:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:37.327 09:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:37.327 09:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:37.327 09:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:37.327 09:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.327 09:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.327 09:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.327 09:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.327 09:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.327 09:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.327 09:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.586 00:19:37.586 09:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:37.586 09:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:37.586 09:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:38.152 09:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.152 09:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:38.152 09:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.152 09:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.152 09:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.152 09:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:38.152 { 00:19:38.152 "cntlid": 75, 00:19:38.152 "qid": 0, 00:19:38.152 "state": "enabled", 00:19:38.152 "thread": "nvmf_tgt_poll_group_000", 00:19:38.152 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:38.152 "listen_address": { 00:19:38.152 "trtype": "TCP", 00:19:38.152 "adrfam": "IPv4", 00:19:38.152 "traddr": "10.0.0.2", 00:19:38.152 "trsvcid": "4420" 00:19:38.152 }, 00:19:38.152 "peer_address": { 00:19:38.152 "trtype": "TCP", 00:19:38.152 "adrfam": "IPv4", 00:19:38.152 "traddr": "10.0.0.1", 00:19:38.152 "trsvcid": "37256" 00:19:38.152 }, 00:19:38.152 "auth": { 00:19:38.152 "state": "completed", 00:19:38.152 "digest": "sha384", 00:19:38.152 "dhgroup": "ffdhe4096" 00:19:38.152 } 00:19:38.152 } 00:19:38.152 ]' 00:19:38.152 09:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:38.152 09:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:38.152 09:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:38.152 09:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:19:38.152 09:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:38.152 09:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:38.152 09:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:38.152 09:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:38.410 09:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTQ2NDI4ODQwZTBhYjFiODhkZDYyNTM2NzJlMTBkZDid1XeT: --dhchap-ctrl-secret DHHC-1:02:ZmNhNzQ3MzYyZGVlMjAzOWMwZTFhOGJlN2IyNzI2ODEwNDZjMTU5YzdiYTIwYTljuRKexg==: 00:19:38.410 09:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:OTQ2NDI4ODQwZTBhYjFiODhkZDYyNTM2NzJlMTBkZDid1XeT: --dhchap-ctrl-secret DHHC-1:02:ZmNhNzQ3MzYyZGVlMjAzOWMwZTFhOGJlN2IyNzI2ODEwNDZjMTU5YzdiYTIwYTljuRKexg==: 00:19:39.344 09:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:39.344 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:39.344 09:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:39.344 09:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.344 09:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.344 09:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.344 09:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:39.344 09:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:39.344 09:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:39.602 09:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:19:39.602 09:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:39.602 09:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:39.602 09:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:39.602 09:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:39.602 09:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:39.602 09:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:39.602 09:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.602 09:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.602 09:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.602 09:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:39.602 09:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:39.603 09:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:40.169 00:19:40.169 09:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:40.169 09:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:40.170 09:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:40.428 09:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.428 09:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:40.428 09:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.428 09:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.428 09:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.428 09:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:40.428 { 00:19:40.428 "cntlid": 77, 00:19:40.428 "qid": 0, 00:19:40.428 "state": "enabled", 00:19:40.428 "thread": "nvmf_tgt_poll_group_000", 00:19:40.428 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:40.428 "listen_address": { 00:19:40.428 "trtype": "TCP", 00:19:40.428 "adrfam": "IPv4", 00:19:40.428 "traddr": "10.0.0.2", 00:19:40.428 "trsvcid": "4420" 00:19:40.428 }, 00:19:40.428 "peer_address": { 00:19:40.428 "trtype": "TCP", 00:19:40.428 "adrfam": "IPv4", 00:19:40.428 "traddr": "10.0.0.1", 00:19:40.428 "trsvcid": "37272" 00:19:40.428 }, 00:19:40.428 "auth": { 00:19:40.428 "state": "completed", 00:19:40.428 "digest": "sha384", 00:19:40.428 "dhgroup": "ffdhe4096" 00:19:40.428 } 00:19:40.428 } 00:19:40.428 ]' 00:19:40.428 09:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:40.428 09:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:40.428 09:19:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:40.428 09:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:40.428 09:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:40.428 09:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:40.428 09:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:40.428 09:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:40.994 09:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWU0NDMwZDJiOTU1OTQ5OTEwNTNjNDg1ZTc1YWY0NzA0MTA2OThkYzBlNTUzZGQ5V5MaRg==: --dhchap-ctrl-secret DHHC-1:01:MTAwMmE1NzE3MjQyYzAxNjZlZTg0MjJlYzI0ZjUyODYKYUfB: 00:19:40.995 09:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YWU0NDMwZDJiOTU1OTQ5OTEwNTNjNDg1ZTc1YWY0NzA0MTA2OThkYzBlNTUzZGQ5V5MaRg==: --dhchap-ctrl-secret DHHC-1:01:MTAwMmE1NzE3MjQyYzAxNjZlZTg0MjJlYzI0ZjUyODYKYUfB: 00:19:41.931 09:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:41.931 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:41.931 09:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:41.931 09:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.931 09:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.931 09:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.931 09:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:41.931 09:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:41.931 09:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:42.189 09:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:19:42.189 09:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:42.189 09:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:42.189 09:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:42.189 09:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:42.189 09:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:42.189 09:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:42.189 09:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.189 09:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.189 09:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.189 09:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:42.189 09:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:42.189 09:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:42.446 00:19:42.704 09:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:42.704 09:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:42.704 09:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:42.963 09:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.963 09:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:42.963 09:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.963 09:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.963 09:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.963 09:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:42.963 { 00:19:42.963 "cntlid": 79, 00:19:42.963 "qid": 0, 00:19:42.963 "state": "enabled", 00:19:42.963 "thread": "nvmf_tgt_poll_group_000", 00:19:42.963 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:42.963 "listen_address": { 00:19:42.963 "trtype": "TCP", 00:19:42.963 "adrfam": "IPv4", 00:19:42.963 "traddr": "10.0.0.2", 00:19:42.963 "trsvcid": "4420" 00:19:42.963 }, 00:19:42.963 "peer_address": { 00:19:42.963 "trtype": "TCP", 00:19:42.963 "adrfam": "IPv4", 00:19:42.963 "traddr": "10.0.0.1", 00:19:42.963 "trsvcid": "50146" 00:19:42.963 }, 00:19:42.963 "auth": { 00:19:42.963 "state": "completed", 00:19:42.963 "digest": "sha384", 00:19:42.963 "dhgroup": "ffdhe4096" 00:19:42.963 } 00:19:42.963 } 00:19:42.963 ]' 00:19:42.963 09:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:42.963 09:19:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:42.963 09:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:42.963 09:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:42.963 09:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:42.963 09:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:42.963 09:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:42.963 09:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:43.221 09:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWY3YzFmODRkOGI4YjFlZDA2ZTY2ZWI3Y2M1ZmY0YmE2MTA4YjA0ZTYyZjRiOWQ5ZDhjMjE1NmZjZDczYjQxNQ5mL7E=: 00:19:43.221 09:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OWY3YzFmODRkOGI4YjFlZDA2ZTY2ZWI3Y2M1ZmY0YmE2MTA4YjA0ZTYyZjRiOWQ5ZDhjMjE1NmZjZDczYjQxNQ5mL7E=: 00:19:44.155 09:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:44.413 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:44.413 09:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:44.413 09:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.413 09:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.413 09:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.413 09:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:44.413 09:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:44.413 09:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:44.413 09:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:44.671 09:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:19:44.671 09:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:44.671 09:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:44.671 09:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:44.671 09:19:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:44.671 09:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:44.671 09:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:44.671 09:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.671 09:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.671 09:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.671 09:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:44.671 09:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:44.671 09:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:45.237 00:19:45.237 09:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:45.237 09:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:45.237 09:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:45.505 09:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.505 09:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:45.505 09:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.505 09:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.505 09:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.505 09:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:45.505 { 00:19:45.505 "cntlid": 81, 00:19:45.505 "qid": 0, 00:19:45.505 "state": "enabled", 00:19:45.505 "thread": "nvmf_tgt_poll_group_000", 00:19:45.505 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:45.505 "listen_address": { 00:19:45.505 "trtype": "TCP", 00:19:45.505 "adrfam": "IPv4", 00:19:45.505 "traddr": "10.0.0.2", 00:19:45.505 "trsvcid": "4420" 00:19:45.505 }, 00:19:45.505 "peer_address": { 00:19:45.505 "trtype": "TCP", 00:19:45.505 "adrfam": "IPv4", 00:19:45.505 "traddr": "10.0.0.1", 00:19:45.505 "trsvcid": "50176" 00:19:45.505 }, 00:19:45.505 "auth": { 00:19:45.505 "state": "completed", 00:19:45.505 "digest": 
"sha384", 00:19:45.505 "dhgroup": "ffdhe6144" 00:19:45.505 } 00:19:45.505 } 00:19:45.505 ]' 00:19:45.505 09:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:45.505 09:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:45.505 09:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:45.505 09:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:45.505 09:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:45.505 09:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:45.505 09:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:45.505 09:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:45.764 09:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDNhZDkxMGU1Mzk5N2JmNjFhMWRkNzA4ZjAzMGE4ZmExZjQ4YzgxMmQwZTdlYjgylIkWIw==: --dhchap-ctrl-secret DHHC-1:03:NmQwNmVhM2YyYjZkNWQ3NmQzYTI2OTY4YzFkZTE1OWU0ZDhkNTZlZmQxODYxZGQxODEyMjdlOTNmNGNiM2YwNX5NDuE=: 00:19:45.764 09:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NDNhZDkxMGU1Mzk5N2JmNjFhMWRkNzA4ZjAzMGE4ZmExZjQ4YzgxMmQwZTdlYjgylIkWIw==: --dhchap-ctrl-secret DHHC-1:03:NmQwNmVhM2YyYjZkNWQ3NmQzYTI2OTY4YzFkZTE1OWU0ZDhkNTZlZmQxODYxZGQxODEyMjdlOTNmNGNiM2YwNX5NDuE=: 00:19:46.697 09:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:46.697 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:46.697 09:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:46.697 09:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.697 09:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.697 09:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.697 09:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:46.697 09:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:46.697 09:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:47.265 09:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:19:47.265 09:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:47.265 09:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:47.265 09:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:47.265 09:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:47.265 09:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:47.265 09:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:47.265 09:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.265 09:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.265 09:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.265 09:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:47.265 09:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:47.265 09:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:47.831 00:19:47.831 09:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:47.831 09:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:47.831 09:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:47.831 09:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.831 09:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:47.831 09:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.831 09:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.831 09:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.089 09:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:48.089 { 00:19:48.089 "cntlid": 83, 00:19:48.089 "qid": 0, 00:19:48.089 "state": "enabled", 00:19:48.089 "thread": "nvmf_tgt_poll_group_000", 00:19:48.089 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:48.089 "listen_address": { 00:19:48.089 "trtype": "TCP", 00:19:48.089 "adrfam": "IPv4", 00:19:48.089 "traddr": "10.0.0.2", 00:19:48.089 
"trsvcid": "4420" 00:19:48.089 }, 00:19:48.089 "peer_address": { 00:19:48.089 "trtype": "TCP", 00:19:48.089 "adrfam": "IPv4", 00:19:48.089 "traddr": "10.0.0.1", 00:19:48.089 "trsvcid": "50204" 00:19:48.089 }, 00:19:48.089 "auth": { 00:19:48.089 "state": "completed", 00:19:48.089 "digest": "sha384", 00:19:48.089 "dhgroup": "ffdhe6144" 00:19:48.089 } 00:19:48.089 } 00:19:48.089 ]' 00:19:48.089 09:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:48.089 09:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:48.089 09:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:48.089 09:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:48.089 09:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:48.089 09:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:48.089 09:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:48.089 09:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:48.347 09:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTQ2NDI4ODQwZTBhYjFiODhkZDYyNTM2NzJlMTBkZDid1XeT: --dhchap-ctrl-secret DHHC-1:02:ZmNhNzQ3MzYyZGVlMjAzOWMwZTFhOGJlN2IyNzI2ODEwNDZjMTU5YzdiYTIwYTljuRKexg==: 00:19:48.347 09:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:OTQ2NDI4ODQwZTBhYjFiODhkZDYyNTM2NzJlMTBkZDid1XeT: --dhchap-ctrl-secret DHHC-1:02:ZmNhNzQ3MzYyZGVlMjAzOWMwZTFhOGJlN2IyNzI2ODEwNDZjMTU5YzdiYTIwYTljuRKexg==: 00:19:49.280 09:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:49.280 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:49.280 09:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:49.280 09:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.280 09:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.280 09:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.280 09:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:49.281 09:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:49.281 09:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:49.538 
09:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:19:49.539 09:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:49.539 09:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:49.539 09:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:49.539 09:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:49.539 09:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:49.539 09:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:49.539 09:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.539 09:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.539 09:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.539 09:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:49.539 09:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:49.539 09:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:50.104 00:19:50.104 09:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:50.104 09:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:50.104 09:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:50.362 09:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.362 09:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:50.362 09:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.362 09:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.362 09:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.362 09:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:50.362 { 00:19:50.362 "cntlid": 85, 00:19:50.362 "qid": 0, 00:19:50.362 "state": "enabled", 00:19:50.362 "thread": "nvmf_tgt_poll_group_000", 00:19:50.362 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:50.362 "listen_address": { 00:19:50.362 "trtype": "TCP", 00:19:50.362 "adrfam": "IPv4", 00:19:50.362 "traddr": "10.0.0.2", 00:19:50.362 "trsvcid": "4420" 00:19:50.362 }, 00:19:50.362 "peer_address": { 00:19:50.362 "trtype": "TCP", 00:19:50.362 "adrfam": "IPv4", 00:19:50.362 "traddr": "10.0.0.1", 00:19:50.362 "trsvcid": "50226" 00:19:50.362 }, 00:19:50.362 "auth": { 00:19:50.362 "state": "completed", 00:19:50.362 "digest": "sha384", 00:19:50.362 "dhgroup": "ffdhe6144" 00:19:50.362 } 00:19:50.362 } 00:19:50.362 ]' 00:19:50.362 09:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:50.620 09:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:50.620 09:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:50.620 09:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:50.620 09:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:50.620 09:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:50.620 09:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:50.620 09:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:50.878 09:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWU0NDMwZDJiOTU1OTQ5OTEwNTNjNDg1ZTc1YWY0NzA0MTA2OThkYzBlNTUzZGQ5V5MaRg==: --dhchap-ctrl-secret DHHC-1:01:MTAwMmE1NzE3MjQyYzAxNjZlZTg0MjJlYzI0ZjUyODYKYUfB: 00:19:50.878 09:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YWU0NDMwZDJiOTU1OTQ5OTEwNTNjNDg1ZTc1YWY0NzA0MTA2OThkYzBlNTUzZGQ5V5MaRg==: --dhchap-ctrl-secret DHHC-1:01:MTAwMmE1NzE3MjQyYzAxNjZlZTg0MjJlYzI0ZjUyODYKYUfB: 00:19:51.812 09:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:51.812 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:51.812 09:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:51.812 09:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.812 09:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.812 09:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.812 09:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:51.812 09:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:51.812 09:19:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:52.070 09:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:19:52.070 09:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:52.070 09:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:52.070 09:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:52.071 09:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:52.071 09:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:52.071 09:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:52.071 09:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.071 09:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.071 09:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.071 09:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:52.071 09:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:52.071 09:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:52.637 00:19:52.637 09:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:52.637 09:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:52.637 09:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:52.896 09:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.896 09:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:52.896 09:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.896 09:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.896 09:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.896 09:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:52.896 { 00:19:52.896 "cntlid": 87, 
00:19:52.896 "qid": 0, 00:19:52.896 "state": "enabled", 00:19:52.896 "thread": "nvmf_tgt_poll_group_000", 00:19:52.896 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:52.896 "listen_address": { 00:19:52.896 "trtype": "TCP", 00:19:52.896 "adrfam": "IPv4", 00:19:52.896 "traddr": "10.0.0.2", 00:19:52.896 "trsvcid": "4420" 00:19:52.896 }, 00:19:52.896 "peer_address": { 00:19:52.896 "trtype": "TCP", 00:19:52.896 "adrfam": "IPv4", 00:19:52.896 "traddr": "10.0.0.1", 00:19:52.896 "trsvcid": "47942" 00:19:52.896 }, 00:19:52.896 "auth": { 00:19:52.896 "state": "completed", 00:19:52.896 "digest": "sha384", 00:19:52.896 "dhgroup": "ffdhe6144" 00:19:52.896 } 00:19:52.896 } 00:19:52.896 ]' 00:19:52.896 09:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:52.896 09:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:52.896 09:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:53.154 09:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:53.154 09:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:53.154 09:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:53.154 09:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:53.154 09:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:53.412 09:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWY3YzFmODRkOGI4YjFlZDA2ZTY2ZWI3Y2M1ZmY0YmE2MTA4YjA0ZTYyZjRiOWQ5ZDhjMjE1NmZjZDczYjQxNQ5mL7E=: 00:19:53.412 09:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OWY3YzFmODRkOGI4YjFlZDA2ZTY2ZWI3Y2M1ZmY0YmE2MTA4YjA0ZTYyZjRiOWQ5ZDhjMjE1NmZjZDczYjQxNQ5mL7E=: 00:19:54.349 09:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:54.349 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:54.349 09:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:54.349 09:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.349 09:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.349 09:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.349 09:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:54.349 09:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:54.349 09:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:54.349 09:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:54.606 09:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:19:54.606 09:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:54.606 09:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:54.606 09:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:54.606 09:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:54.606 09:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:54.606 09:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:54.606 09:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.606 09:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.863 09:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.863 09:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:54.863 09:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:54.863 09:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:55.797 00:19:55.797 09:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:55.797 09:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:55.797 09:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:56.055 09:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.055 09:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:56.055 09:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.055 09:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.055 09:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.055 09:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:56.055 { 00:19:56.055 "cntlid": 89, 00:19:56.055 "qid": 0, 00:19:56.055 "state": "enabled", 00:19:56.055 "thread": "nvmf_tgt_poll_group_000", 00:19:56.055 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:56.055 "listen_address": { 00:19:56.055 "trtype": "TCP", 00:19:56.055 "adrfam": "IPv4", 00:19:56.055 "traddr": "10.0.0.2", 00:19:56.055 "trsvcid": "4420" 00:19:56.055 }, 00:19:56.055 "peer_address": { 00:19:56.055 "trtype": "TCP", 00:19:56.055 "adrfam": "IPv4", 00:19:56.055 "traddr": "10.0.0.1", 00:19:56.055 "trsvcid": "47974" 00:19:56.055 }, 00:19:56.055 "auth": { 00:19:56.055 "state": "completed", 00:19:56.055 "digest": "sha384", 00:19:56.055 "dhgroup": "ffdhe8192" 00:19:56.055 } 00:19:56.055 } 00:19:56.055 ]' 00:19:56.055 09:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:56.055 09:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:56.055 09:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:56.056 09:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:56.056 09:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:56.056 09:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:56.056 09:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:56.056 09:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:56.313 09:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDNhZDkxMGU1Mzk5N2JmNjFhMWRkNzA4ZjAzMGE4ZmExZjQ4YzgxMmQwZTdlYjgylIkWIw==: --dhchap-ctrl-secret DHHC-1:03:NmQwNmVhM2YyYjZkNWQ3NmQzYTI2OTY4YzFkZTE1OWU0ZDhkNTZlZmQxODYxZGQxODEyMjdlOTNmNGNiM2YwNX5NDuE=: 00:19:56.313 09:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NDNhZDkxMGU1Mzk5N2JmNjFhMWRkNzA4ZjAzMGE4ZmExZjQ4YzgxMmQwZTdlYjgylIkWIw==: --dhchap-ctrl-secret DHHC-1:03:NmQwNmVhM2YyYjZkNWQ3NmQzYTI2OTY4YzFkZTE1OWU0ZDhkNTZlZmQxODYxZGQxODEyMjdlOTNmNGNiM2YwNX5NDuE=: 00:19:57.245 09:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:57.245 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:57.245 09:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:57.245 09:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.245 09:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.245 09:20:02 
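Besides the SPDK bdev path, every iteration also exercises the kernel initiator. nvme-cli takes the raw DHHC-1 secret strings directly rather than key names; the connect/disconnect pair for the key0/ckey0 case above looks roughly like this (secrets truncated here, the full DHHC-1 strings appear verbatim in the log):

  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
      --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 \
      --dhchap-secret 'DHHC-1:00:NDNhZDkx...' --dhchap-ctrl-secret 'DHHC-1:03:NmQwNmVh...'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0

--dhchap-secret is the host's secret and --dhchap-ctrl-secret the controller's, so the same bidirectional pairing configured on the target with nvmf_subsystem_add_host is replayed through the kernel stack.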
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.245 09:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:57.245 09:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:57.245 09:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:57.503 09:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:19:57.503 09:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:57.503 09:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:57.503 09:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:57.503 09:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:57.503 09:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:57.503 09:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:57.503 09:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.503 09:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.761 09:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.761 09:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:57.761 09:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:57.761 09:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:58.329 00:19:58.587 09:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:58.587 09:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:58.587 09:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:58.845 09:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.845 09:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:19:58.845 09:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.845 09:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.845 09:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.845 09:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:58.845 { 00:19:58.845 "cntlid": 91, 00:19:58.845 "qid": 0, 00:19:58.845 "state": "enabled", 00:19:58.845 "thread": "nvmf_tgt_poll_group_000", 00:19:58.845 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:58.845 "listen_address": { 00:19:58.845 "trtype": "TCP", 00:19:58.845 "adrfam": "IPv4", 00:19:58.845 "traddr": "10.0.0.2", 00:19:58.845 "trsvcid": "4420" 00:19:58.845 }, 00:19:58.845 "peer_address": { 00:19:58.845 "trtype": "TCP", 00:19:58.845 "adrfam": "IPv4", 00:19:58.845 "traddr": "10.0.0.1", 00:19:58.845 "trsvcid": "48008" 00:19:58.845 }, 00:19:58.845 "auth": { 00:19:58.845 "state": "completed", 00:19:58.845 "digest": "sha384", 00:19:58.845 "dhgroup": "ffdhe8192" 00:19:58.845 } 00:19:58.845 } 00:19:58.845 ]' 00:19:58.845 09:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:58.845 09:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:58.845 09:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:58.845 09:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:58.845 09:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:58.845 09:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:58.845 09:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:58.845 09:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:59.103 09:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTQ2NDI4ODQwZTBhYjFiODhkZDYyNTM2NzJlMTBkZDid1XeT: --dhchap-ctrl-secret DHHC-1:02:ZmNhNzQ3MzYyZGVlMjAzOWMwZTFhOGJlN2IyNzI2ODEwNDZjMTU5YzdiYTIwYTljuRKexg==: 00:19:59.103 09:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:OTQ2NDI4ODQwZTBhYjFiODhkZDYyNTM2NzJlMTBkZDid1XeT: --dhchap-ctrl-secret DHHC-1:02:ZmNhNzQ3MzYyZGVlMjAzOWMwZTFhOGJlN2IyNzI2ODEwNDZjMTU5YzdiYTIwYTljuRKexg==: 00:20:00.538 09:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:00.538 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:00.538 09:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:00.538 09:20:05 
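Each pass ends with a symmetric teardown so the next digest/dhgroup/key combination starts from a clean slate: the bdev controller is detached before the nvme-cli connect, and the host entry is revoked once nvme-cli has disconnected. Roughly:

  hostrpc bdev_nvme_detach_controller nvme0        # drop the host-side bdev controller
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0    # tear down the kernel-initiator session
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55   # revoke host + key binding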
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.538 09:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.538 09:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.538 09:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:00.538 09:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:00.538 09:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:00.538 09:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:20:00.538 09:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:00.538 09:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:00.538 09:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:00.538 09:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:00.538 09:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:00.538 09:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:00.538 09:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.538 09:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.538 09:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.538 09:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:00.538 09:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:00.538 09:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:01.472 00:20:01.472 09:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:01.472 09:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:01.472 09:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:01.730 09:20:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.730 09:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:01.730 09:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.730 09:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.730 09:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.730 09:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:01.730 { 00:20:01.730 "cntlid": 93, 00:20:01.730 "qid": 0, 00:20:01.730 "state": "enabled", 00:20:01.730 "thread": "nvmf_tgt_poll_group_000", 00:20:01.730 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:01.730 "listen_address": { 00:20:01.730 "trtype": "TCP", 00:20:01.730 "adrfam": "IPv4", 00:20:01.730 "traddr": "10.0.0.2", 00:20:01.730 "trsvcid": "4420" 00:20:01.730 }, 00:20:01.730 "peer_address": { 00:20:01.730 "trtype": "TCP", 00:20:01.730 "adrfam": "IPv4", 00:20:01.730 "traddr": "10.0.0.1", 00:20:01.730 "trsvcid": "36988" 00:20:01.730 }, 00:20:01.730 "auth": { 00:20:01.730 "state": "completed", 00:20:01.730 "digest": "sha384", 00:20:01.730 "dhgroup": "ffdhe8192" 00:20:01.730 } 00:20:01.730 } 00:20:01.730 ]' 00:20:01.730 09:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:01.730 09:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:01.730 09:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:01.730 09:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:01.730 09:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:01.730 09:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:01.730 09:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:01.730 09:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:01.988 09:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWU0NDMwZDJiOTU1OTQ5OTEwNTNjNDg1ZTc1YWY0NzA0MTA2OThkYzBlNTUzZGQ5V5MaRg==: --dhchap-ctrl-secret DHHC-1:01:MTAwMmE1NzE3MjQyYzAxNjZlZTg0MjJlYzI0ZjUyODYKYUfB: 00:20:01.988 09:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YWU0NDMwZDJiOTU1OTQ5OTEwNTNjNDg1ZTc1YWY0NzA0MTA2OThkYzBlNTUzZGQ5V5MaRg==: --dhchap-ctrl-secret DHHC-1:01:MTAwMmE1NzE3MjQyYzAxNjZlZTg0MjJlYzI0ZjUyODYKYUfB: 00:20:02.922 09:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:02.922 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:02.922 09:20:07 
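The [[ nvme0 == \n\v\m\e\0 ]] comparisons scattered through these passes come from a small check that lists the host's attached controllers and expects exactly the name passed to -b; approximately:

  name=$(hostrpc bdev_nvme_get_controllers | jq -r '.[].name')
  [[ "$name" == "nvme0" ]]   # attach, and therefore authentication, succeeded

Since a failed DH-HMAC-CHAP handshake leaves no controller behind, this is the cheapest "did auth work at all" probe before the more detailed qpair inspection.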
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:02.922 09:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.922 09:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.922 09:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.922 09:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:02.922 09:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:02.922 09:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:03.181 09:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:20:03.181 09:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:03.181 09:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:03.181 09:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:03.181 09:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:03.181 09:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:03.181 09:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:03.181 09:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.181 09:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.181 09:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.181 09:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:03.181 09:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:03.181 09:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:04.115 00:20:04.115 09:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:04.115 09:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:04.115 09:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:04.386 09:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.386 09:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:04.386 09:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.386 09:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.386 09:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.386 09:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:04.386 { 00:20:04.386 "cntlid": 95, 00:20:04.386 "qid": 0, 00:20:04.386 "state": "enabled", 00:20:04.386 "thread": "nvmf_tgt_poll_group_000", 00:20:04.386 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:04.386 "listen_address": { 00:20:04.386 "trtype": "TCP", 00:20:04.386 "adrfam": "IPv4", 00:20:04.386 "traddr": "10.0.0.2", 00:20:04.386 "trsvcid": "4420" 00:20:04.386 }, 00:20:04.386 "peer_address": { 00:20:04.386 "trtype": "TCP", 00:20:04.386 "adrfam": "IPv4", 00:20:04.386 "traddr": "10.0.0.1", 00:20:04.386 "trsvcid": "37012" 00:20:04.386 }, 00:20:04.386 "auth": { 00:20:04.386 "state": "completed", 00:20:04.386 "digest": "sha384", 00:20:04.386 "dhgroup": "ffdhe8192" 00:20:04.386 } 00:20:04.386 } 00:20:04.386 ]' 00:20:04.387 09:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:04.647 09:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:04.647 09:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:04.647 09:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:04.647 09:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:04.647 09:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:04.647 09:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:04.647 09:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:04.905 09:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWY3YzFmODRkOGI4YjFlZDA2ZTY2ZWI3Y2M1ZmY0YmE2MTA4YjA0ZTYyZjRiOWQ5ZDhjMjE1NmZjZDczYjQxNQ5mL7E=: 00:20:04.905 09:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OWY3YzFmODRkOGI4YjFlZDA2ZTY2ZWI3Y2M1ZmY0YmE2MTA4YjA0ZTYyZjRiOWQ5ZDhjMjE1NmZjZDczYjQxNQ5mL7E=: 00:20:05.839 09:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:05.839 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:05.839 09:20:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:05.839 09:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.839 09:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.839 09:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.839 09:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:05.839 09:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:05.839 09:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:05.839 09:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:05.839 09:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:06.097 09:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:20:06.097 09:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:06.097 09:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:06.097 09:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:06.097 09:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:06.097 09:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:06.097 09:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:06.097 09:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.097 09:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.097 09:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.097 09:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:06.097 09:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:06.097 09:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:06.355 00:20:06.613 
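The "for digest" / "for dhgroup" / "for keyid" markers above show the overall shape of the sweep: the host is reconfigured for one digest/DH-group pair at a time and every key index is then exercised against it, here switching to sha512 with the null DH group (which, per the DH-HMAC-CHAP spec, skips the FFDHE exchange and relies on the challenge/response alone). A schematic of the loop, assuming digests/dhgroups/keys arrays populated earlier in auth.sh:

  for digest in "${digests[@]}"; do        # e.g. sha384, sha512 in this part of the log
      for dhgroup in "${dhgroups[@]}"; do  # e.g. ffdhe6144, ffdhe8192, null
          for keyid in "${!keys[@]}"; do   # key indices 0..3
              hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
              connect_authenticate "$digest" "$dhgroup" "$keyid"
          done
      done
  done

Restricting the host to a single digest and DH group per iteration forces the negotiation onto exactly the combination being verified by the subsequent qpair checks.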
09:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:06.613 09:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:06.613 09:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:06.871 09:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.871 09:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:06.871 09:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.871 09:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.871 09:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.871 09:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:06.871 { 00:20:06.871 "cntlid": 97, 00:20:06.871 "qid": 0, 00:20:06.871 "state": "enabled", 00:20:06.871 "thread": "nvmf_tgt_poll_group_000", 00:20:06.871 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:06.871 "listen_address": { 00:20:06.871 "trtype": "TCP", 00:20:06.871 "adrfam": "IPv4", 00:20:06.871 "traddr": "10.0.0.2", 00:20:06.871 "trsvcid": "4420" 00:20:06.871 }, 00:20:06.871 "peer_address": { 00:20:06.871 "trtype": "TCP", 00:20:06.871 "adrfam": "IPv4", 00:20:06.871 "traddr": "10.0.0.1", 00:20:06.871 "trsvcid": "37032" 00:20:06.871 }, 00:20:06.871 "auth": { 00:20:06.871 "state": "completed", 00:20:06.871 "digest": "sha512", 00:20:06.871 "dhgroup": "null" 00:20:06.871 } 00:20:06.871 } 00:20:06.871 ]' 00:20:06.871 09:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:06.871 09:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:06.871 09:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:06.871 09:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:06.871 09:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:06.871 09:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:06.871 09:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:06.871 09:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:07.129 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDNhZDkxMGU1Mzk5N2JmNjFhMWRkNzA4ZjAzMGE4ZmExZjQ4YzgxMmQwZTdlYjgylIkWIw==: --dhchap-ctrl-secret DHHC-1:03:NmQwNmVhM2YyYjZkNWQ3NmQzYTI2OTY4YzFkZTE1OWU0ZDhkNTZlZmQxODYxZGQxODEyMjdlOTNmNGNiM2YwNX5NDuE=: 00:20:07.129 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NDNhZDkxMGU1Mzk5N2JmNjFhMWRkNzA4ZjAzMGE4ZmExZjQ4YzgxMmQwZTdlYjgylIkWIw==: --dhchap-ctrl-secret DHHC-1:03:NmQwNmVhM2YyYjZkNWQ3NmQzYTI2OTY4YzFkZTE1OWU0ZDhkNTZlZmQxODYxZGQxODEyMjdlOTNmNGNiM2YwNX5NDuE=: 00:20:08.062 09:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:08.062 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:08.062 09:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:08.062 09:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.062 09:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.062 09:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.062 09:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:08.062 09:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:08.062 09:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:08.320 09:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:20:08.320 09:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:08.320 09:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:08.320 09:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:08.320 09:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:08.320 09:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:08.320 09:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:08.320 09:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.320 09:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.578 09:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.578 09:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:08.578 09:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:08.578 09:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:08.836 00:20:08.836 09:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:08.836 09:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:08.836 09:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:09.093 09:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.093 09:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:09.093 09:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.093 09:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.093 09:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.093 09:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:09.093 { 00:20:09.093 "cntlid": 99, 00:20:09.093 "qid": 0, 00:20:09.093 "state": "enabled", 00:20:09.093 "thread": "nvmf_tgt_poll_group_000", 00:20:09.093 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:09.093 "listen_address": { 00:20:09.093 "trtype": "TCP", 00:20:09.093 "adrfam": "IPv4", 00:20:09.093 "traddr": "10.0.0.2", 00:20:09.093 "trsvcid": "4420" 00:20:09.093 }, 00:20:09.093 "peer_address": { 00:20:09.093 "trtype": "TCP", 00:20:09.093 "adrfam": "IPv4", 00:20:09.093 "traddr": "10.0.0.1", 00:20:09.093 "trsvcid": "37046" 00:20:09.093 }, 00:20:09.093 "auth": { 00:20:09.093 "state": "completed", 00:20:09.093 "digest": "sha512", 00:20:09.093 "dhgroup": "null" 00:20:09.093 } 00:20:09.093 } 00:20:09.093 ]' 00:20:09.094 09:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:09.094 09:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:09.094 09:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:09.094 09:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:09.094 09:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:09.094 09:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:09.094 09:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:09.094 09:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:09.351 09:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTQ2NDI4ODQwZTBhYjFiODhkZDYyNTM2NzJlMTBkZDid1XeT: --dhchap-ctrl-secret DHHC-1:02:ZmNhNzQ3MzYyZGVlMjAzOWMwZTFhOGJlN2IyNzI2ODEwNDZjMTU5YzdiYTIwYTljuRKexg==: 00:20:09.351 09:20:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:OTQ2NDI4ODQwZTBhYjFiODhkZDYyNTM2NzJlMTBkZDid1XeT: --dhchap-ctrl-secret DHHC-1:02:ZmNhNzQ3MzYyZGVlMjAzOWMwZTFhOGJlN2IyNzI2ODEwNDZjMTU5YzdiYTIwYTljuRKexg==: 00:20:10.285 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:10.285 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:10.285 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:10.285 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.285 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.543 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.543 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:10.543 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:10.543 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:10.801 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:20:10.801 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:10.801 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:10.801 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:10.801 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:10.801 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:10.801 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:10.801 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.801 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.801 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.801 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:10.801 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
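The repeating pattern in this trace is driven by the loop at target/auth.sh@119-123. A minimal bash sketch, reconstructed from the xtrace lines above; the dhgroups, keys and ckeys arrays are populated earlier in the script and are not shown here:

for dhgroup in "${dhgroups[@]}"; do                 # null, ffdhe2048, ffdhe3072, ... as seen in this trace
    for keyid in "${!keys[@]}"; do                  # key indices 0..3
        # restrict the host to a single digest/dhgroup combination for this pass
        hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
        # add the host on the target, attach from the host, verify, tear down
        connect_authenticate sha512 "$dhgroup" "$keyid"
    done
done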
00:20:10.801 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:11.059 00:20:11.059 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:11.059 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:11.059 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:11.317 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.317 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:11.317 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.317 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.317 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.317 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:11.317 { 00:20:11.317 "cntlid": 101, 00:20:11.317 "qid": 0, 00:20:11.317 "state": "enabled", 00:20:11.317 "thread": "nvmf_tgt_poll_group_000", 00:20:11.317 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:11.317 "listen_address": { 00:20:11.317 "trtype": "TCP", 00:20:11.317 "adrfam": "IPv4", 00:20:11.317 "traddr": "10.0.0.2", 00:20:11.317 "trsvcid": "4420" 00:20:11.317 }, 00:20:11.317 "peer_address": { 00:20:11.317 "trtype": "TCP", 00:20:11.317 "adrfam": "IPv4", 00:20:11.317 "traddr": "10.0.0.1", 00:20:11.317 "trsvcid": "48914" 00:20:11.317 }, 00:20:11.317 "auth": { 00:20:11.317 "state": "completed", 00:20:11.317 "digest": "sha512", 00:20:11.317 "dhgroup": "null" 00:20:11.317 } 00:20:11.317 } 00:20:11.317 ]' 00:20:11.317 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:11.317 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:11.318 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:11.318 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:11.318 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:11.575 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:11.575 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:11.575 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:11.833 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:YWU0NDMwZDJiOTU1OTQ5OTEwNTNjNDg1ZTc1YWY0NzA0MTA2OThkYzBlNTUzZGQ5V5MaRg==: --dhchap-ctrl-secret DHHC-1:01:MTAwMmE1NzE3MjQyYzAxNjZlZTg0MjJlYzI0ZjUyODYKYUfB: 00:20:11.833 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YWU0NDMwZDJiOTU1OTQ5OTEwNTNjNDg1ZTc1YWY0NzA0MTA2OThkYzBlNTUzZGQ5V5MaRg==: --dhchap-ctrl-secret DHHC-1:01:MTAwMmE1NzE3MjQyYzAxNjZlZTg0MjJlYzI0ZjUyODYKYUfB: 00:20:12.771 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:12.771 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:12.771 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:12.771 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.771 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.771 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.771 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:12.771 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:12.771 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:13.029 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:20:13.029 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:13.029 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:13.029 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:13.029 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:13.029 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:13.029 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:13.029 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.029 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.029 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.029 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:13.029 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:13.029 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:13.594 00:20:13.594 09:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:13.594 09:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:13.594 09:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:13.852 09:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.852 09:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:13.852 09:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.852 09:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.852 09:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.852 09:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:13.852 { 00:20:13.852 "cntlid": 103, 00:20:13.852 "qid": 0, 00:20:13.852 "state": "enabled", 00:20:13.852 "thread": "nvmf_tgt_poll_group_000", 00:20:13.852 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:13.852 "listen_address": { 00:20:13.852 "trtype": "TCP", 00:20:13.852 "adrfam": "IPv4", 00:20:13.852 "traddr": "10.0.0.2", 00:20:13.852 "trsvcid": "4420" 00:20:13.852 }, 00:20:13.852 "peer_address": { 00:20:13.852 "trtype": "TCP", 00:20:13.852 "adrfam": "IPv4", 00:20:13.853 "traddr": "10.0.0.1", 00:20:13.853 "trsvcid": "48950" 00:20:13.853 }, 00:20:13.853 "auth": { 00:20:13.853 "state": "completed", 00:20:13.853 "digest": "sha512", 00:20:13.853 "dhgroup": "null" 00:20:13.853 } 00:20:13.853 } 00:20:13.853 ]' 00:20:13.853 09:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:13.853 09:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:13.853 09:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:13.853 09:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:13.853 09:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:13.853 09:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:13.853 09:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:13.853 09:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:14.110 09:20:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWY3YzFmODRkOGI4YjFlZDA2ZTY2ZWI3Y2M1ZmY0YmE2MTA4YjA0ZTYyZjRiOWQ5ZDhjMjE1NmZjZDczYjQxNQ5mL7E=: 00:20:14.110 09:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OWY3YzFmODRkOGI4YjFlZDA2ZTY2ZWI3Y2M1ZmY0YmE2MTA4YjA0ZTYyZjRiOWQ5ZDhjMjE1NmZjZDczYjQxNQ5mL7E=: 00:20:15.039 09:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:15.039 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:15.039 09:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:15.039 09:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.039 09:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.039 09:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.039 09:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:15.039 09:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:15.039 09:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:15.039 09:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:15.297 09:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:20:15.297 09:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:15.297 09:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:15.297 09:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:15.297 09:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:15.297 09:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:15.297 09:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:15.297 09:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.297 09:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.555 09:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.555 09:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
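Within connect_authenticate, each key is wired up on both sides before the attach is attempted. A sketch of that pairing for the iteration logged above (key0 with ffdhe2048); rpc_cmd talks to the target's RPC socket while the host stack listens on /var/tmp/host.sock, and the full workspace path to scripts/rpc.py from the trace is abbreviated here:

# target side: authorize the host NQN on the subsystem with DH-CHAP key0/ckey0
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
# host side: attach a controller through the host's RPC socket with the same key pair
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
    -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0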
00:20:15.555 09:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:15.555 09:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:15.813 00:20:15.813 09:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:15.813 09:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:15.813 09:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:16.071 09:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.071 09:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:16.071 09:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.071 09:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.071 09:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.071 09:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:16.071 { 00:20:16.071 "cntlid": 105, 00:20:16.071 "qid": 0, 00:20:16.071 "state": "enabled", 00:20:16.071 "thread": "nvmf_tgt_poll_group_000", 00:20:16.071 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:16.071 "listen_address": { 00:20:16.071 "trtype": "TCP", 00:20:16.071 "adrfam": "IPv4", 00:20:16.071 "traddr": "10.0.0.2", 00:20:16.071 "trsvcid": "4420" 00:20:16.071 }, 00:20:16.071 "peer_address": { 00:20:16.071 "trtype": "TCP", 00:20:16.071 "adrfam": "IPv4", 00:20:16.071 "traddr": "10.0.0.1", 00:20:16.071 "trsvcid": "48988" 00:20:16.071 }, 00:20:16.071 "auth": { 00:20:16.071 "state": "completed", 00:20:16.071 "digest": "sha512", 00:20:16.071 "dhgroup": "ffdhe2048" 00:20:16.071 } 00:20:16.071 } 00:20:16.071 ]' 00:20:16.071 09:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:16.071 09:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:16.071 09:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:16.071 09:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:16.071 09:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:16.328 09:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:16.328 09:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:16.328 09:20:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:16.587 09:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDNhZDkxMGU1Mzk5N2JmNjFhMWRkNzA4ZjAzMGE4ZmExZjQ4YzgxMmQwZTdlYjgylIkWIw==: --dhchap-ctrl-secret DHHC-1:03:NmQwNmVhM2YyYjZkNWQ3NmQzYTI2OTY4YzFkZTE1OWU0ZDhkNTZlZmQxODYxZGQxODEyMjdlOTNmNGNiM2YwNX5NDuE=: 00:20:16.587 09:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NDNhZDkxMGU1Mzk5N2JmNjFhMWRkNzA4ZjAzMGE4ZmExZjQ4YzgxMmQwZTdlYjgylIkWIw==: --dhchap-ctrl-secret DHHC-1:03:NmQwNmVhM2YyYjZkNWQ3NmQzYTI2OTY4YzFkZTE1OWU0ZDhkNTZlZmQxODYxZGQxODEyMjdlOTNmNGNiM2YwNX5NDuE=: 00:20:17.520 09:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:17.520 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:17.520 09:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:17.520 09:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.520 09:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.520 09:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.520 09:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:17.520 09:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:17.520 09:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:17.778 09:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:20:17.778 09:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:17.778 09:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:17.778 09:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:17.778 09:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:17.778 09:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:17.778 09:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:17.778 09:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.778 09:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:17.778 09:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.778 09:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:17.778 09:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:17.778 09:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:18.036 00:20:18.036 09:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:18.036 09:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:18.036 09:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:18.294 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.294 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:18.294 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.294 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.294 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.294 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:18.294 { 00:20:18.294 "cntlid": 107, 00:20:18.294 "qid": 0, 00:20:18.294 "state": "enabled", 00:20:18.294 "thread": "nvmf_tgt_poll_group_000", 00:20:18.294 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:18.294 "listen_address": { 00:20:18.294 "trtype": "TCP", 00:20:18.294 "adrfam": "IPv4", 00:20:18.294 "traddr": "10.0.0.2", 00:20:18.294 "trsvcid": "4420" 00:20:18.294 }, 00:20:18.294 "peer_address": { 00:20:18.294 "trtype": "TCP", 00:20:18.294 "adrfam": "IPv4", 00:20:18.294 "traddr": "10.0.0.1", 00:20:18.294 "trsvcid": "49022" 00:20:18.294 }, 00:20:18.294 "auth": { 00:20:18.294 "state": "completed", 00:20:18.294 "digest": "sha512", 00:20:18.294 "dhgroup": "ffdhe2048" 00:20:18.294 } 00:20:18.294 } 00:20:18.294 ]' 00:20:18.294 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:18.550 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:18.550 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:18.550 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:18.550 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:20:18.550 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:18.550 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:18.550 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:18.806 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTQ2NDI4ODQwZTBhYjFiODhkZDYyNTM2NzJlMTBkZDid1XeT: --dhchap-ctrl-secret DHHC-1:02:ZmNhNzQ3MzYyZGVlMjAzOWMwZTFhOGJlN2IyNzI2ODEwNDZjMTU5YzdiYTIwYTljuRKexg==: 00:20:18.806 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:OTQ2NDI4ODQwZTBhYjFiODhkZDYyNTM2NzJlMTBkZDid1XeT: --dhchap-ctrl-secret DHHC-1:02:ZmNhNzQ3MzYyZGVlMjAzOWMwZTFhOGJlN2IyNzI2ODEwNDZjMTU5YzdiYTIwYTljuRKexg==: 00:20:19.740 09:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:19.740 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:19.740 09:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:19.740 09:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.740 09:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.740 09:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.740 09:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:19.740 09:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:19.740 09:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:19.998 09:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:20:19.998 09:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:19.998 09:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:19.998 09:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:19.998 09:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:19.998 09:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:19.998 09:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
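The same credentials are also exercised in-band through nvme-cli, which is what the nvme_connect / nvme disconnect pairs in this trace correspond to. A sketch with the DHHC-1 secrets replaced by placeholders; the real values are the base64 blobs logged above:

# connect to the target with DH-HMAC-CHAP host and controller secrets
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 \
    --dhchap-secret 'DHHC-1:01:<host secret>:' \
    --dhchap-ctrl-secret 'DHHC-1:02:<controller secret>:'
# drop the connection again once authentication has been verified
nvme disconnect -n nqn.2024-03.io.spdk:cnode0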
00:20:19.999 09:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.999 09:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.999 09:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.999 09:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:19.999 09:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:19.999 09:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:20.565 00:20:20.565 09:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:20.565 09:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:20.565 09:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:20.823 09:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.823 09:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:20.823 09:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.823 09:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.823 09:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.823 09:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:20.823 { 00:20:20.823 "cntlid": 109, 00:20:20.823 "qid": 0, 00:20:20.823 "state": "enabled", 00:20:20.823 "thread": "nvmf_tgt_poll_group_000", 00:20:20.823 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:20.823 "listen_address": { 00:20:20.823 "trtype": "TCP", 00:20:20.823 "adrfam": "IPv4", 00:20:20.823 "traddr": "10.0.0.2", 00:20:20.823 "trsvcid": "4420" 00:20:20.823 }, 00:20:20.823 "peer_address": { 00:20:20.823 "trtype": "TCP", 00:20:20.823 "adrfam": "IPv4", 00:20:20.823 "traddr": "10.0.0.1", 00:20:20.823 "trsvcid": "50522" 00:20:20.823 }, 00:20:20.823 "auth": { 00:20:20.823 "state": "completed", 00:20:20.823 "digest": "sha512", 00:20:20.823 "dhgroup": "ffdhe2048" 00:20:20.823 } 00:20:20.823 } 00:20:20.823 ]' 00:20:20.823 09:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:20.823 09:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:20.823 09:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:20.823 09:20:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:20.823 09:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:20.823 09:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:20.823 09:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:20.823 09:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:21.388 09:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWU0NDMwZDJiOTU1OTQ5OTEwNTNjNDg1ZTc1YWY0NzA0MTA2OThkYzBlNTUzZGQ5V5MaRg==: --dhchap-ctrl-secret DHHC-1:01:MTAwMmE1NzE3MjQyYzAxNjZlZTg0MjJlYzI0ZjUyODYKYUfB: 00:20:21.388 09:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YWU0NDMwZDJiOTU1OTQ5OTEwNTNjNDg1ZTc1YWY0NzA0MTA2OThkYzBlNTUzZGQ5V5MaRg==: --dhchap-ctrl-secret DHHC-1:01:MTAwMmE1NzE3MjQyYzAxNjZlZTg0MjJlYzI0ZjUyODYKYUfB: 00:20:22.323 09:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:22.323 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:22.323 09:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:22.323 09:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.323 09:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.323 09:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.323 09:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:22.323 09:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:22.323 09:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:22.580 09:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:20:22.580 09:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:22.580 09:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:22.580 09:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:22.580 09:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:22.580 09:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:22.580 09:20:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:22.580 09:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.580 09:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.580 09:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.581 09:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:22.581 09:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:22.581 09:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:22.838 00:20:22.838 09:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:22.838 09:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:22.838 09:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:23.096 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.096 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:23.096 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.096 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.096 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.096 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:23.096 { 00:20:23.096 "cntlid": 111, 00:20:23.096 "qid": 0, 00:20:23.096 "state": "enabled", 00:20:23.096 "thread": "nvmf_tgt_poll_group_000", 00:20:23.096 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:23.096 "listen_address": { 00:20:23.096 "trtype": "TCP", 00:20:23.096 "adrfam": "IPv4", 00:20:23.096 "traddr": "10.0.0.2", 00:20:23.096 "trsvcid": "4420" 00:20:23.096 }, 00:20:23.096 "peer_address": { 00:20:23.096 "trtype": "TCP", 00:20:23.096 "adrfam": "IPv4", 00:20:23.096 "traddr": "10.0.0.1", 00:20:23.096 "trsvcid": "50560" 00:20:23.096 }, 00:20:23.096 "auth": { 00:20:23.096 "state": "completed", 00:20:23.096 "digest": "sha512", 00:20:23.096 "dhgroup": "ffdhe2048" 00:20:23.096 } 00:20:23.096 } 00:20:23.096 ]' 00:20:23.096 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:23.096 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:23.096 
09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:23.096 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:23.096 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:23.354 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:23.354 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:23.354 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:23.612 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWY3YzFmODRkOGI4YjFlZDA2ZTY2ZWI3Y2M1ZmY0YmE2MTA4YjA0ZTYyZjRiOWQ5ZDhjMjE1NmZjZDczYjQxNQ5mL7E=: 00:20:23.612 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OWY3YzFmODRkOGI4YjFlZDA2ZTY2ZWI3Y2M1ZmY0YmE2MTA4YjA0ZTYyZjRiOWQ5ZDhjMjE1NmZjZDczYjQxNQ5mL7E=: 00:20:24.544 09:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:24.544 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:24.544 09:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:24.544 09:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.544 09:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.544 09:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.544 09:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:24.544 09:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:24.544 09:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:24.544 09:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:24.802 09:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:20:24.802 09:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:24.802 09:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:24.802 09:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:24.802 09:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:24.802 09:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:24.802 09:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:24.802 09:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.802 09:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.802 09:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.802 09:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:24.802 09:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:24.802 09:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:25.368 00:20:25.368 09:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:25.368 09:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:25.368 09:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:25.368 09:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.625 09:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:25.625 09:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.625 09:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.625 09:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.625 09:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:25.625 { 00:20:25.625 "cntlid": 113, 00:20:25.625 "qid": 0, 00:20:25.625 "state": "enabled", 00:20:25.625 "thread": "nvmf_tgt_poll_group_000", 00:20:25.625 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:25.625 "listen_address": { 00:20:25.625 "trtype": "TCP", 00:20:25.625 "adrfam": "IPv4", 00:20:25.625 "traddr": "10.0.0.2", 00:20:25.625 "trsvcid": "4420" 00:20:25.625 }, 00:20:25.625 "peer_address": { 00:20:25.625 "trtype": "TCP", 00:20:25.625 "adrfam": "IPv4", 00:20:25.625 "traddr": "10.0.0.1", 00:20:25.625 "trsvcid": "50598" 00:20:25.625 }, 00:20:25.625 "auth": { 00:20:25.625 "state": "completed", 00:20:25.625 "digest": "sha512", 00:20:25.625 "dhgroup": "ffdhe3072" 00:20:25.625 } 00:20:25.625 } 00:20:25.625 ]' 00:20:25.625 09:20:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:25.625 09:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:25.625 09:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:25.625 09:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:25.625 09:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:25.626 09:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:25.626 09:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:25.626 09:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:25.883 09:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDNhZDkxMGU1Mzk5N2JmNjFhMWRkNzA4ZjAzMGE4ZmExZjQ4YzgxMmQwZTdlYjgylIkWIw==: --dhchap-ctrl-secret DHHC-1:03:NmQwNmVhM2YyYjZkNWQ3NmQzYTI2OTY4YzFkZTE1OWU0ZDhkNTZlZmQxODYxZGQxODEyMjdlOTNmNGNiM2YwNX5NDuE=: 00:20:25.883 09:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NDNhZDkxMGU1Mzk5N2JmNjFhMWRkNzA4ZjAzMGE4ZmExZjQ4YzgxMmQwZTdlYjgylIkWIw==: --dhchap-ctrl-secret DHHC-1:03:NmQwNmVhM2YyYjZkNWQ3NmQzYTI2OTY4YzFkZTE1OWU0ZDhkNTZlZmQxODYxZGQxODEyMjdlOTNmNGNiM2YwNX5NDuE=: 00:20:26.817 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:26.817 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:26.817 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:26.817 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.817 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.817 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.817 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:26.817 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:26.817 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:27.383 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:20:27.383 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:27.383 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:20:27.383 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:27.383 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:27.383 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:27.383 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:27.383 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.383 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.383 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.383 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:27.383 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:27.383 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:27.640 00:20:27.640 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:27.640 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:27.641 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:27.898 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.898 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:27.898 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.898 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.898 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.898 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:27.898 { 00:20:27.898 "cntlid": 115, 00:20:27.898 "qid": 0, 00:20:27.898 "state": "enabled", 00:20:27.898 "thread": "nvmf_tgt_poll_group_000", 00:20:27.898 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:27.898 "listen_address": { 00:20:27.898 "trtype": "TCP", 00:20:27.898 "adrfam": "IPv4", 00:20:27.898 "traddr": "10.0.0.2", 00:20:27.898 "trsvcid": "4420" 00:20:27.898 }, 00:20:27.898 "peer_address": { 00:20:27.898 "trtype": "TCP", 00:20:27.898 "adrfam": "IPv4", 
00:20:27.898 "traddr": "10.0.0.1", 00:20:27.898 "trsvcid": "50616" 00:20:27.898 }, 00:20:27.898 "auth": { 00:20:27.898 "state": "completed", 00:20:27.898 "digest": "sha512", 00:20:27.899 "dhgroup": "ffdhe3072" 00:20:27.899 } 00:20:27.899 } 00:20:27.899 ]' 00:20:27.899 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:27.899 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:27.899 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:27.899 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:27.899 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:28.156 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:28.156 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:28.156 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:28.415 09:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTQ2NDI4ODQwZTBhYjFiODhkZDYyNTM2NzJlMTBkZDid1XeT: --dhchap-ctrl-secret DHHC-1:02:ZmNhNzQ3MzYyZGVlMjAzOWMwZTFhOGJlN2IyNzI2ODEwNDZjMTU5YzdiYTIwYTljuRKexg==: 00:20:28.415 09:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:OTQ2NDI4ODQwZTBhYjFiODhkZDYyNTM2NzJlMTBkZDid1XeT: --dhchap-ctrl-secret DHHC-1:02:ZmNhNzQ3MzYyZGVlMjAzOWMwZTFhOGJlN2IyNzI2ODEwNDZjMTU5YzdiYTIwYTljuRKexg==: 00:20:29.349 09:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:29.349 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:29.349 09:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:29.349 09:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.349 09:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.349 09:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.349 09:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:29.349 09:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:29.349 09:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:29.607 09:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
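After each attach, connect_authenticate checks that the qpair actually negotiated the expected parameters before tearing everything down. A sketch of that verification for the ffdhe3072 iteration starting here, using the RPCs, the qpairs variable, and the jq filters that appear throughout this trace:

# confirm the host-side controller came up
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
# inspect the target's qpair and the negotiated auth parameters
qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
jq -r '.[0].auth.digest'  <<< "$qpairs"   # expect: sha512
jq -r '.[0].auth.dhgroup' <<< "$qpairs"   # expect: ffdhe3072 for this iteration
jq -r '.[0].auth.state'   <<< "$qpairs"   # expect: completed
# detach so the next key/dhgroup combination starts from a clean state
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0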
00:20:29.607 09:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:29.607 09:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:29.607 09:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:29.607 09:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:29.607 09:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:29.607 09:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:29.607 09:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.607 09:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.607 09:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.607 09:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:29.607 09:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:29.607 09:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:29.882 00:20:29.882 09:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:29.882 09:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:29.882 09:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:30.180 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.180 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:30.180 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.180 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.180 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.180 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:30.180 { 00:20:30.180 "cntlid": 117, 00:20:30.180 "qid": 0, 00:20:30.180 "state": "enabled", 00:20:30.180 "thread": "nvmf_tgt_poll_group_000", 00:20:30.180 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:30.180 "listen_address": { 00:20:30.180 "trtype": "TCP", 
00:20:30.180 "adrfam": "IPv4", 00:20:30.180 "traddr": "10.0.0.2", 00:20:30.180 "trsvcid": "4420" 00:20:30.180 }, 00:20:30.180 "peer_address": { 00:20:30.180 "trtype": "TCP", 00:20:30.180 "adrfam": "IPv4", 00:20:30.180 "traddr": "10.0.0.1", 00:20:30.180 "trsvcid": "50636" 00:20:30.180 }, 00:20:30.180 "auth": { 00:20:30.180 "state": "completed", 00:20:30.180 "digest": "sha512", 00:20:30.180 "dhgroup": "ffdhe3072" 00:20:30.180 } 00:20:30.180 } 00:20:30.180 ]' 00:20:30.180 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:30.180 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:30.180 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:30.180 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:30.180 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:30.439 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:30.439 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:30.439 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:30.697 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWU0NDMwZDJiOTU1OTQ5OTEwNTNjNDg1ZTc1YWY0NzA0MTA2OThkYzBlNTUzZGQ5V5MaRg==: --dhchap-ctrl-secret DHHC-1:01:MTAwMmE1NzE3MjQyYzAxNjZlZTg0MjJlYzI0ZjUyODYKYUfB: 00:20:30.697 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YWU0NDMwZDJiOTU1OTQ5OTEwNTNjNDg1ZTc1YWY0NzA0MTA2OThkYzBlNTUzZGQ5V5MaRg==: --dhchap-ctrl-secret DHHC-1:01:MTAwMmE1NzE3MjQyYzAxNjZlZTg0MjJlYzI0ZjUyODYKYUfB: 00:20:31.630 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:31.630 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:31.630 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:31.630 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.630 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.630 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.630 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:31.630 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:31.630 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:31.889 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:20:31.889 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:31.889 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:31.889 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:31.889 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:31.889 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:31.889 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:31.889 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.889 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.889 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.889 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:31.889 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:31.889 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:32.455 00:20:32.455 09:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:32.455 09:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:32.455 09:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:32.455 09:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.455 09:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:32.455 09:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.455 09:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.455 09:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.455 09:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:32.455 { 00:20:32.455 "cntlid": 119, 00:20:32.455 "qid": 0, 00:20:32.455 "state": "enabled", 00:20:32.455 "thread": "nvmf_tgt_poll_group_000", 00:20:32.455 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:32.455 "listen_address": { 00:20:32.455 "trtype": "TCP", 00:20:32.455 "adrfam": "IPv4", 00:20:32.455 "traddr": "10.0.0.2", 00:20:32.455 "trsvcid": "4420" 00:20:32.455 }, 00:20:32.455 "peer_address": { 00:20:32.455 "trtype": "TCP", 00:20:32.455 "adrfam": "IPv4", 00:20:32.455 "traddr": "10.0.0.1", 00:20:32.455 "trsvcid": "41986" 00:20:32.455 }, 00:20:32.455 "auth": { 00:20:32.455 "state": "completed", 00:20:32.455 "digest": "sha512", 00:20:32.455 "dhgroup": "ffdhe3072" 00:20:32.455 } 00:20:32.455 } 00:20:32.455 ]' 00:20:32.713 09:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:32.713 09:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:32.713 09:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:32.713 09:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:32.713 09:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:32.713 09:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:32.713 09:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:32.713 09:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:32.971 09:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWY3YzFmODRkOGI4YjFlZDA2ZTY2ZWI3Y2M1ZmY0YmE2MTA4YjA0ZTYyZjRiOWQ5ZDhjMjE1NmZjZDczYjQxNQ5mL7E=: 00:20:32.971 09:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OWY3YzFmODRkOGI4YjFlZDA2ZTY2ZWI3Y2M1ZmY0YmE2MTA4YjA0ZTYyZjRiOWQ5ZDhjMjE1NmZjZDczYjQxNQ5mL7E=: 00:20:33.903 09:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:33.903 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:33.903 09:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:33.903 09:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.903 09:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.903 09:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.903 09:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:33.903 09:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:33.903 09:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:33.903 09:20:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:34.161 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:20:34.161 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:34.161 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:34.161 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:34.161 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:34.161 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:34.161 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:34.161 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.161 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.161 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.161 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:34.161 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:34.161 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:34.726 00:20:34.726 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:34.726 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:34.726 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:34.984 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.984 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:34.984 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.984 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.984 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.984 09:20:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:34.984 { 00:20:34.984 "cntlid": 121, 00:20:34.984 "qid": 0, 00:20:34.984 "state": "enabled", 00:20:34.984 "thread": "nvmf_tgt_poll_group_000", 00:20:34.984 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:34.984 "listen_address": { 00:20:34.984 "trtype": "TCP", 00:20:34.984 "adrfam": "IPv4", 00:20:34.984 "traddr": "10.0.0.2", 00:20:34.984 "trsvcid": "4420" 00:20:34.984 }, 00:20:34.984 "peer_address": { 00:20:34.984 "trtype": "TCP", 00:20:34.984 "adrfam": "IPv4", 00:20:34.984 "traddr": "10.0.0.1", 00:20:34.984 "trsvcid": "42028" 00:20:34.984 }, 00:20:34.984 "auth": { 00:20:34.984 "state": "completed", 00:20:34.984 "digest": "sha512", 00:20:34.984 "dhgroup": "ffdhe4096" 00:20:34.984 } 00:20:34.984 } 00:20:34.984 ]' 00:20:34.984 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:34.984 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:34.984 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:34.984 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:34.984 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:34.984 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:34.984 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:34.984 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:35.549 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDNhZDkxMGU1Mzk5N2JmNjFhMWRkNzA4ZjAzMGE4ZmExZjQ4YzgxMmQwZTdlYjgylIkWIw==: --dhchap-ctrl-secret DHHC-1:03:NmQwNmVhM2YyYjZkNWQ3NmQzYTI2OTY4YzFkZTE1OWU0ZDhkNTZlZmQxODYxZGQxODEyMjdlOTNmNGNiM2YwNX5NDuE=: 00:20:35.549 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NDNhZDkxMGU1Mzk5N2JmNjFhMWRkNzA4ZjAzMGE4ZmExZjQ4YzgxMmQwZTdlYjgylIkWIw==: --dhchap-ctrl-secret DHHC-1:03:NmQwNmVhM2YyYjZkNWQ3NmQzYTI2OTY4YzFkZTE1OWU0ZDhkNTZlZmQxODYxZGQxODEyMjdlOTNmNGNiM2YwNX5NDuE=: 00:20:36.481 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:36.481 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:36.481 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:36.481 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.481 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.481 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
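The nvme-cli round trips interleaved with those RPCs follow an equally fixed per-key pattern: connect with the plaintext DH-HMAC-CHAP secrets, expect a clean disconnect, drop the host from the subsystem, then re-arm the host RPC with the next digest/dhgroup via bdev_nvme_set_options. A minimal sketch reusing the rpc_cmd/hostrpc helpers from the previous sketch; the host ID, NQNs and address are copied from the log, the secrets are elided here (the real values are the DHHC-1:xx: strings shown above), and the surrounding per-dhgroup loop is an assumption.

  hostid=5b23e107-7094-e311-b1cb-001e67a97d55
  nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 \
      -q "nqn.2014-08.org.nvmexpress:uuid:$hostid" --hostid "$hostid" -l 0 \
      --dhchap-secret 'DHHC-1:01:...' --dhchap-ctrl-secret 'DHHC-1:02:...'
  nvme disconnect -n "$subnqn"   # expected output: "NQN:... disconnected 1 controller(s)"
  rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
  # restrict the host to the digest/dhgroup under test before the next keyid
  hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096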
00:20:36.481 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:36.481 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:36.481 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:36.739 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:20:36.739 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:36.739 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:36.739 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:36.739 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:36.739 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:36.739 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:36.739 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.739 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.739 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.739 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:36.739 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:36.739 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:36.997 00:20:36.997 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:36.997 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:36.997 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:37.255 09:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.255 09:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:37.255 09:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.255 09:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.255 09:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.255 09:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:37.255 { 00:20:37.255 "cntlid": 123, 00:20:37.255 "qid": 0, 00:20:37.255 "state": "enabled", 00:20:37.255 "thread": "nvmf_tgt_poll_group_000", 00:20:37.255 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:37.255 "listen_address": { 00:20:37.255 "trtype": "TCP", 00:20:37.255 "adrfam": "IPv4", 00:20:37.255 "traddr": "10.0.0.2", 00:20:37.255 "trsvcid": "4420" 00:20:37.255 }, 00:20:37.255 "peer_address": { 00:20:37.255 "trtype": "TCP", 00:20:37.255 "adrfam": "IPv4", 00:20:37.255 "traddr": "10.0.0.1", 00:20:37.255 "trsvcid": "42062" 00:20:37.255 }, 00:20:37.255 "auth": { 00:20:37.255 "state": "completed", 00:20:37.255 "digest": "sha512", 00:20:37.255 "dhgroup": "ffdhe4096" 00:20:37.255 } 00:20:37.255 } 00:20:37.255 ]' 00:20:37.255 09:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:37.255 09:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:37.255 09:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:37.513 09:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:37.513 09:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:37.513 09:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:37.513 09:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:37.513 09:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:37.770 09:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTQ2NDI4ODQwZTBhYjFiODhkZDYyNTM2NzJlMTBkZDid1XeT: --dhchap-ctrl-secret DHHC-1:02:ZmNhNzQ3MzYyZGVlMjAzOWMwZTFhOGJlN2IyNzI2ODEwNDZjMTU5YzdiYTIwYTljuRKexg==: 00:20:37.770 09:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:OTQ2NDI4ODQwZTBhYjFiODhkZDYyNTM2NzJlMTBkZDid1XeT: --dhchap-ctrl-secret DHHC-1:02:ZmNhNzQ3MzYyZGVlMjAzOWMwZTFhOGJlN2IyNzI2ODEwNDZjMTU5YzdiYTIwYTljuRKexg==: 00:20:38.703 09:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:38.703 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:38.703 09:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:38.703 09:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.703 09:20:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.703 09:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.703 09:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:38.703 09:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:38.703 09:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:38.960 09:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:20:38.960 09:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:38.960 09:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:38.960 09:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:38.960 09:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:38.960 09:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:38.960 09:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:38.960 09:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.960 09:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.960 09:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.960 09:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:38.960 09:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:38.960 09:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:39.525 00:20:39.525 09:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:39.525 09:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:39.525 09:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:39.783 09:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.783 09:20:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:39.783 09:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.783 09:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.783 09:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.783 09:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:39.783 { 00:20:39.783 "cntlid": 125, 00:20:39.783 "qid": 0, 00:20:39.783 "state": "enabled", 00:20:39.783 "thread": "nvmf_tgt_poll_group_000", 00:20:39.783 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:39.783 "listen_address": { 00:20:39.783 "trtype": "TCP", 00:20:39.783 "adrfam": "IPv4", 00:20:39.783 "traddr": "10.0.0.2", 00:20:39.783 "trsvcid": "4420" 00:20:39.783 }, 00:20:39.783 "peer_address": { 00:20:39.783 "trtype": "TCP", 00:20:39.783 "adrfam": "IPv4", 00:20:39.783 "traddr": "10.0.0.1", 00:20:39.783 "trsvcid": "42086" 00:20:39.783 }, 00:20:39.783 "auth": { 00:20:39.783 "state": "completed", 00:20:39.783 "digest": "sha512", 00:20:39.783 "dhgroup": "ffdhe4096" 00:20:39.783 } 00:20:39.783 } 00:20:39.783 ]' 00:20:39.783 09:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:39.783 09:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:39.783 09:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:39.783 09:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:39.783 09:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:39.783 09:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:39.783 09:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:39.783 09:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:40.041 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWU0NDMwZDJiOTU1OTQ5OTEwNTNjNDg1ZTc1YWY0NzA0MTA2OThkYzBlNTUzZGQ5V5MaRg==: --dhchap-ctrl-secret DHHC-1:01:MTAwMmE1NzE3MjQyYzAxNjZlZTg0MjJlYzI0ZjUyODYKYUfB: 00:20:40.041 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YWU0NDMwZDJiOTU1OTQ5OTEwNTNjNDg1ZTc1YWY0NzA0MTA2OThkYzBlNTUzZGQ5V5MaRg==: --dhchap-ctrl-secret DHHC-1:01:MTAwMmE1NzE3MjQyYzAxNjZlZTg0MjJlYzI0ZjUyODYKYUfB: 00:20:41.414 09:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:41.414 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:41.414 09:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:41.414 09:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.414 09:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.414 09:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.414 09:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:41.414 09:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:41.414 09:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:41.414 09:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:20:41.414 09:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:41.414 09:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:41.414 09:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:41.414 09:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:41.414 09:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:41.414 09:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:41.414 09:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.414 09:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.414 09:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.414 09:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:41.414 09:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:41.414 09:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:41.980 00:20:41.980 09:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:41.980 09:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:41.980 09:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:42.238 09:20:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.238 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:42.238 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.238 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.238 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.238 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:42.238 { 00:20:42.238 "cntlid": 127, 00:20:42.238 "qid": 0, 00:20:42.238 "state": "enabled", 00:20:42.238 "thread": "nvmf_tgt_poll_group_000", 00:20:42.238 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:42.238 "listen_address": { 00:20:42.238 "trtype": "TCP", 00:20:42.238 "adrfam": "IPv4", 00:20:42.238 "traddr": "10.0.0.2", 00:20:42.238 "trsvcid": "4420" 00:20:42.238 }, 00:20:42.238 "peer_address": { 00:20:42.238 "trtype": "TCP", 00:20:42.238 "adrfam": "IPv4", 00:20:42.238 "traddr": "10.0.0.1", 00:20:42.238 "trsvcid": "33658" 00:20:42.238 }, 00:20:42.238 "auth": { 00:20:42.238 "state": "completed", 00:20:42.238 "digest": "sha512", 00:20:42.238 "dhgroup": "ffdhe4096" 00:20:42.238 } 00:20:42.238 } 00:20:42.238 ]' 00:20:42.238 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:42.238 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:42.238 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:42.238 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:42.238 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:42.238 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:42.238 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:42.238 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:42.496 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWY3YzFmODRkOGI4YjFlZDA2ZTY2ZWI3Y2M1ZmY0YmE2MTA4YjA0ZTYyZjRiOWQ5ZDhjMjE1NmZjZDczYjQxNQ5mL7E=: 00:20:42.496 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OWY3YzFmODRkOGI4YjFlZDA2ZTY2ZWI3Y2M1ZmY0YmE2MTA4YjA0ZTYyZjRiOWQ5ZDhjMjE1NmZjZDczYjQxNQ5mL7E=: 00:20:43.430 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:43.430 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:43.430 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:43.430 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.430 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.430 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.430 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:43.430 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:43.430 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:43.430 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:43.690 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:20:43.690 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:43.690 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:43.690 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:43.690 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:43.690 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:43.690 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:43.690 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.690 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.948 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.948 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:43.948 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:43.948 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:44.514 00:20:44.514 09:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:44.514 09:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:44.514 
09:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:44.771 09:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.771 09:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:44.771 09:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.771 09:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.771 09:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.771 09:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:44.771 { 00:20:44.771 "cntlid": 129, 00:20:44.771 "qid": 0, 00:20:44.771 "state": "enabled", 00:20:44.771 "thread": "nvmf_tgt_poll_group_000", 00:20:44.771 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:44.771 "listen_address": { 00:20:44.771 "trtype": "TCP", 00:20:44.771 "adrfam": "IPv4", 00:20:44.771 "traddr": "10.0.0.2", 00:20:44.771 "trsvcid": "4420" 00:20:44.771 }, 00:20:44.771 "peer_address": { 00:20:44.771 "trtype": "TCP", 00:20:44.771 "adrfam": "IPv4", 00:20:44.771 "traddr": "10.0.0.1", 00:20:44.771 "trsvcid": "33694" 00:20:44.771 }, 00:20:44.771 "auth": { 00:20:44.771 "state": "completed", 00:20:44.771 "digest": "sha512", 00:20:44.771 "dhgroup": "ffdhe6144" 00:20:44.771 } 00:20:44.771 } 00:20:44.771 ]' 00:20:44.771 09:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:44.771 09:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:44.771 09:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:44.771 09:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:44.771 09:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:44.771 09:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:44.771 09:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:44.771 09:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:45.029 09:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDNhZDkxMGU1Mzk5N2JmNjFhMWRkNzA4ZjAzMGE4ZmExZjQ4YzgxMmQwZTdlYjgylIkWIw==: --dhchap-ctrl-secret DHHC-1:03:NmQwNmVhM2YyYjZkNWQ3NmQzYTI2OTY4YzFkZTE1OWU0ZDhkNTZlZmQxODYxZGQxODEyMjdlOTNmNGNiM2YwNX5NDuE=: 00:20:45.029 09:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NDNhZDkxMGU1Mzk5N2JmNjFhMWRkNzA4ZjAzMGE4ZmExZjQ4YzgxMmQwZTdlYjgylIkWIw==: --dhchap-ctrl-secret 
DHHC-1:03:NmQwNmVhM2YyYjZkNWQ3NmQzYTI2OTY4YzFkZTE1OWU0ZDhkNTZlZmQxODYxZGQxODEyMjdlOTNmNGNiM2YwNX5NDuE=: 00:20:45.961 09:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:45.961 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:45.961 09:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:45.961 09:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.961 09:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.961 09:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.961 09:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:45.961 09:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:45.961 09:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:46.219 09:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:20:46.219 09:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:46.219 09:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:46.219 09:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:46.219 09:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:46.219 09:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:46.219 09:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:46.219 09:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.219 09:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.219 09:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.219 09:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:46.219 09:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:46.219 09:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:46.784 00:20:46.784 09:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:46.784 09:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:46.784 09:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:47.350 09:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.350 09:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:47.350 09:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.350 09:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.350 09:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.350 09:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:47.350 { 00:20:47.350 "cntlid": 131, 00:20:47.350 "qid": 0, 00:20:47.350 "state": "enabled", 00:20:47.350 "thread": "nvmf_tgt_poll_group_000", 00:20:47.350 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:47.350 "listen_address": { 00:20:47.350 "trtype": "TCP", 00:20:47.350 "adrfam": "IPv4", 00:20:47.350 "traddr": "10.0.0.2", 00:20:47.350 "trsvcid": "4420" 00:20:47.350 }, 00:20:47.350 "peer_address": { 00:20:47.350 "trtype": "TCP", 00:20:47.350 "adrfam": "IPv4", 00:20:47.350 "traddr": "10.0.0.1", 00:20:47.350 "trsvcid": "33712" 00:20:47.350 }, 00:20:47.350 "auth": { 00:20:47.350 "state": "completed", 00:20:47.350 "digest": "sha512", 00:20:47.350 "dhgroup": "ffdhe6144" 00:20:47.350 } 00:20:47.350 } 00:20:47.350 ]' 00:20:47.350 09:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:47.350 09:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:47.350 09:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:47.350 09:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:47.350 09:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:47.350 09:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:47.350 09:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:47.350 09:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:47.608 09:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTQ2NDI4ODQwZTBhYjFiODhkZDYyNTM2NzJlMTBkZDid1XeT: --dhchap-ctrl-secret DHHC-1:02:ZmNhNzQ3MzYyZGVlMjAzOWMwZTFhOGJlN2IyNzI2ODEwNDZjMTU5YzdiYTIwYTljuRKexg==: 00:20:47.608 09:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:OTQ2NDI4ODQwZTBhYjFiODhkZDYyNTM2NzJlMTBkZDid1XeT: --dhchap-ctrl-secret DHHC-1:02:ZmNhNzQ3MzYyZGVlMjAzOWMwZTFhOGJlN2IyNzI2ODEwNDZjMTU5YzdiYTIwYTljuRKexg==: 00:20:48.541 09:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:48.800 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:48.800 09:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:48.800 09:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.800 09:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.800 09:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.800 09:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:48.800 09:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:48.800 09:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:49.058 09:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:20:49.058 09:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:49.058 09:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:49.058 09:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:49.058 09:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:49.058 09:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:49.058 09:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.058 09:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.058 09:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.058 09:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.058 09:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.058 09:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.059 09:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.625 00:20:49.625 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:49.625 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:49.625 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:49.883 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.883 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:49.883 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.883 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.883 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.883 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:49.883 { 00:20:49.883 "cntlid": 133, 00:20:49.883 "qid": 0, 00:20:49.883 "state": "enabled", 00:20:49.883 "thread": "nvmf_tgt_poll_group_000", 00:20:49.883 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:49.883 "listen_address": { 00:20:49.883 "trtype": "TCP", 00:20:49.883 "adrfam": "IPv4", 00:20:49.883 "traddr": "10.0.0.2", 00:20:49.883 "trsvcid": "4420" 00:20:49.883 }, 00:20:49.883 "peer_address": { 00:20:49.883 "trtype": "TCP", 00:20:49.883 "adrfam": "IPv4", 00:20:49.883 "traddr": "10.0.0.1", 00:20:49.883 "trsvcid": "33732" 00:20:49.883 }, 00:20:49.883 "auth": { 00:20:49.883 "state": "completed", 00:20:49.883 "digest": "sha512", 00:20:49.883 "dhgroup": "ffdhe6144" 00:20:49.883 } 00:20:49.883 } 00:20:49.883 ]' 00:20:49.883 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:49.883 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:49.883 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:49.883 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:49.883 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:49.883 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:49.883 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:49.883 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:50.141 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWU0NDMwZDJiOTU1OTQ5OTEwNTNjNDg1ZTc1YWY0NzA0MTA2OThkYzBlNTUzZGQ5V5MaRg==: --dhchap-ctrl-secret 
DHHC-1:01:MTAwMmE1NzE3MjQyYzAxNjZlZTg0MjJlYzI0ZjUyODYKYUfB: 00:20:50.141 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YWU0NDMwZDJiOTU1OTQ5OTEwNTNjNDg1ZTc1YWY0NzA0MTA2OThkYzBlNTUzZGQ5V5MaRg==: --dhchap-ctrl-secret DHHC-1:01:MTAwMmE1NzE3MjQyYzAxNjZlZTg0MjJlYzI0ZjUyODYKYUfB: 00:20:51.074 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:51.074 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:51.074 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:51.074 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.074 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.074 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.074 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:51.074 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:51.074 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:51.638 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:20:51.638 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:51.638 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:51.638 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:51.638 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:51.638 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:51.638 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:51.638 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.638 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.638 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.638 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:51.638 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:20:51.638 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:52.203 00:20:52.203 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:52.203 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:52.203 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:52.461 09:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.461 09:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:52.461 09:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.461 09:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.461 09:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.461 09:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:52.461 { 00:20:52.461 "cntlid": 135, 00:20:52.461 "qid": 0, 00:20:52.461 "state": "enabled", 00:20:52.461 "thread": "nvmf_tgt_poll_group_000", 00:20:52.461 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:52.461 "listen_address": { 00:20:52.461 "trtype": "TCP", 00:20:52.461 "adrfam": "IPv4", 00:20:52.461 "traddr": "10.0.0.2", 00:20:52.461 "trsvcid": "4420" 00:20:52.461 }, 00:20:52.461 "peer_address": { 00:20:52.461 "trtype": "TCP", 00:20:52.461 "adrfam": "IPv4", 00:20:52.461 "traddr": "10.0.0.1", 00:20:52.461 "trsvcid": "60312" 00:20:52.461 }, 00:20:52.461 "auth": { 00:20:52.461 "state": "completed", 00:20:52.461 "digest": "sha512", 00:20:52.461 "dhgroup": "ffdhe6144" 00:20:52.461 } 00:20:52.461 } 00:20:52.461 ]' 00:20:52.461 09:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:52.461 09:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:52.461 09:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:52.461 09:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:52.461 09:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:52.461 09:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:52.461 09:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:52.461 09:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:52.720 09:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:OWY3YzFmODRkOGI4YjFlZDA2ZTY2ZWI3Y2M1ZmY0YmE2MTA4YjA0ZTYyZjRiOWQ5ZDhjMjE1NmZjZDczYjQxNQ5mL7E=: 00:20:52.720 09:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OWY3YzFmODRkOGI4YjFlZDA2ZTY2ZWI3Y2M1ZmY0YmE2MTA4YjA0ZTYyZjRiOWQ5ZDhjMjE1NmZjZDczYjQxNQ5mL7E=: 00:20:53.653 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:53.653 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:53.653 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:53.653 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.653 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.653 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.653 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:53.653 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:53.653 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:53.653 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:53.911 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:20:53.911 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:53.911 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:53.911 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:53.911 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:53.911 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:53.911 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:53.911 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.911 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.911 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.911 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:53.911 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:53.911 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:54.844 00:20:54.844 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:54.844 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:54.844 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:55.102 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.102 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:55.102 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.102 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.102 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.102 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:55.102 { 00:20:55.102 "cntlid": 137, 00:20:55.102 "qid": 0, 00:20:55.102 "state": "enabled", 00:20:55.102 "thread": "nvmf_tgt_poll_group_000", 00:20:55.102 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:55.102 "listen_address": { 00:20:55.102 "trtype": "TCP", 00:20:55.102 "adrfam": "IPv4", 00:20:55.102 "traddr": "10.0.0.2", 00:20:55.102 "trsvcid": "4420" 00:20:55.102 }, 00:20:55.102 "peer_address": { 00:20:55.102 "trtype": "TCP", 00:20:55.102 "adrfam": "IPv4", 00:20:55.102 "traddr": "10.0.0.1", 00:20:55.102 "trsvcid": "60338" 00:20:55.102 }, 00:20:55.102 "auth": { 00:20:55.102 "state": "completed", 00:20:55.102 "digest": "sha512", 00:20:55.102 "dhgroup": "ffdhe8192" 00:20:55.102 } 00:20:55.102 } 00:20:55.102 ]' 00:20:55.102 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:55.360 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:55.360 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:55.360 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:55.360 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:55.360 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:55.360 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:55.360 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:55.619 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDNhZDkxMGU1Mzk5N2JmNjFhMWRkNzA4ZjAzMGE4ZmExZjQ4YzgxMmQwZTdlYjgylIkWIw==: --dhchap-ctrl-secret DHHC-1:03:NmQwNmVhM2YyYjZkNWQ3NmQzYTI2OTY4YzFkZTE1OWU0ZDhkNTZlZmQxODYxZGQxODEyMjdlOTNmNGNiM2YwNX5NDuE=: 00:20:55.619 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NDNhZDkxMGU1Mzk5N2JmNjFhMWRkNzA4ZjAzMGE4ZmExZjQ4YzgxMmQwZTdlYjgylIkWIw==: --dhchap-ctrl-secret DHHC-1:03:NmQwNmVhM2YyYjZkNWQ3NmQzYTI2OTY4YzFkZTE1OWU0ZDhkNTZlZmQxODYxZGQxODEyMjdlOTNmNGNiM2YwNX5NDuE=: 00:20:56.554 09:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:56.554 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:56.554 09:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:56.554 09:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.554 09:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.554 09:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.554 09:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:56.554 09:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:56.554 09:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:56.813 09:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:20:56.813 09:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:56.813 09:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:56.813 09:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:56.813 09:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:56.813 09:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:56.813 09:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:56.813 09:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.813 09:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.813 09:21:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.813 09:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:56.813 09:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:56.813 09:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:57.748 00:20:57.748 09:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:57.748 09:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:57.748 09:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:58.006 09:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.006 09:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:58.006 09:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.006 09:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.006 09:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.006 09:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:58.006 { 00:20:58.006 "cntlid": 139, 00:20:58.006 "qid": 0, 00:20:58.006 "state": "enabled", 00:20:58.006 "thread": "nvmf_tgt_poll_group_000", 00:20:58.006 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:58.006 "listen_address": { 00:20:58.006 "trtype": "TCP", 00:20:58.006 "adrfam": "IPv4", 00:20:58.006 "traddr": "10.0.0.2", 00:20:58.006 "trsvcid": "4420" 00:20:58.006 }, 00:20:58.006 "peer_address": { 00:20:58.006 "trtype": "TCP", 00:20:58.006 "adrfam": "IPv4", 00:20:58.006 "traddr": "10.0.0.1", 00:20:58.006 "trsvcid": "60364" 00:20:58.006 }, 00:20:58.006 "auth": { 00:20:58.006 "state": "completed", 00:20:58.006 "digest": "sha512", 00:20:58.006 "dhgroup": "ffdhe8192" 00:20:58.006 } 00:20:58.006 } 00:20:58.006 ]' 00:20:58.006 09:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:58.006 09:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:58.006 09:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:58.006 09:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:58.006 09:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:58.264 09:21:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:58.264 09:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:58.264 09:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:58.522 09:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTQ2NDI4ODQwZTBhYjFiODhkZDYyNTM2NzJlMTBkZDid1XeT: --dhchap-ctrl-secret DHHC-1:02:ZmNhNzQ3MzYyZGVlMjAzOWMwZTFhOGJlN2IyNzI2ODEwNDZjMTU5YzdiYTIwYTljuRKexg==: 00:20:58.522 09:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:OTQ2NDI4ODQwZTBhYjFiODhkZDYyNTM2NzJlMTBkZDid1XeT: --dhchap-ctrl-secret DHHC-1:02:ZmNhNzQ3MzYyZGVlMjAzOWMwZTFhOGJlN2IyNzI2ODEwNDZjMTU5YzdiYTIwYTljuRKexg==: 00:20:59.456 09:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:59.456 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:59.456 09:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:59.456 09:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.456 09:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.456 09:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.456 09:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:59.456 09:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:59.456 09:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:59.715 09:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:20:59.715 09:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:59.715 09:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:59.715 09:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:59.715 09:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:59.715 09:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:59.715 09:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:59.715 09:21:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.715 09:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.715 09:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.715 09:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:59.715 09:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:59.715 09:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:00.695 00:21:00.695 09:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:00.695 09:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:00.695 09:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:00.984 09:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.984 09:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:00.984 09:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.984 09:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.984 09:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.984 09:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:00.984 { 00:21:00.984 "cntlid": 141, 00:21:00.984 "qid": 0, 00:21:00.984 "state": "enabled", 00:21:00.984 "thread": "nvmf_tgt_poll_group_000", 00:21:00.984 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:00.984 "listen_address": { 00:21:00.984 "trtype": "TCP", 00:21:00.984 "adrfam": "IPv4", 00:21:00.984 "traddr": "10.0.0.2", 00:21:00.984 "trsvcid": "4420" 00:21:00.984 }, 00:21:00.984 "peer_address": { 00:21:00.984 "trtype": "TCP", 00:21:00.984 "adrfam": "IPv4", 00:21:00.984 "traddr": "10.0.0.1", 00:21:00.984 "trsvcid": "60398" 00:21:00.984 }, 00:21:00.984 "auth": { 00:21:00.984 "state": "completed", 00:21:00.984 "digest": "sha512", 00:21:00.984 "dhgroup": "ffdhe8192" 00:21:00.984 } 00:21:00.984 } 00:21:00.984 ]' 00:21:00.984 09:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:00.984 09:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:00.984 09:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:00.984 09:21:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:00.984 09:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:00.984 09:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:00.984 09:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:00.984 09:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:01.242 09:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWU0NDMwZDJiOTU1OTQ5OTEwNTNjNDg1ZTc1YWY0NzA0MTA2OThkYzBlNTUzZGQ5V5MaRg==: --dhchap-ctrl-secret DHHC-1:01:MTAwMmE1NzE3MjQyYzAxNjZlZTg0MjJlYzI0ZjUyODYKYUfB: 00:21:01.242 09:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YWU0NDMwZDJiOTU1OTQ5OTEwNTNjNDg1ZTc1YWY0NzA0MTA2OThkYzBlNTUzZGQ5V5MaRg==: --dhchap-ctrl-secret DHHC-1:01:MTAwMmE1NzE3MjQyYzAxNjZlZTg0MjJlYzI0ZjUyODYKYUfB: 00:21:02.176 09:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:02.176 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:02.176 09:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:02.176 09:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.176 09:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.176 09:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.176 09:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:02.176 09:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:02.176 09:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:02.435 09:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:21:02.435 09:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:02.435 09:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:02.435 09:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:02.435 09:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:02.435 09:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:02.435 09:21:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:02.435 09:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.435 09:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.435 09:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.435 09:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:02.435 09:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:02.435 09:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:03.369 00:21:03.369 09:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:03.369 09:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:03.369 09:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:03.935 09:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.935 09:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:03.935 09:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.935 09:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.935 09:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.935 09:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:03.935 { 00:21:03.935 "cntlid": 143, 00:21:03.935 "qid": 0, 00:21:03.935 "state": "enabled", 00:21:03.935 "thread": "nvmf_tgt_poll_group_000", 00:21:03.935 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:03.935 "listen_address": { 00:21:03.935 "trtype": "TCP", 00:21:03.935 "adrfam": "IPv4", 00:21:03.935 "traddr": "10.0.0.2", 00:21:03.935 "trsvcid": "4420" 00:21:03.935 }, 00:21:03.935 "peer_address": { 00:21:03.935 "trtype": "TCP", 00:21:03.935 "adrfam": "IPv4", 00:21:03.935 "traddr": "10.0.0.1", 00:21:03.935 "trsvcid": "60218" 00:21:03.935 }, 00:21:03.935 "auth": { 00:21:03.935 "state": "completed", 00:21:03.935 "digest": "sha512", 00:21:03.935 "dhgroup": "ffdhe8192" 00:21:03.935 } 00:21:03.935 } 00:21:03.935 ]' 00:21:03.935 09:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:03.935 09:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:03.935 
09:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:03.935 09:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:03.935 09:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:03.935 09:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:03.935 09:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:03.935 09:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:04.193 09:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWY3YzFmODRkOGI4YjFlZDA2ZTY2ZWI3Y2M1ZmY0YmE2MTA4YjA0ZTYyZjRiOWQ5ZDhjMjE1NmZjZDczYjQxNQ5mL7E=: 00:21:04.193 09:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OWY3YzFmODRkOGI4YjFlZDA2ZTY2ZWI3Y2M1ZmY0YmE2MTA4YjA0ZTYyZjRiOWQ5ZDhjMjE1NmZjZDczYjQxNQ5mL7E=: 00:21:05.127 09:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:05.127 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:05.127 09:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:05.127 09:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.127 09:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.127 09:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.127 09:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:21:05.127 09:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:21:05.127 09:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:21:05.127 09:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:05.127 09:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:05.127 09:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:05.385 09:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:21:05.385 09:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:05.385 09:21:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:05.385 09:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:05.385 09:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:05.385 09:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:05.385 09:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:05.385 09:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.385 09:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.385 09:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.385 09:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:05.385 09:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:05.385 09:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:06.318 00:21:06.318 09:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:06.318 09:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:06.318 09:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:06.576 09:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.576 09:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:06.576 09:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.576 09:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.576 09:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.576 09:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:06.576 { 00:21:06.576 "cntlid": 145, 00:21:06.576 "qid": 0, 00:21:06.576 "state": "enabled", 00:21:06.576 "thread": "nvmf_tgt_poll_group_000", 00:21:06.576 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:06.576 "listen_address": { 00:21:06.576 "trtype": "TCP", 00:21:06.576 "adrfam": "IPv4", 00:21:06.576 "traddr": "10.0.0.2", 00:21:06.576 "trsvcid": "4420" 00:21:06.576 }, 00:21:06.576 "peer_address": { 00:21:06.576 
"trtype": "TCP", 00:21:06.576 "adrfam": "IPv4", 00:21:06.576 "traddr": "10.0.0.1", 00:21:06.576 "trsvcid": "60230" 00:21:06.576 }, 00:21:06.576 "auth": { 00:21:06.576 "state": "completed", 00:21:06.576 "digest": "sha512", 00:21:06.576 "dhgroup": "ffdhe8192" 00:21:06.576 } 00:21:06.576 } 00:21:06.576 ]' 00:21:06.577 09:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:06.577 09:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:06.577 09:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:06.577 09:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:06.577 09:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:06.577 09:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:06.577 09:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:06.577 09:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:06.835 09:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDNhZDkxMGU1Mzk5N2JmNjFhMWRkNzA4ZjAzMGE4ZmExZjQ4YzgxMmQwZTdlYjgylIkWIw==: --dhchap-ctrl-secret DHHC-1:03:NmQwNmVhM2YyYjZkNWQ3NmQzYTI2OTY4YzFkZTE1OWU0ZDhkNTZlZmQxODYxZGQxODEyMjdlOTNmNGNiM2YwNX5NDuE=: 00:21:06.835 09:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NDNhZDkxMGU1Mzk5N2JmNjFhMWRkNzA4ZjAzMGE4ZmExZjQ4YzgxMmQwZTdlYjgylIkWIw==: --dhchap-ctrl-secret DHHC-1:03:NmQwNmVhM2YyYjZkNWQ3NmQzYTI2OTY4YzFkZTE1OWU0ZDhkNTZlZmQxODYxZGQxODEyMjdlOTNmNGNiM2YwNX5NDuE=: 00:21:07.769 09:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:07.769 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:07.769 09:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:07.769 09:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.769 09:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.769 09:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.769 09:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:21:07.769 09:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.769 09:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.769 09:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.769 09:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:21:07.769 09:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:07.769 09:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:21:07.769 09:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:07.769 09:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:07.769 09:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:07.769 09:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:07.769 09:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:21:07.769 09:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:21:07.769 09:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:21:08.703 request: 00:21:08.703 { 00:21:08.703 "name": "nvme0", 00:21:08.703 "trtype": "tcp", 00:21:08.703 "traddr": "10.0.0.2", 00:21:08.703 "adrfam": "ipv4", 00:21:08.703 "trsvcid": "4420", 00:21:08.703 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:08.703 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:08.703 "prchk_reftag": false, 00:21:08.703 "prchk_guard": false, 00:21:08.703 "hdgst": false, 00:21:08.703 "ddgst": false, 00:21:08.703 "dhchap_key": "key2", 00:21:08.703 "allow_unrecognized_csi": false, 00:21:08.703 "method": "bdev_nvme_attach_controller", 00:21:08.703 "req_id": 1 00:21:08.703 } 00:21:08.703 Got JSON-RPC error response 00:21:08.703 response: 00:21:08.703 { 00:21:08.703 "code": -5, 00:21:08.703 "message": "Input/output error" 00:21:08.703 } 00:21:08.703 09:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:08.703 09:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:08.703 09:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:08.703 09:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:08.703 09:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:08.703 09:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.703 09:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.703 09:21:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.703 09:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:08.703 09:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.703 09:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.703 09:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.703 09:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:08.703 09:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:08.703 09:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:08.703 09:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:08.703 09:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:08.703 09:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:08.703 09:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:08.703 09:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:08.703 09:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:08.703 09:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:09.636 request: 00:21:09.636 { 00:21:09.636 "name": "nvme0", 00:21:09.636 "trtype": "tcp", 00:21:09.636 "traddr": "10.0.0.2", 00:21:09.636 "adrfam": "ipv4", 00:21:09.636 "trsvcid": "4420", 00:21:09.636 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:09.636 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:09.636 "prchk_reftag": false, 00:21:09.636 "prchk_guard": false, 00:21:09.636 "hdgst": false, 00:21:09.636 "ddgst": false, 00:21:09.636 "dhchap_key": "key1", 00:21:09.636 "dhchap_ctrlr_key": "ckey2", 00:21:09.636 "allow_unrecognized_csi": false, 00:21:09.636 "method": "bdev_nvme_attach_controller", 00:21:09.636 "req_id": 1 00:21:09.636 } 00:21:09.636 Got JSON-RPC error response 00:21:09.636 response: 00:21:09.636 { 00:21:09.636 "code": -5, 00:21:09.636 "message": "Input/output error" 00:21:09.636 } 00:21:09.636 09:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:09.636 09:21:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:09.636 09:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:09.636 09:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:09.636 09:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:09.636 09:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.636 09:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.636 09:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.636 09:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:21:09.636 09:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.636 09:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.636 09:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.636 09:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:09.636 09:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:09.636 09:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:09.636 09:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:09.636 09:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:09.636 09:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:09.636 09:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:09.636 09:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:09.636 09:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:09.636 09:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:10.570 request: 00:21:10.570 { 00:21:10.570 "name": "nvme0", 00:21:10.570 "trtype": "tcp", 00:21:10.570 "traddr": "10.0.0.2", 00:21:10.570 "adrfam": "ipv4", 00:21:10.570 "trsvcid": "4420", 00:21:10.570 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:10.570 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:10.570 "prchk_reftag": false, 00:21:10.570 "prchk_guard": false, 00:21:10.570 "hdgst": false, 00:21:10.570 "ddgst": false, 00:21:10.570 "dhchap_key": "key1", 00:21:10.570 "dhchap_ctrlr_key": "ckey1", 00:21:10.570 "allow_unrecognized_csi": false, 00:21:10.570 "method": "bdev_nvme_attach_controller", 00:21:10.570 "req_id": 1 00:21:10.570 } 00:21:10.570 Got JSON-RPC error response 00:21:10.570 response: 00:21:10.570 { 00:21:10.570 "code": -5, 00:21:10.570 "message": "Input/output error" 00:21:10.570 } 00:21:10.570 09:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:10.570 09:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:10.570 09:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:10.570 09:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:10.570 09:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:10.570 09:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.570 09:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.570 09:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.570 09:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 2962308 00:21:10.570 09:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2962308 ']' 00:21:10.570 09:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2962308 00:21:10.570 09:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:21:10.570 09:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:10.570 09:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2962308 00:21:10.570 09:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:10.570 09:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:10.570 09:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2962308' 00:21:10.570 killing process with pid 2962308 00:21:10.570 09:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2962308 00:21:10.570 09:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2962308 00:21:11.944 09:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:21:11.944 09:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:11.944 09:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:11.944 09:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:11.944 09:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2986200 00:21:11.944 09:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:21:11.944 09:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2986200 00:21:11.944 09:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2986200 ']' 00:21:11.944 09:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:11.944 09:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:11.944 09:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:11.944 09:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:11.944 09:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.879 09:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:12.879 09:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:21:12.879 09:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:12.879 09:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:12.879 09:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.879 09:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:12.879 09:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:21:12.879 09:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 2986200 00:21:12.879 09:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2986200 ']' 00:21:12.879 09:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:12.879 09:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:12.879 09:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:12.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
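At this point the suite has torn down the previous target (killprocess 2962308) and is bringing up a fresh nvmf_tgt with DHCHAP auth logging enabled (-L nvmf_auth) and --wait-for-rpc, then blocking in waitforlisten until the RPC socket answers. A rough hand-run equivalent of that startup wait, using only the binary, namespace and socket paths already shown in this log, might look like the sketch below; the polling loop is an illustration of what waitforlisten does rather than a copy of it, and rpc_get_methods is used here only as a cheap RPC to probe the socket with.

    # start the target in the test namespace, paused until RPC-driven init
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &

    # wait until the app is listening on the default RPC socket
    until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 1
    done
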
00:21:12.879 09:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:12.879 09:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.137 09:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:13.137 09:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:21:13.137 09:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:21:13.137 09:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.137 09:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.704 null0 00:21:13.704 09:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.704 09:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:13.704 09:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.QNM 00:21:13.704 09:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.704 09:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.704 09:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.704 09:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.3ps ]] 00:21:13.704 09:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.3ps 00:21:13.704 09:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.704 09:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.704 09:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.704 09:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:13.704 09:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.UkQ 00:21:13.704 09:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.704 09:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.704 09:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.704 09:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.ykB ]] 00:21:13.704 09:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ykB 00:21:13.704 09:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.704 09:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.704 09:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.704 09:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:13.704 09:21:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Xj6 00:21:13.704 09:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.704 09:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.704 09:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.704 09:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.Eh3 ]] 00:21:13.704 09:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Eh3 00:21:13.704 09:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.704 09:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.704 09:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.704 09:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:13.704 09:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Fig 00:21:13.704 09:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.704 09:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.704 09:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.704 09:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:21:13.704 09:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:21:13.704 09:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:13.704 09:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:13.704 09:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:13.704 09:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:13.704 09:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:13.704 09:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:13.704 09:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.704 09:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.704 09:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.704 09:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:13.704 09:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
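The block above is the positive path of connect_authenticate for sha512/ffdhe8192: each generated key file is registered in the target's keyring, the host NQN is allowed on the subsystem with --dhchap-key key3, and the host-side bdev then attaches using the same key name. Stripped of the test plumbing, the sequence reduces to roughly the following sketch, assuming the key has also been registered with the host-side app's own keyring (a step this excerpt does not show); paths, NQNs and key names are the ones this particular run generated, and $hostnqn / $rpc are just shorthand for values visible in the trace.

    hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # register the DH-HMAC-CHAP key file on the target (default socket /var/tmp/spdk.sock)
    $rpc keyring_file_add_key key3 /tmp/spdk.key-sha512.Fig
    # allow the host on the subsystem, bound to that key (no controller key in this case)
    $rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key key3
    # attach from the host-side app (separate RPC socket) using the same key name
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
        -b nvme0 --dhchap-key key3
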
00:21:13.704 09:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:15.078 nvme0n1 00:21:15.078 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:15.078 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:15.078 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:15.644 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.644 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:15.644 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.644 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.644 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.644 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:15.644 { 00:21:15.644 "cntlid": 1, 00:21:15.644 "qid": 0, 00:21:15.644 "state": "enabled", 00:21:15.644 "thread": "nvmf_tgt_poll_group_000", 00:21:15.644 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:15.644 "listen_address": { 00:21:15.644 "trtype": "TCP", 00:21:15.644 "adrfam": "IPv4", 00:21:15.644 "traddr": "10.0.0.2", 00:21:15.644 "trsvcid": "4420" 00:21:15.644 }, 00:21:15.644 "peer_address": { 00:21:15.644 "trtype": "TCP", 00:21:15.644 "adrfam": "IPv4", 00:21:15.644 "traddr": "10.0.0.1", 00:21:15.644 "trsvcid": "52146" 00:21:15.644 }, 00:21:15.644 "auth": { 00:21:15.644 "state": "completed", 00:21:15.644 "digest": "sha512", 00:21:15.644 "dhgroup": "ffdhe8192" 00:21:15.644 } 00:21:15.644 } 00:21:15.644 ]' 00:21:15.644 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:15.644 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:15.644 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:15.644 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:15.644 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:15.644 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:15.644 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:15.644 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:15.902 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:OWY3YzFmODRkOGI4YjFlZDA2ZTY2ZWI3Y2M1ZmY0YmE2MTA4YjA0ZTYyZjRiOWQ5ZDhjMjE1NmZjZDczYjQxNQ5mL7E=: 00:21:15.903 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OWY3YzFmODRkOGI4YjFlZDA2ZTY2ZWI3Y2M1ZmY0YmE2MTA4YjA0ZTYyZjRiOWQ5ZDhjMjE1NmZjZDczYjQxNQ5mL7E=: 00:21:16.835 09:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:16.835 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:16.835 09:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:16.835 09:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.835 09:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.093 09:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.093 09:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:17.093 09:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.093 09:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.093 09:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.093 09:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:21:17.093 09:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:21:17.351 09:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:21:17.351 09:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:17.351 09:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:21:17.351 09:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:17.351 09:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:17.351 09:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:17.351 09:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:17.351 09:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:17.351 09:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:17.351 09:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:17.609 request: 00:21:17.609 { 00:21:17.609 "name": "nvme0", 00:21:17.609 "trtype": "tcp", 00:21:17.609 "traddr": "10.0.0.2", 00:21:17.609 "adrfam": "ipv4", 00:21:17.609 "trsvcid": "4420", 00:21:17.609 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:17.609 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:17.609 "prchk_reftag": false, 00:21:17.609 "prchk_guard": false, 00:21:17.609 "hdgst": false, 00:21:17.609 "ddgst": false, 00:21:17.609 "dhchap_key": "key3", 00:21:17.609 "allow_unrecognized_csi": false, 00:21:17.609 "method": "bdev_nvme_attach_controller", 00:21:17.609 "req_id": 1 00:21:17.609 } 00:21:17.609 Got JSON-RPC error response 00:21:17.609 response: 00:21:17.609 { 00:21:17.609 "code": -5, 00:21:17.609 "message": "Input/output error" 00:21:17.609 } 00:21:17.609 09:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:17.609 09:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:17.609 09:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:17.609 09:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:17.609 09:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:21:17.609 09:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:21:17.609 09:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:17.609 09:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:17.868 09:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:21:17.868 09:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:17.868 09:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:21:17.868 09:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:17.868 09:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:17.868 09:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:17.868 09:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:17.868 09:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:17.868 09:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:17.868 09:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:18.127 request: 00:21:18.127 { 00:21:18.127 "name": "nvme0", 00:21:18.127 "trtype": "tcp", 00:21:18.127 "traddr": "10.0.0.2", 00:21:18.127 "adrfam": "ipv4", 00:21:18.127 "trsvcid": "4420", 00:21:18.127 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:18.127 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:18.127 "prchk_reftag": false, 00:21:18.127 "prchk_guard": false, 00:21:18.127 "hdgst": false, 00:21:18.127 "ddgst": false, 00:21:18.127 "dhchap_key": "key3", 00:21:18.127 "allow_unrecognized_csi": false, 00:21:18.127 "method": "bdev_nvme_attach_controller", 00:21:18.127 "req_id": 1 00:21:18.127 } 00:21:18.127 Got JSON-RPC error response 00:21:18.127 response: 00:21:18.127 { 00:21:18.127 "code": -5, 00:21:18.127 "message": "Input/output error" 00:21:18.127 } 00:21:18.127 09:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:18.127 09:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:18.127 09:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:18.127 09:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:18.127 09:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:21:18.127 09:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:21:18.127 09:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:21:18.127 09:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:18.127 09:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:18.127 09:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:18.386 09:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:18.386 09:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.386 09:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.386 09:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.386 09:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:18.386 09:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.386 09:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.386 09:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.386 09:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:18.386 09:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:18.386 09:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:18.386 09:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:18.386 09:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:18.386 09:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:18.386 09:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:18.386 09:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:18.386 09:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:18.386 09:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:18.952 request: 00:21:18.952 { 00:21:18.952 "name": "nvme0", 00:21:18.952 "trtype": "tcp", 00:21:18.952 "traddr": "10.0.0.2", 00:21:18.952 "adrfam": "ipv4", 00:21:18.952 "trsvcid": "4420", 00:21:18.952 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:18.952 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:18.952 "prchk_reftag": false, 00:21:18.952 "prchk_guard": false, 00:21:18.952 "hdgst": false, 00:21:18.952 "ddgst": false, 00:21:18.952 "dhchap_key": "key0", 00:21:18.952 "dhchap_ctrlr_key": "key1", 00:21:18.952 "allow_unrecognized_csi": false, 00:21:18.952 "method": "bdev_nvme_attach_controller", 00:21:18.952 "req_id": 1 00:21:18.952 } 00:21:18.952 Got JSON-RPC error response 00:21:18.952 response: 00:21:18.952 { 00:21:18.952 "code": -5, 00:21:18.952 "message": "Input/output error" 00:21:18.952 } 00:21:18.952 09:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:18.952 09:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:18.952 09:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:18.952 09:21:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:18.952 09:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:21:18.952 09:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:21:18.952 09:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:21:19.210 nvme0n1 00:21:19.210 09:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:21:19.210 09:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:21:19.210 09:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:19.467 09:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:19.467 09:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:19.467 09:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:20.033 09:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:21:20.033 09:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.033 09:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.033 09:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.033 09:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:21:20.033 09:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:20.033 09:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:21.405 nvme0n1 00:21:21.405 09:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:21:21.405 09:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:21:21.405 09:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:21:21.663 09:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:21.663 09:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:21.663 09:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.663 09:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.663 09:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.663 09:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:21:21.663 09:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:21.663 09:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:21:21.921 09:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:21.921 09:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:YWU0NDMwZDJiOTU1OTQ5OTEwNTNjNDg1ZTc1YWY0NzA0MTA2OThkYzBlNTUzZGQ5V5MaRg==: --dhchap-ctrl-secret DHHC-1:03:OWY3YzFmODRkOGI4YjFlZDA2ZTY2ZWI3Y2M1ZmY0YmE2MTA4YjA0ZTYyZjRiOWQ5ZDhjMjE1NmZjZDczYjQxNQ5mL7E=: 00:21:21.921 09:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YWU0NDMwZDJiOTU1OTQ5OTEwNTNjNDg1ZTc1YWY0NzA0MTA2OThkYzBlNTUzZGQ5V5MaRg==: --dhchap-ctrl-secret DHHC-1:03:OWY3YzFmODRkOGI4YjFlZDA2ZTY2ZWI3Y2M1ZmY0YmE2MTA4YjA0ZTYyZjRiOWQ5ZDhjMjE1NmZjZDczYjQxNQ5mL7E=: 00:21:22.854 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:21:22.854 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:21:22.854 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:21:22.854 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:21:22.854 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:21:22.854 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:21:22.854 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:21:22.854 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:22.854 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:23.112 09:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT 
bdev_connect -b nvme0 --dhchap-key key1 00:21:23.112 09:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:23.112 09:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:21:23.112 09:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:23.112 09:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:23.112 09:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:23.112 09:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:23.112 09:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:21:23.112 09:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:23.112 09:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:24.046 request: 00:21:24.046 { 00:21:24.046 "name": "nvme0", 00:21:24.046 "trtype": "tcp", 00:21:24.046 "traddr": "10.0.0.2", 00:21:24.046 "adrfam": "ipv4", 00:21:24.046 "trsvcid": "4420", 00:21:24.046 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:24.046 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:24.046 "prchk_reftag": false, 00:21:24.046 "prchk_guard": false, 00:21:24.046 "hdgst": false, 00:21:24.046 "ddgst": false, 00:21:24.046 "dhchap_key": "key1", 00:21:24.046 "allow_unrecognized_csi": false, 00:21:24.046 "method": "bdev_nvme_attach_controller", 00:21:24.046 "req_id": 1 00:21:24.046 } 00:21:24.046 Got JSON-RPC error response 00:21:24.046 response: 00:21:24.046 { 00:21:24.046 "code": -5, 00:21:24.046 "message": "Input/output error" 00:21:24.046 } 00:21:24.046 09:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:24.046 09:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:24.046 09:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:24.046 09:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:24.046 09:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:24.046 09:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:24.046 09:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:25.418 nvme0n1 00:21:25.418 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:21:25.418 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:21:25.418 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:25.984 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:25.984 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:25.984 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:26.242 09:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:26.242 09:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.242 09:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.242 09:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.242 09:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:21:26.242 09:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:21:26.242 09:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:21:26.500 nvme0n1 00:21:26.500 09:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:21:26.500 09:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:21:26.500 09:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:26.758 09:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.758 09:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:26.758 09:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:27.016 09:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:27.016 09:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.016 09:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.016 09:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.016 09:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:OTQ2NDI4ODQwZTBhYjFiODhkZDYyNTM2NzJlMTBkZDid1XeT: '' 2s 00:21:27.016 09:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:21:27.016 09:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:21:27.016 09:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:OTQ2NDI4ODQwZTBhYjFiODhkZDYyNTM2NzJlMTBkZDid1XeT: 00:21:27.016 09:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:21:27.016 09:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:21:27.016 09:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:21:27.016 09:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:OTQ2NDI4ODQwZTBhYjFiODhkZDYyNTM2NzJlMTBkZDid1XeT: ]] 00:21:27.016 09:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:OTQ2NDI4ODQwZTBhYjFiODhkZDYyNTM2NzJlMTBkZDid1XeT: 00:21:27.016 09:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:21:27.016 09:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:21:27.016 09:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:21:29.544 09:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:21:29.545 09:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:21:29.545 09:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:29.545 09:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:21:29.545 09:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:29.545 09:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:21:29.545 09:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:21:29.545 09:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key key2 00:21:29.545 09:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.545 09:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.545 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.545 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # 
nvme_set_keys nvme0 '' DHHC-1:02:YWU0NDMwZDJiOTU1OTQ5OTEwNTNjNDg1ZTc1YWY0NzA0MTA2OThkYzBlNTUzZGQ5V5MaRg==: 2s 00:21:29.545 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:21:29.545 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:21:29.545 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:21:29.545 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:YWU0NDMwZDJiOTU1OTQ5OTEwNTNjNDg1ZTc1YWY0NzA0MTA2OThkYzBlNTUzZGQ5V5MaRg==: 00:21:29.545 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:21:29.545 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:21:29.545 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:21:29.545 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:YWU0NDMwZDJiOTU1OTQ5OTEwNTNjNDg1ZTc1YWY0NzA0MTA2OThkYzBlNTUzZGQ5V5MaRg==: ]] 00:21:29.545 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:YWU0NDMwZDJiOTU1OTQ5OTEwNTNjNDg1ZTc1YWY0NzA0MTA2OThkYzBlNTUzZGQ5V5MaRg==: 00:21:29.545 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:21:29.545 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:21:31.438 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:21:31.438 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:21:31.438 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:31.438 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:21:31.438 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:31.438 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:21:31.438 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:21:31.438 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:31.438 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:31.438 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:31.438 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.438 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.438 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.438 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:31.438 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 
4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:31.438 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:32.806 nvme0n1 00:21:32.806 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:32.806 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.806 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.806 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.806 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:32.806 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:33.769 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:21:33.769 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:21:33.769 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:34.061 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:34.061 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:34.061 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.061 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.061 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.061 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:21:34.061 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:21:34.320 09:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:21:34.320 09:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:21:34.320 09:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:21:34.578 09:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:34.578 09:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:34.578 09:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.578 09:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.578 09:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.578 09:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:34.578 09:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:34.578 09:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:34.578 09:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:21:34.578 09:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:34.578 09:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:21:34.578 09:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:34.578 09:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:34.578 09:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:35.512 request: 00:21:35.512 { 00:21:35.512 "name": "nvme0", 00:21:35.512 "dhchap_key": "key1", 00:21:35.512 "dhchap_ctrlr_key": "key3", 00:21:35.512 "method": "bdev_nvme_set_keys", 00:21:35.512 "req_id": 1 00:21:35.512 } 00:21:35.512 Got JSON-RPC error response 00:21:35.512 response: 00:21:35.512 { 00:21:35.512 "code": -13, 00:21:35.512 "message": "Permission denied" 00:21:35.512 } 00:21:35.512 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:35.512 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:35.512 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:35.512 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:35.512 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:21:35.512 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:21:35.512 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:35.770 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:21:35.770 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:21:36.704 09:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:21:36.704 09:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:21:36.704 09:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:36.962 09:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:21:36.962 09:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:36.962 09:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.962 09:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.962 09:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.962 09:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:36.962 09:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:36.962 09:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:38.335 nvme0n1 00:21:38.593 09:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:38.593 09:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.593 09:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.593 09:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.593 09:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:38.593 09:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:38.593 09:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:38.593 09:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 
00:21:38.593 09:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:38.593 09:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:21:38.593 09:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:38.593 09:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:38.593 09:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:39.527 request: 00:21:39.527 { 00:21:39.527 "name": "nvme0", 00:21:39.527 "dhchap_key": "key2", 00:21:39.527 "dhchap_ctrlr_key": "key0", 00:21:39.527 "method": "bdev_nvme_set_keys", 00:21:39.527 "req_id": 1 00:21:39.527 } 00:21:39.527 Got JSON-RPC error response 00:21:39.527 response: 00:21:39.527 { 00:21:39.527 "code": -13, 00:21:39.527 "message": "Permission denied" 00:21:39.527 } 00:21:39.527 09:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:39.527 09:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:39.527 09:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:39.527 09:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:39.527 09:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:21:39.527 09:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:39.528 09:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:21:39.786 09:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:21:39.786 09:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:21:40.720 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:21:40.720 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:21:40.720 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:40.978 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:21:40.978 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:21:40.978 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:21:40.978 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2962711 00:21:40.978 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2962711 ']' 00:21:40.978 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2962711 00:21:40.978 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:21:40.978 
09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:40.978 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2962711 00:21:40.978 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:40.978 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:40.978 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2962711' 00:21:40.978 killing process with pid 2962711 00:21:40.978 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2962711 00:21:40.978 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2962711 00:21:43.506 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:21:43.506 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:43.506 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:21:43.506 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:43.506 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:21:43.506 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:43.506 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:43.506 rmmod nvme_tcp 00:21:43.506 rmmod nvme_fabrics 00:21:43.506 rmmod nvme_keyring 00:21:43.506 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:43.506 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:21:43.506 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:21:43.506 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 2986200 ']' 00:21:43.506 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 2986200 00:21:43.506 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2986200 ']' 00:21:43.506 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2986200 00:21:43.506 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:21:43.506 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:43.506 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2986200 00:21:43.506 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:43.506 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:43.506 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2986200' 00:21:43.506 killing process with pid 2986200 00:21:43.506 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2986200 00:21:43.506 09:21:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2986200 00:21:44.441 09:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:44.441 09:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:44.441 09:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:44.441 09:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:21:44.441 09:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:21:44.441 09:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:44.441 09:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:21:44.441 09:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:44.441 09:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:44.441 09:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:44.441 09:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:44.441 09:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:46.344 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:46.345 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.QNM /tmp/spdk.key-sha256.UkQ /tmp/spdk.key-sha384.Xj6 /tmp/spdk.key-sha512.Fig /tmp/spdk.key-sha512.3ps /tmp/spdk.key-sha384.ykB /tmp/spdk.key-sha256.Eh3 '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:21:46.345 00:21:46.345 real 3m45.927s 00:21:46.345 user 8m44.310s 00:21:46.345 sys 0m27.986s 00:21:46.345 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:46.345 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.345 ************************************ 00:21:46.345 END TEST nvmf_auth_target 00:21:46.345 ************************************ 00:21:46.345 09:21:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:21:46.345 09:21:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:46.345 09:21:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:21:46.345 09:21:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:46.345 09:21:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:46.345 ************************************ 00:21:46.345 START TEST nvmf_bdevio_no_huge 00:21:46.345 ************************************ 00:21:46.345 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:46.604 * Looking for test storage... 
00:21:46.604 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:46.604 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:46.604 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:21:46.604 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:46.604 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:46.604 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:46.604 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:46.604 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:46.604 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:21:46.604 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:21:46.604 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:21:46.604 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:21:46.604 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:21:46.604 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:21:46.604 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:21:46.604 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:46.604 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:21:46.604 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:21:46.604 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:46.604 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:46.604 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:21:46.604 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:21:46.604 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:46.604 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:21:46.604 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:21:46.604 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:21:46.604 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:21:46.604 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:46.604 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:21:46.604 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:21:46.604 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:46.604 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:46.604 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:21:46.604 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:46.604 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:46.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:46.604 --rc genhtml_branch_coverage=1 00:21:46.604 --rc genhtml_function_coverage=1 00:21:46.604 --rc genhtml_legend=1 00:21:46.604 --rc geninfo_all_blocks=1 00:21:46.604 --rc geninfo_unexecuted_blocks=1 00:21:46.604 00:21:46.604 ' 00:21:46.604 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:46.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:46.604 --rc genhtml_branch_coverage=1 00:21:46.604 --rc genhtml_function_coverage=1 00:21:46.604 --rc genhtml_legend=1 00:21:46.604 --rc geninfo_all_blocks=1 00:21:46.604 --rc geninfo_unexecuted_blocks=1 00:21:46.604 00:21:46.604 ' 00:21:46.604 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:46.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:46.604 --rc genhtml_branch_coverage=1 00:21:46.604 --rc genhtml_function_coverage=1 00:21:46.604 --rc genhtml_legend=1 00:21:46.604 --rc geninfo_all_blocks=1 00:21:46.604 --rc geninfo_unexecuted_blocks=1 00:21:46.604 00:21:46.604 ' 00:21:46.604 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:46.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:46.604 --rc genhtml_branch_coverage=1 00:21:46.604 --rc genhtml_function_coverage=1 00:21:46.604 --rc genhtml_legend=1 00:21:46.604 --rc geninfo_all_blocks=1 00:21:46.604 --rc geninfo_unexecuted_blocks=1 00:21:46.604 00:21:46.604 ' 00:21:46.604 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:46.604 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:21:46.604 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:46.604 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:46.604 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:46.604 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:46.604 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:46.604 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:46.604 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:46.604 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:46.604 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:46.604 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:46.604 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:46.604 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:46.604 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:46.604 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:46.604 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:46.604 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:46.604 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:46.604 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:21:46.604 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:46.604 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:46.604 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:46.604 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.604 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.604 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.604 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:21:46.604 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.604 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:21:46.604 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:46.604 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:46.604 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:46.605 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:46.605 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:46.605 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:21:46.605 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:46.605 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:46.605 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:46.605 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:46.605 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:46.605 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:46.605 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:21:46.605 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:46.605 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:46.605 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:46.605 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:46.605 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:46.605 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:46.605 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:46.605 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:46.605 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:46.605 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:46.605 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:21:46.605 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:49.131 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:49.131 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:21:49.131 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:49.131 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:49.131 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:49.131 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:49.131 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:49.131 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:21:49.131 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:49.131 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:21:49.131 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:21:49.131 
09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:21:49.131 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:21:49.131 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:21:49.131 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:21:49.131 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:49.131 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:49.131 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:49.131 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:49.131 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:49.131 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:49.131 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:49.131 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:49.131 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:49.131 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:49.131 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:49.131 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:49.131 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:49.131 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:49.131 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:49.131 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:49.131 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:49.131 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:49.131 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:49.132 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:49.132 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:49.132 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:49.132 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:49.132 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:49.132 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:21:49.132 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:49.132 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:49.132 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:49.132 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:49.132 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:49.132 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:49.132 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:49.132 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:49.132 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:49.132 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:49.132 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:49.132 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:49.132 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:49.132 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:49.132 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:49.132 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:49.132 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:49.132 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:49.132 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:49.132 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:49.132 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:49.132 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:49.132 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:49.132 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:49.132 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:49.132 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:49.132 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:49.132 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:49.132 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:49.132 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:49.132 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:49.132 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:49.132 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:49.132 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:21:49.132 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:49.132 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:49.132 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:49.132 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:49.132 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:49.132 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:49.132 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:49.132 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:49.132 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:49.132 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:49.132 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:49.132 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:49.132 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:49.132 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:49.132 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:49.132 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:49.132 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:49.132 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:49.132 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:49.132 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:49.132 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:49.132 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:49.132 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:49.132 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:49.132 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:49.132 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:49.132 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:49.132 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:21:49.132 00:21:49.132 --- 10.0.0.2 ping statistics --- 00:21:49.132 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:49.132 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:21:49.132 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:49.132 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:49.132 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:21:49.132 00:21:49.132 --- 10.0.0.1 ping statistics --- 00:21:49.132 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:49.132 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:21:49.132 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:49.132 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:21:49.132 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:49.132 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:49.132 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:49.132 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:49.132 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:49.132 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:49.132 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:49.132 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:21:49.132 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:49.132 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:49.132 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:49.132 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=2992096 00:21:49.132 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:21:49.132 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 2992096 00:21:49.132 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 2992096 ']' 00:21:49.132 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:49.132 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- common/autotest_common.sh@840 -- # local max_retries=100 00:21:49.132 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:49.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:49.132 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:49.132 09:21:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:49.132 [2024-11-17 09:21:53.800876] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:21:49.132 [2024-11-17 09:21:53.801034] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:21:49.132 [2024-11-17 09:21:53.984283] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:49.132 [2024-11-17 09:21:54.141385] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:49.132 [2024-11-17 09:21:54.141471] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:49.132 [2024-11-17 09:21:54.141497] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:49.133 [2024-11-17 09:21:54.141521] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:49.133 [2024-11-17 09:21:54.141541] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
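To keep the next stretch of trace readable: the commands below reproduce, in condensed form, the sequence this stage drives (every command and argument is taken from the surrounding trace; $SPDK is shorthand introduced here for the workspace spdk checkout, and running rpc.py against the default /var/tmp/spdk.sock socket is likewise an assumption of this sketch, not something the harness states).

    # Sketch only; $SPDK stands in for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # 1. Start the target in the test namespace without hugepages (same invocation as nvmfappstart above).
    ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &

    # 2. Configure it over JSON-RPC (the same calls rpc_cmd issues in the trace that follows),
    #    assuming the target's default RPC socket.
    $SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    $SPDK/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # 3. Run bdevio against the exported namespace, also without hugepages; gen_nvmf_target_json
    #    is the harness helper from nvmf/common.sh that emits the attach-controller JSON seen below.
    $SPDK/test/bdev/bdevio/bdevio --json <(gen_nvmf_target_json) --no-huge -s 1024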
00:21:49.391 [2024-11-17 09:21:54.143658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:49.391 [2024-11-17 09:21:54.143716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:21:49.391 [2024-11-17 09:21:54.143768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:49.391 [2024-11-17 09:21:54.143774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:21:49.957 09:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:49.957 09:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:21:49.957 09:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:49.957 09:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:49.957 09:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:49.957 09:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:49.957 09:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:49.957 09:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.957 09:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:49.957 [2024-11-17 09:21:54.827328] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:49.957 09:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.957 09:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:49.957 09:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.957 09:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:49.957 Malloc0 00:21:49.957 09:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.957 09:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:49.957 09:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.957 09:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:49.957 09:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.957 09:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:49.957 09:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.957 09:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:49.957 09:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.957 09:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:21:49.957 09:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.957 09:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:49.957 [2024-11-17 09:21:54.918227] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:49.957 09:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.957 09:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:21:49.957 09:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:21:49.957 09:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:21:49.957 09:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:21:49.957 09:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:49.957 09:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:49.957 { 00:21:49.957 "params": { 00:21:49.957 "name": "Nvme$subsystem", 00:21:49.957 "trtype": "$TEST_TRANSPORT", 00:21:49.957 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:49.957 "adrfam": "ipv4", 00:21:49.957 "trsvcid": "$NVMF_PORT", 00:21:49.957 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:49.957 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:49.957 "hdgst": ${hdgst:-false}, 00:21:49.957 "ddgst": ${ddgst:-false} 00:21:49.957 }, 00:21:49.957 "method": "bdev_nvme_attach_controller" 00:21:49.957 } 00:21:49.957 EOF 00:21:49.957 )") 00:21:49.957 09:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:21:49.957 09:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:21:49.957 09:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:21:49.957 09:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:49.957 "params": { 00:21:49.957 "name": "Nvme1", 00:21:49.957 "trtype": "tcp", 00:21:49.957 "traddr": "10.0.0.2", 00:21:49.957 "adrfam": "ipv4", 00:21:49.957 "trsvcid": "4420", 00:21:49.957 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:49.957 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:49.957 "hdgst": false, 00:21:49.957 "ddgst": false 00:21:49.957 }, 00:21:49.957 "method": "bdev_nvme_attach_controller" 00:21:49.957 }' 00:21:50.216 [2024-11-17 09:21:55.007827] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:21:50.216 [2024-11-17 09:21:55.007961] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2992252 ] 00:21:50.216 [2024-11-17 09:21:55.161442] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:50.474 [2024-11-17 09:21:55.304528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:50.474 [2024-11-17 09:21:55.304539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:50.474 [2024-11-17 09:21:55.304550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:51.041 I/O targets: 00:21:51.041 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:21:51.041 00:21:51.041 00:21:51.041 CUnit - A unit testing framework for C - Version 2.1-3 00:21:51.041 http://cunit.sourceforge.net/ 00:21:51.041 00:21:51.041 00:21:51.041 Suite: bdevio tests on: Nvme1n1 00:21:51.041 Test: blockdev write read block ...passed 00:21:51.041 Test: blockdev write zeroes read block ...passed 00:21:51.041 Test: blockdev write zeroes read no split ...passed 00:21:51.041 Test: blockdev write zeroes read split ...passed 00:21:51.041 Test: blockdev write zeroes read split partial ...passed 00:21:51.041 Test: blockdev reset ...[2024-11-17 09:21:55.952351] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:21:51.041 [2024-11-17 09:21:55.952542] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f1100 (9): Bad file descriptor 00:21:51.041 [2024-11-17 09:21:55.972168] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:21:51.041 passed 00:21:51.041 Test: blockdev write read 8 blocks ...passed 00:21:51.041 Test: blockdev write read size > 128k ...passed 00:21:51.041 Test: blockdev write read invalid size ...passed 00:21:51.041 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:51.041 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:51.041 Test: blockdev write read max offset ...passed 00:21:51.300 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:51.300 Test: blockdev writev readv 8 blocks ...passed 00:21:51.300 Test: blockdev writev readv 30 x 1block ...passed 00:21:51.300 Test: blockdev writev readv block ...passed 00:21:51.300 Test: blockdev writev readv size > 128k ...passed 00:21:51.300 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:51.300 Test: blockdev comparev and writev ...[2024-11-17 09:21:56.189921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:51.300 [2024-11-17 09:21:56.190008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:51.300 [2024-11-17 09:21:56.190053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:51.300 [2024-11-17 09:21:56.190082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:51.300 [2024-11-17 09:21:56.190552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:51.300 [2024-11-17 09:21:56.190585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:51.300 [2024-11-17 09:21:56.190619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:51.300 [2024-11-17 09:21:56.190650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:51.300 [2024-11-17 09:21:56.191130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:51.300 [2024-11-17 09:21:56.191164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:51.300 [2024-11-17 09:21:56.191204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:51.300 [2024-11-17 09:21:56.191229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:51.300 [2024-11-17 09:21:56.191731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:51.300 [2024-11-17 09:21:56.191764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:51.300 [2024-11-17 09:21:56.191797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:51.300 [2024-11-17 09:21:56.191821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:51.300 passed 00:21:51.300 Test: blockdev nvme passthru rw ...passed 00:21:51.300 Test: blockdev nvme passthru vendor specific ...[2024-11-17 09:21:56.274793] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:51.300 [2024-11-17 09:21:56.274849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:51.300 [2024-11-17 09:21:56.275107] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:51.300 [2024-11-17 09:21:56.275139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:51.300 [2024-11-17 09:21:56.275328] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:51.300 [2024-11-17 09:21:56.275360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:51.300 [2024-11-17 09:21:56.275569] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:51.300 [2024-11-17 09:21:56.275601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:51.300 passed 00:21:51.300 Test: blockdev nvme admin passthru ...passed 00:21:51.558 Test: blockdev copy ...passed 00:21:51.558 00:21:51.558 Run Summary: Type Total Ran Passed Failed Inactive 00:21:51.558 suites 1 1 n/a 0 0 00:21:51.558 tests 23 23 23 0 0 00:21:51.558 asserts 152 152 152 0 n/a 00:21:51.558 00:21:51.558 Elapsed time = 1.156 seconds 00:21:52.125 09:21:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:52.125 09:21:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.125 09:21:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:52.125 09:21:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.125 09:21:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:21:52.125 09:21:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:21:52.125 09:21:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:52.125 09:21:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:21:52.125 09:21:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:52.125 09:21:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:21:52.125 09:21:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:52.125 09:21:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:52.125 rmmod nvme_tcp 00:21:52.125 rmmod nvme_fabrics 00:21:52.125 rmmod nvme_keyring 00:21:52.125 09:21:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:52.125 09:21:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:21:52.125 09:21:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:21:52.125 09:21:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 2992096 ']' 00:21:52.125 09:21:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 2992096 00:21:52.125 09:21:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 2992096 ']' 00:21:52.126 09:21:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 2992096 00:21:52.126 09:21:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:21:52.126 09:21:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:52.126 09:21:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2992096 00:21:52.126 09:21:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:21:52.126 09:21:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:21:52.126 09:21:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2992096' 00:21:52.126 killing process with pid 2992096 00:21:52.126 09:21:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 2992096 00:21:52.126 09:21:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 2992096 00:21:53.061 09:21:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:53.061 09:21:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:53.061 09:21:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:53.061 09:21:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:21:53.061 09:21:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:21:53.061 09:21:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:53.061 09:21:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:21:53.061 09:21:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:53.061 09:21:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:53.061 09:21:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:53.061 09:21:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:53.061 09:21:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:55.595 09:21:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:55.595 00:21:55.595 real 0m8.640s 00:21:55.595 user 0m19.358s 00:21:55.595 sys 0m2.912s 00:21:55.595 09:21:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:55.595 09:21:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:21:55.595 ************************************ 00:21:55.595 END TEST nvmf_bdevio_no_huge 00:21:55.595 ************************************ 00:21:55.595 09:22:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:55.595 09:22:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:55.595 09:22:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:55.595 09:22:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:55.595 ************************************ 00:21:55.595 START TEST nvmf_tls 00:21:55.595 ************************************ 00:21:55.595 09:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:55.595 * Looking for test storage... 00:21:55.595 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:55.595 09:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:55.595 09:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:21:55.595 09:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:55.595 09:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:55.595 09:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:55.595 09:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:55.595 09:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:55.595 09:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:21:55.595 09:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:21:55.595 09:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:21:55.595 09:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:21:55.595 09:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:21:55.595 09:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:21:55.595 09:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:21:55.595 09:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:55.595 09:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:21:55.595 09:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:21:55.595 09:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:55.595 09:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:55.595 09:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:21:55.595 09:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:21:55.595 09:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:55.595 09:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:21:55.595 09:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:21:55.595 09:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:21:55.595 09:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:21:55.595 09:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:55.595 09:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:21:55.595 09:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:21:55.595 09:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:55.595 09:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:55.595 09:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:21:55.595 09:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:55.595 09:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:55.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:55.595 --rc genhtml_branch_coverage=1 00:21:55.595 --rc genhtml_function_coverage=1 00:21:55.595 --rc genhtml_legend=1 00:21:55.595 --rc geninfo_all_blocks=1 00:21:55.595 --rc geninfo_unexecuted_blocks=1 00:21:55.595 00:21:55.595 ' 00:21:55.595 09:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:55.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:55.595 --rc genhtml_branch_coverage=1 00:21:55.595 --rc genhtml_function_coverage=1 00:21:55.595 --rc genhtml_legend=1 00:21:55.595 --rc geninfo_all_blocks=1 00:21:55.595 --rc geninfo_unexecuted_blocks=1 00:21:55.595 00:21:55.595 ' 00:21:55.595 09:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:55.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:55.595 --rc genhtml_branch_coverage=1 00:21:55.595 --rc genhtml_function_coverage=1 00:21:55.595 --rc genhtml_legend=1 00:21:55.595 --rc geninfo_all_blocks=1 00:21:55.595 --rc geninfo_unexecuted_blocks=1 00:21:55.595 00:21:55.595 ' 00:21:55.595 09:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:55.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:55.596 --rc genhtml_branch_coverage=1 00:21:55.596 --rc genhtml_function_coverage=1 00:21:55.596 --rc genhtml_legend=1 00:21:55.596 --rc geninfo_all_blocks=1 00:21:55.596 --rc geninfo_unexecuted_blocks=1 00:21:55.596 00:21:55.596 ' 00:21:55.596 09:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:55.596 09:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:21:55.596 09:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:21:55.596 09:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:55.596 09:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:55.596 09:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:55.596 09:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:55.596 09:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:55.596 09:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:55.596 09:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:55.596 09:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:55.596 09:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:55.596 09:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:55.596 09:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:55.596 09:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:55.596 09:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:55.596 09:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:55.596 09:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:55.596 09:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:55.596 09:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:21:55.596 09:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:55.596 09:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:55.596 09:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:55.596 09:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.596 09:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.596 09:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.596 09:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:21:55.596 09:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.596 09:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:21:55.596 09:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:55.596 09:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:55.596 09:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:55.596 09:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:55.596 09:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:55.596 09:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:55.596 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:55.596 09:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:55.596 09:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:55.596 09:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:55.596 09:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:55.596 09:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:21:55.596 09:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:55.596 09:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:55.596 09:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:55.596 09:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:55.596 09:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:55.596 09:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:55.596 09:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:55.596 09:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:55.596 09:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:55.596 09:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:55.596 09:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:21:55.596 09:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:57.497 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:57.497 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:21:57.497 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:57.497 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:57.497 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:57.497 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:57.497 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:57.497 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:21:57.497 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:57.497 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:21:57.497 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:21:57.497 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:21:57.497 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:21:57.497 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:21:57.497 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:21:57.497 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:57.497 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:57.497 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:57.497 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:57.497 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:57.497 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:21:57.497 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:57.497 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:57.497 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:57.497 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:57.497 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:57.497 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:57.497 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:57.497 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:57.497 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:57.497 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:57.497 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:57.497 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:57.497 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:57.497 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:57.497 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:57.497 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:57.497 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:57.497 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:57.497 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:57.497 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:57.497 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:57.497 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:57.497 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:57.497 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:57.497 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:57.497 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:57.497 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:57.497 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:57.497 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:57.497 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:57.497 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:57.497 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:57.497 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:57.497 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:57.497 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:57.497 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:57.497 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:57.497 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:57.497 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:57.497 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:57.497 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:57.497 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:57.497 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:57.497 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:57.497 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:57.497 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:57.497 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:57.497 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:57.497 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:57.497 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:57.497 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:57.497 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:57.497 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:21:57.497 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:57.497 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:57.497 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:57.498 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:57.498 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:57.498 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:57.498 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:57.498 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:57.498 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:57.498 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:57.498 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:57.498 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:21:57.498 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:57.498 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:57.498 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:57.498 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:57.498 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:57.498 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:57.498 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:57.498 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:57.498 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:57.498 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:57.498 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:57.498 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:57.498 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:57.498 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:57.498 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:57.498 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.138 ms 00:21:57.498 00:21:57.498 --- 10.0.0.2 ping statistics --- 00:21:57.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:57.498 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:21:57.498 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:57.498 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:57.498 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:21:57.498 00:21:57.498 --- 10.0.0.1 ping statistics --- 00:21:57.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:57.498 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:21:57.498 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:57.498 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:21:57.498 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:57.498 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:57.498 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:57.498 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:57.498 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:57.498 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:57.498 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:57.498 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:21:57.498 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:57.498 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:57.498 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:57.498 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2994464 00:21:57.498 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2994464 00:21:57.498 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2994464 ']' 00:21:57.498 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:57.498 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:21:57.498 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:57.498 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:57.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:57.498 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:57.498 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:57.498 [2024-11-17 09:22:02.423626] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:21:57.498 [2024-11-17 09:22:02.423772] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:57.756 [2024-11-17 09:22:02.582819] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:57.756 [2024-11-17 09:22:02.706653] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:57.756 [2024-11-17 09:22:02.706756] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:57.756 [2024-11-17 09:22:02.706779] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:57.756 [2024-11-17 09:22:02.706799] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:57.756 [2024-11-17 09:22:02.706815] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:57.756 [2024-11-17 09:22:02.708217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:58.690 09:22:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:58.690 09:22:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:58.690 09:22:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:58.690 09:22:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:58.690 09:22:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:58.690 09:22:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:58.690 09:22:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:21:58.690 09:22:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:21:58.948 true 00:21:58.948 09:22:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:58.948 09:22:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:21:59.206 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:21:59.206 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:21:59.206 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:59.464 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:59.464 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:21:59.722 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:21:59.722 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:21:59.722 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:21:59.980 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:59.980 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:22:00.238 09:22:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:22:00.238 09:22:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:22:00.238 09:22:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:00.238 09:22:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:22:00.496 09:22:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:22:00.496 09:22:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:22:00.496 09:22:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:22:00.754 09:22:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:00.754 09:22:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:22:01.012 09:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:22:01.012 09:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:22:01.012 09:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:22:01.270 09:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:01.270 09:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:22:01.836 09:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:22:01.836 09:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:22:01.836 09:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:22:01.836 09:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:22:01.836 09:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:22:01.836 09:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:01.836 09:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:22:01.836 09:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:22:01.836 09:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:22:01.836 09:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:01.836 09:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:22:01.836 09:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:22:01.836 09:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # local prefix key digest 00:22:01.836 09:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:01.836 09:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:22:01.836 09:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:22:01.836 09:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:22:01.836 09:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:01.836 09:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:22:01.836 09:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.58BPkqZ8HK 00:22:01.836 09:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:22:01.836 09:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.cmCuwfiIJG 00:22:01.836 09:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:01.836 09:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:01.836 09:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.58BPkqZ8HK 00:22:01.836 09:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.cmCuwfiIJG 00:22:01.836 09:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:02.094 09:22:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:22:02.660 09:22:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.58BPkqZ8HK 00:22:02.660 09:22:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.58BPkqZ8HK 00:22:02.660 09:22:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:02.918 [2024-11-17 09:22:07.877834] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:02.918 09:22:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:03.175 09:22:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:03.433 [2024-11-17 09:22:08.407290] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:03.433 [2024-11-17 09:22:08.407649] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:03.433 09:22:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:03.999 malloc0 00:22:03.999 09:22:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:03.999 09:22:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.58BPkqZ8HK 00:22:04.256 09:22:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:04.514 09:22:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.58BPkqZ8HK 00:22:16.811 Initializing NVMe Controllers 00:22:16.811 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:16.811 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:16.811 Initialization complete. Launching workers. 00:22:16.811 ======================================================== 00:22:16.811 Latency(us) 00:22:16.811 Device Information : IOPS MiB/s Average min max 00:22:16.811 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5598.82 21.87 11436.18 2015.68 13543.60 00:22:16.811 ======================================================== 00:22:16.811 Total : 5598.82 21.87 11436.18 2015.68 13543.60 00:22:16.811 00:22:16.811 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.58BPkqZ8HK 00:22:16.811 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:16.811 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:16.811 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:16.811 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.58BPkqZ8HK 00:22:16.811 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:16.811 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2996615 00:22:16.811 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:16.811 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:16.811 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2996615 /var/tmp/bdevperf.sock 00:22:16.811 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2996615 ']' 00:22:16.811 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:16.811 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:16.811 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
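For orientation, the target-side TLS setup buried in the xtrace above boils down to the rpc.py sequence below. This is only a sketch collected from the calls visible in the log, not an extra test step; the rpc and key variables are shorthand introduced here for the full script path and the generated /tmp key file shown earlier.

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    key=/tmp/tmp.58BPkqZ8HK   # TLS PSK interchange file written earlier in the log

    # require TLS 1.3 on the ssl socket implementation before finishing init
    $rpc sock_impl_set_options -i ssl --tls-version 13
    $rpc framework_start_init

    # TCP transport, subsystem, and a TLS-enabled listener (-k)
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k

    # backing namespace plus the host that is allowed to connect with key0
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rpc keyring_file_add_key key0 "$key"
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0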
00:22:16.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:16.811 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:16.811 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:16.811 [2024-11-17 09:22:19.847736] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:22:16.811 [2024-11-17 09:22:19.847874] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2996615 ] 00:22:16.811 [2024-11-17 09:22:19.978316] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:16.811 [2024-11-17 09:22:20.106219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:16.811 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:16.811 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:16.811 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.58BPkqZ8HK 00:22:16.811 09:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:16.811 [2024-11-17 09:22:21.436416] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:16.811 TLSTESTn1 00:22:16.811 09:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:16.811 Running I/O for 10 seconds... 
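The PSK interchange strings printed by format_interchange_psk earlier in the log can be unpacked with standard tools to see their structure; the key value below is copied verbatim from the log, and the sketch assumes a GNU userland (base64, xxd). The third colon-separated field is base64 of the configured key followed by a 4-byte trailer (the interchange format's CRC-32 integrity check, treated as an assumption here; the snippet only prints the raw bytes).

    key='NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:'
    payload=$(cut -d: -f3 <<< "$key")
    base64 -d <<< "$payload" | head -c 32; echo      # 00112233445566778899aabbccddeeff (configured key)
    base64 -d <<< "$payload" | tail -c 4 | xxd -p    # 70244890 (4-byte integrity trailer)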
00:22:18.678 2538.00 IOPS, 9.91 MiB/s [2024-11-17T08:22:25.068Z] 2586.00 IOPS, 10.10 MiB/s [2024-11-17T08:22:26.002Z] 2594.67 IOPS, 10.14 MiB/s [2024-11-17T08:22:26.937Z] 2604.75 IOPS, 10.17 MiB/s [2024-11-17T08:22:27.871Z] 2607.20 IOPS, 10.18 MiB/s [2024-11-17T08:22:28.804Z] 2612.17 IOPS, 10.20 MiB/s [2024-11-17T08:22:29.739Z] 2612.29 IOPS, 10.20 MiB/s [2024-11-17T08:22:31.113Z] 2614.50 IOPS, 10.21 MiB/s [2024-11-17T08:22:32.046Z] 2613.67 IOPS, 10.21 MiB/s [2024-11-17T08:22:32.046Z] 2613.70 IOPS, 10.21 MiB/s 00:22:27.033 Latency(us) 00:22:27.033 [2024-11-17T08:22:32.046Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:27.033 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:27.033 Verification LBA range: start 0x0 length 0x2000 00:22:27.033 TLSTESTn1 : 10.04 2617.00 10.22 0.00 0.00 48807.11 9029.40 47380.10 00:22:27.033 [2024-11-17T08:22:32.046Z] =================================================================================================================== 00:22:27.033 [2024-11-17T08:22:32.046Z] Total : 2617.00 10.22 0.00 0.00 48807.11 9029.40 47380.10 00:22:27.033 { 00:22:27.033 "results": [ 00:22:27.033 { 00:22:27.033 "job": "TLSTESTn1", 00:22:27.033 "core_mask": "0x4", 00:22:27.033 "workload": "verify", 00:22:27.033 "status": "finished", 00:22:27.033 "verify_range": { 00:22:27.033 "start": 0, 00:22:27.033 "length": 8192 00:22:27.033 }, 00:22:27.033 "queue_depth": 128, 00:22:27.033 "io_size": 4096, 00:22:27.033 "runtime": 10.035527, 00:22:27.033 "iops": 2617.002574951968, 00:22:27.033 "mibps": 10.222666308406126, 00:22:27.033 "io_failed": 0, 00:22:27.033 "io_timeout": 0, 00:22:27.033 "avg_latency_us": 48807.11162556533, 00:22:27.033 "min_latency_us": 9029.404444444444, 00:22:27.033 "max_latency_us": 47380.10074074074 00:22:27.033 } 00:22:27.033 ], 00:22:27.033 "core_count": 1 00:22:27.033 } 00:22:27.033 09:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:27.034 09:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2996615 00:22:27.034 09:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2996615 ']' 00:22:27.034 09:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2996615 00:22:27.034 09:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:27.034 09:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:27.034 09:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2996615 00:22:27.034 09:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:27.034 09:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:27.034 09:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2996615' 00:22:27.034 killing process with pid 2996615 00:22:27.034 09:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2996615 00:22:27.034 Received shutdown signal, test time was about 10.000000 seconds 00:22:27.034 00:22:27.034 Latency(us) 00:22:27.034 [2024-11-17T08:22:32.047Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:27.034 [2024-11-17T08:22:32.047Z] 
=================================================================================================================== 00:22:27.034 [2024-11-17T08:22:32.047Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:27.034 09:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2996615 00:22:27.601 09:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.cmCuwfiIJG 00:22:27.601 09:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:22:27.601 09:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.cmCuwfiIJG 00:22:27.601 09:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:22:27.601 09:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:27.601 09:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:22:27.601 09:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:27.601 09:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.cmCuwfiIJG 00:22:27.601 09:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:27.601 09:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:27.601 09:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:27.601 09:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.cmCuwfiIJG 00:22:27.601 09:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:27.601 09:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2998071 00:22:27.601 09:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:27.601 09:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:27.601 09:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2998071 /var/tmp/bdevperf.sock 00:22:27.601 09:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2998071 ']' 00:22:27.601 09:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:27.601 09:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:27.601 09:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:27.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:27.601 09:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:27.601 09:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:27.859 [2024-11-17 09:22:32.684400] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:22:27.859 [2024-11-17 09:22:32.684533] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2998071 ] 00:22:27.859 [2024-11-17 09:22:32.819331] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:28.117 [2024-11-17 09:22:32.939581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:28.683 09:22:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:28.683 09:22:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:28.683 09:22:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.cmCuwfiIJG 00:22:29.248 09:22:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:29.248 [2024-11-17 09:22:34.239512] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:29.248 [2024-11-17 09:22:34.249730] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:29.248 [2024-11-17 09:22:34.250498] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (107): Transport endpoint is not connected 00:22:29.248 [2024-11-17 09:22:34.251474] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:22:29.248 [2024-11-17 09:22:34.252467] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:22:29.248 [2024-11-17 09:22:34.252508] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:29.248 [2024-11-17 09:22:34.252533] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:22:29.248 [2024-11-17 09:22:34.252573] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:22:29.248 request: 00:22:29.248 { 00:22:29.249 "name": "TLSTEST", 00:22:29.249 "trtype": "tcp", 00:22:29.249 "traddr": "10.0.0.2", 00:22:29.249 "adrfam": "ipv4", 00:22:29.249 "trsvcid": "4420", 00:22:29.249 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:29.249 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:29.249 "prchk_reftag": false, 00:22:29.249 "prchk_guard": false, 00:22:29.249 "hdgst": false, 00:22:29.249 "ddgst": false, 00:22:29.249 "psk": "key0", 00:22:29.249 "allow_unrecognized_csi": false, 00:22:29.249 "method": "bdev_nvme_attach_controller", 00:22:29.249 "req_id": 1 00:22:29.249 } 00:22:29.249 Got JSON-RPC error response 00:22:29.249 response: 00:22:29.249 { 00:22:29.249 "code": -5, 00:22:29.249 "message": "Input/output error" 00:22:29.249 } 00:22:29.507 09:22:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2998071 00:22:29.507 09:22:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2998071 ']' 00:22:29.507 09:22:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2998071 00:22:29.507 09:22:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:29.507 09:22:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:29.507 09:22:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2998071 00:22:29.507 09:22:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:29.507 09:22:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:29.507 09:22:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2998071' 00:22:29.507 killing process with pid 2998071 00:22:29.507 09:22:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2998071 00:22:29.507 Received shutdown signal, test time was about 10.000000 seconds 00:22:29.507 00:22:29.507 Latency(us) 00:22:29.507 [2024-11-17T08:22:34.520Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:29.507 [2024-11-17T08:22:34.520Z] =================================================================================================================== 00:22:29.507 [2024-11-17T08:22:34.520Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:29.507 09:22:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2998071 00:22:30.443 09:22:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:30.443 09:22:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:22:30.443 09:22:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:30.443 09:22:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:30.443 09:22:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:30.443 09:22:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.58BPkqZ8HK 00:22:30.443 09:22:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:22:30.443 09:22:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.58BPkqZ8HK 00:22:30.443 09:22:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:22:30.443 09:22:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:30.443 09:22:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:22:30.443 09:22:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:30.443 09:22:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.58BPkqZ8HK 00:22:30.443 09:22:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:30.443 09:22:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:30.443 09:22:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:22:30.443 09:22:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.58BPkqZ8HK 00:22:30.443 09:22:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:30.443 09:22:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2998358 00:22:30.443 09:22:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:30.443 09:22:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:30.443 09:22:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2998358 /var/tmp/bdevperf.sock 00:22:30.443 09:22:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2998358 ']' 00:22:30.443 09:22:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:30.443 09:22:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:30.443 09:22:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:30.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:30.443 09:22:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:30.443 09:22:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:30.443 [2024-11-17 09:22:35.205629] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:22:30.443 [2024-11-17 09:22:35.205801] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2998358 ] 00:22:30.443 [2024-11-17 09:22:35.358796] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:30.701 [2024-11-17 09:22:35.486884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:31.267 09:22:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:31.267 09:22:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:31.267 09:22:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.58BPkqZ8HK 00:22:31.524 09:22:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:22:31.782 [2024-11-17 09:22:36.712604] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:31.782 [2024-11-17 09:22:36.723679] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:31.782 [2024-11-17 09:22:36.723722] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:31.782 [2024-11-17 09:22:36.723799] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:31.782 [2024-11-17 09:22:36.724555] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (107): Transport endpoint is not connected 00:22:31.783 [2024-11-17 09:22:36.725520] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:22:31.783 [2024-11-17 09:22:36.726524] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:22:31.783 [2024-11-17 09:22:36.726555] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:31.783 [2024-11-17 09:22:36.726590] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:22:31.783 [2024-11-17 09:22:36.726621] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:22:31.783 request: 00:22:31.783 { 00:22:31.783 "name": "TLSTEST", 00:22:31.783 "trtype": "tcp", 00:22:31.783 "traddr": "10.0.0.2", 00:22:31.783 "adrfam": "ipv4", 00:22:31.783 "trsvcid": "4420", 00:22:31.783 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:31.783 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:31.783 "prchk_reftag": false, 00:22:31.783 "prchk_guard": false, 00:22:31.783 "hdgst": false, 00:22:31.783 "ddgst": false, 00:22:31.783 "psk": "key0", 00:22:31.783 "allow_unrecognized_csi": false, 00:22:31.783 "method": "bdev_nvme_attach_controller", 00:22:31.783 "req_id": 1 00:22:31.783 } 00:22:31.783 Got JSON-RPC error response 00:22:31.783 response: 00:22:31.783 { 00:22:31.783 "code": -5, 00:22:31.783 "message": "Input/output error" 00:22:31.783 } 00:22:31.783 09:22:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2998358 00:22:31.783 09:22:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2998358 ']' 00:22:31.783 09:22:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2998358 00:22:31.783 09:22:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:31.783 09:22:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:31.783 09:22:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2998358 00:22:31.783 09:22:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:31.783 09:22:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:31.783 09:22:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2998358' 00:22:31.783 killing process with pid 2998358 00:22:31.783 09:22:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2998358 00:22:31.783 Received shutdown signal, test time was about 10.000000 seconds 00:22:31.783 00:22:31.783 Latency(us) 00:22:31.783 [2024-11-17T08:22:36.796Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:31.783 [2024-11-17T08:22:36.796Z] =================================================================================================================== 00:22:31.783 [2024-11-17T08:22:36.796Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:31.783 09:22:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2998358 00:22:32.716 09:22:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:32.716 09:22:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:22:32.716 09:22:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:32.716 09:22:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:32.717 09:22:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:32.717 09:22:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.58BPkqZ8HK 00:22:32.717 09:22:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:22:32.717 09:22:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.58BPkqZ8HK 00:22:32.717 09:22:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:22:32.717 09:22:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:32.717 09:22:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:22:32.717 09:22:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:32.717 09:22:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.58BPkqZ8HK 00:22:32.717 09:22:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:32.717 09:22:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:22:32.717 09:22:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:32.717 09:22:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.58BPkqZ8HK 00:22:32.717 09:22:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:32.717 09:22:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2998633 00:22:32.717 09:22:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:32.717 09:22:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:32.717 09:22:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2998633 /var/tmp/bdevperf.sock 00:22:32.717 09:22:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2998633 ']' 00:22:32.717 09:22:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:32.717 09:22:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:32.717 09:22:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:32.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:32.717 09:22:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:32.717 09:22:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:32.717 [2024-11-17 09:22:37.656305] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:22:32.717 [2024-11-17 09:22:37.656463] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2998633 ] 00:22:32.975 [2024-11-17 09:22:37.788238] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:32.975 [2024-11-17 09:22:37.906150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:33.909 09:22:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:33.909 09:22:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:33.909 09:22:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.58BPkqZ8HK 00:22:33.909 09:22:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:34.167 [2024-11-17 09:22:39.172288] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:34.425 [2024-11-17 09:22:39.182633] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:34.425 [2024-11-17 09:22:39.182701] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:34.425 [2024-11-17 09:22:39.182801] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:34.425 [2024-11-17 09:22:39.182830] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:34.425 [2024-11-17 09:22:39.183782] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:22:34.425 [2024-11-17 09:22:39.184781] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:22:34.425 [2024-11-17 09:22:39.184816] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:34.425 [2024-11-17 09:22:39.184842] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:22:34.425 [2024-11-17 09:22:39.184869] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:22:34.425 request: 00:22:34.425 { 00:22:34.425 "name": "TLSTEST", 00:22:34.425 "trtype": "tcp", 00:22:34.425 "traddr": "10.0.0.2", 00:22:34.425 "adrfam": "ipv4", 00:22:34.425 "trsvcid": "4420", 00:22:34.425 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:34.425 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:34.425 "prchk_reftag": false, 00:22:34.425 "prchk_guard": false, 00:22:34.425 "hdgst": false, 00:22:34.425 "ddgst": false, 00:22:34.425 "psk": "key0", 00:22:34.425 "allow_unrecognized_csi": false, 00:22:34.425 "method": "bdev_nvme_attach_controller", 00:22:34.425 "req_id": 1 00:22:34.425 } 00:22:34.425 Got JSON-RPC error response 00:22:34.425 response: 00:22:34.425 { 00:22:34.425 "code": -5, 00:22:34.425 "message": "Input/output error" 00:22:34.425 } 00:22:34.425 09:22:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2998633 00:22:34.425 09:22:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2998633 ']' 00:22:34.425 09:22:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2998633 00:22:34.426 09:22:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:34.426 09:22:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:34.426 09:22:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2998633 00:22:34.426 09:22:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:34.426 09:22:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:34.426 09:22:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2998633' 00:22:34.426 killing process with pid 2998633 00:22:34.426 09:22:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2998633 00:22:34.426 Received shutdown signal, test time was about 10.000000 seconds 00:22:34.426 00:22:34.426 Latency(us) 00:22:34.426 [2024-11-17T08:22:39.439Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:34.426 [2024-11-17T08:22:39.439Z] =================================================================================================================== 00:22:34.426 [2024-11-17T08:22:39.439Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:34.426 09:22:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2998633 00:22:35.360 09:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:35.360 09:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:22:35.360 09:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:35.360 09:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:35.360 09:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:35.360 09:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:35.360 09:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:22:35.360 09:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:35.360 
09:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:22:35.360 09:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:35.360 09:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:22:35.360 09:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:35.360 09:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:35.360 09:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:35.360 09:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:35.360 09:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:35.360 09:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:22:35.360 09:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:35.360 09:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2998905 00:22:35.360 09:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:35.360 09:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:35.360 09:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2998905 /var/tmp/bdevperf.sock 00:22:35.360 09:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2998905 ']' 00:22:35.360 09:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:35.360 09:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:35.360 09:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:35.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:35.360 09:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:35.360 09:22:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:35.360 [2024-11-17 09:22:40.140943] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
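This case (target/tls.sh@156) passes an empty string as the key path, so the failure happens one step earlier than in the previous cases; in outline (full attach arguments as in the earlier sketch):

  $ rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 ''
  # rejected immediately: "Non-absolute paths are not allowed" -> JSON-RPC -1 (Operation not permitted)
  $ rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller ... --psk key0
  # with no key0 registered, the attach fails with -126 (Required key not available)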
00:22:35.360 [2024-11-17 09:22:40.141093] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2998905 ] 00:22:35.360 [2024-11-17 09:22:40.279487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:35.618 [2024-11-17 09:22:40.412514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:36.185 09:22:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:36.185 09:22:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:36.185 09:22:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:22:36.442 [2024-11-17 09:22:41.449868] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:22:36.442 [2024-11-17 09:22:41.449958] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:22:36.701 request: 00:22:36.701 { 00:22:36.701 "name": "key0", 00:22:36.701 "path": "", 00:22:36.701 "method": "keyring_file_add_key", 00:22:36.701 "req_id": 1 00:22:36.701 } 00:22:36.701 Got JSON-RPC error response 00:22:36.701 response: 00:22:36.701 { 00:22:36.701 "code": -1, 00:22:36.701 "message": "Operation not permitted" 00:22:36.701 } 00:22:36.701 09:22:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:36.959 [2024-11-17 09:22:41.774897] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:36.959 [2024-11-17 09:22:41.774960] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:22:36.959 request: 00:22:36.959 { 00:22:36.959 "name": "TLSTEST", 00:22:36.959 "trtype": "tcp", 00:22:36.959 "traddr": "10.0.0.2", 00:22:36.959 "adrfam": "ipv4", 00:22:36.959 "trsvcid": "4420", 00:22:36.959 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:36.959 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:36.959 "prchk_reftag": false, 00:22:36.959 "prchk_guard": false, 00:22:36.959 "hdgst": false, 00:22:36.959 "ddgst": false, 00:22:36.959 "psk": "key0", 00:22:36.959 "allow_unrecognized_csi": false, 00:22:36.959 "method": "bdev_nvme_attach_controller", 00:22:36.959 "req_id": 1 00:22:36.959 } 00:22:36.959 Got JSON-RPC error response 00:22:36.959 response: 00:22:36.959 { 00:22:36.959 "code": -126, 00:22:36.959 "message": "Required key not available" 00:22:36.959 } 00:22:36.959 09:22:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2998905 00:22:36.959 09:22:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2998905 ']' 00:22:36.959 09:22:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2998905 00:22:36.959 09:22:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:36.959 09:22:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:36.959 09:22:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
2998905 00:22:36.959 09:22:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:36.959 09:22:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:36.959 09:22:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2998905' 00:22:36.959 killing process with pid 2998905 00:22:36.959 09:22:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2998905 00:22:36.959 Received shutdown signal, test time was about 10.000000 seconds 00:22:36.959 00:22:36.959 Latency(us) 00:22:36.959 [2024-11-17T08:22:41.972Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:36.959 [2024-11-17T08:22:41.972Z] =================================================================================================================== 00:22:36.959 [2024-11-17T08:22:41.972Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:36.960 09:22:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2998905 00:22:37.894 09:22:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:37.894 09:22:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:22:37.894 09:22:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:37.894 09:22:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:37.894 09:22:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:37.894 09:22:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 2994464 00:22:37.894 09:22:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2994464 ']' 00:22:37.894 09:22:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2994464 00:22:37.894 09:22:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:37.894 09:22:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:37.894 09:22:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2994464 00:22:37.894 09:22:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:37.894 09:22:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:37.894 09:22:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2994464' 00:22:37.894 killing process with pid 2994464 00:22:37.894 09:22:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2994464 00:22:37.894 09:22:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2994464 00:22:39.270 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:22:39.270 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:22:39.270 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:22:39.270 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:39.270 09:22:43 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:22:39.270 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:22:39.270 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:22:39.270 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:39.270 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:22:39.270 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.vZujivjOiX 00:22:39.270 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:39.270 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.vZujivjOiX 00:22:39.270 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:22:39.270 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:39.270 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:39.270 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:39.270 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2999443 00:22:39.270 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:39.270 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2999443 00:22:39.270 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2999443 ']' 00:22:39.270 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:39.270 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:39.270 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:39.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:39.270 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:39.270 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:39.270 [2024-11-17 09:22:44.132477] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:22:39.270 [2024-11-17 09:22:44.132626] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:39.270 [2024-11-17 09:22:44.279202] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:39.528 [2024-11-17 09:22:44.410142] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:39.528 [2024-11-17 09:22:44.410244] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
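The long key prepared above is in the NVMe/TCP PSK interchange format; the 02 hash field follows from the digest argument 2 (SHA-384) given to format_interchange_psk. The test then stores it the way the file-based keyring expects; a condensed recap, with the mktemp result fixed to the path from this run:

  $ key_long='NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:'
  $ key_long_path=/tmp/tmp.vZujivjOiX    # mktemp output in this run
  $ echo -n "$key_long" > "$key_long_path"
  $ chmod 0600 "$key_long_path"          # a later case flips this to 0666 and keyring_file_add_key then rejects the file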
00:22:39.528 [2024-11-17 09:22:44.410271] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:39.528 [2024-11-17 09:22:44.410296] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:39.528 [2024-11-17 09:22:44.410315] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:39.528 [2024-11-17 09:22:44.411934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:40.094 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:40.094 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:40.094 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:40.094 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:40.094 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:40.094 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:40.094 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.vZujivjOiX 00:22:40.094 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.vZujivjOiX 00:22:40.094 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:40.392 [2024-11-17 09:22:45.348253] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:40.392 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:40.675 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:40.932 [2024-11-17 09:22:45.865716] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:40.932 [2024-11-17 09:22:45.866047] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:40.932 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:41.190 malloc0 00:22:41.190 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:41.756 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.vZujivjOiX 00:22:42.013 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:42.271 09:22:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.vZujivjOiX 00:22:42.271 09:22:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:22:42.271 09:22:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:42.272 09:22:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:42.272 09:22:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.vZujivjOiX 00:22:42.272 09:22:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:42.272 09:22:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2999812 00:22:42.272 09:22:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:42.272 09:22:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:42.272 09:22:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2999812 /var/tmp/bdevperf.sock 00:22:42.272 09:22:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2999812 ']' 00:22:42.272 09:22:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:42.272 09:22:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:42.272 09:22:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:42.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:42.272 09:22:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:42.272 09:22:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:42.272 [2024-11-17 09:22:47.147595] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
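Before this client comes up, the trace above (target/tls.sh@166, setup_nvmf_tgt) configured the TLS-enabled target; condensed to the bare RPC sequence (these calls go to the target's default RPC socket, unlike the bdevperf calls, which use -s /var/tmp/bdevperf.sock):

  $ rpc.py nvmf_create_transport -t tcp -o
  $ rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $ rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  $ rpc.py bdev_malloc_create 32 4096 -b malloc0
  $ rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $ rpc.py keyring_file_add_key key0 /tmp/tmp.vZujivjOiX
  $ rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

With the listener created with -k and host1 mapped to key0, the attach from this bdevperf instance is expected to succeed, which the TLSTESTn1 run below confirms.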
00:22:42.272 [2024-11-17 09:22:47.147767] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2999812 ] 00:22:42.530 [2024-11-17 09:22:47.285805] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:42.530 [2024-11-17 09:22:47.409698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:43.463 09:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:43.463 09:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:43.463 09:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.vZujivjOiX 00:22:43.463 09:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:43.721 [2024-11-17 09:22:48.632897] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:43.721 TLSTESTn1 00:22:43.978 09:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:43.978 Running I/O for 10 seconds... 00:22:45.846 2560.00 IOPS, 10.00 MiB/s [2024-11-17T08:22:52.233Z] 2591.50 IOPS, 10.12 MiB/s [2024-11-17T08:22:53.167Z] 2611.00 IOPS, 10.20 MiB/s [2024-11-17T08:22:54.102Z] 2623.50 IOPS, 10.25 MiB/s [2024-11-17T08:22:55.036Z] 2629.20 IOPS, 10.27 MiB/s [2024-11-17T08:22:55.970Z] 2633.33 IOPS, 10.29 MiB/s [2024-11-17T08:22:56.904Z] 2635.14 IOPS, 10.29 MiB/s [2024-11-17T08:22:58.278Z] 2637.25 IOPS, 10.30 MiB/s [2024-11-17T08:22:59.212Z] 2642.89 IOPS, 10.32 MiB/s [2024-11-17T08:22:59.212Z] 2646.70 IOPS, 10.34 MiB/s 00:22:54.199 Latency(us) 00:22:54.199 [2024-11-17T08:22:59.212Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:54.199 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:54.199 Verification LBA range: start 0x0 length 0x2000 00:22:54.199 TLSTESTn1 : 10.03 2651.61 10.36 0.00 0.00 48183.13 8009.96 62137.84 00:22:54.199 [2024-11-17T08:22:59.212Z] =================================================================================================================== 00:22:54.199 [2024-11-17T08:22:59.212Z] Total : 2651.61 10.36 0.00 0.00 48183.13 8009.96 62137.84 00:22:54.199 { 00:22:54.199 "results": [ 00:22:54.199 { 00:22:54.199 "job": "TLSTESTn1", 00:22:54.199 "core_mask": "0x4", 00:22:54.199 "workload": "verify", 00:22:54.199 "status": "finished", 00:22:54.199 "verify_range": { 00:22:54.199 "start": 0, 00:22:54.199 "length": 8192 00:22:54.199 }, 00:22:54.199 "queue_depth": 128, 00:22:54.199 "io_size": 4096, 00:22:54.199 "runtime": 10.029371, 00:22:54.199 "iops": 2651.611950540069, 00:22:54.199 "mibps": 10.357859181797144, 00:22:54.199 "io_failed": 0, 00:22:54.199 "io_timeout": 0, 00:22:54.199 "avg_latency_us": 48183.13072544907, 00:22:54.199 "min_latency_us": 8009.955555555555, 00:22:54.199 "max_latency_us": 62137.83703703704 00:22:54.199 } 00:22:54.199 ], 00:22:54.199 
"core_count": 1 00:22:54.199 } 00:22:54.199 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:54.199 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2999812 00:22:54.199 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2999812 ']' 00:22:54.199 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2999812 00:22:54.199 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:54.199 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:54.199 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2999812 00:22:54.200 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:54.200 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:54.200 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2999812' 00:22:54.200 killing process with pid 2999812 00:22:54.200 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2999812 00:22:54.200 Received shutdown signal, test time was about 10.000000 seconds 00:22:54.200 00:22:54.200 Latency(us) 00:22:54.200 [2024-11-17T08:22:59.213Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:54.200 [2024-11-17T08:22:59.213Z] =================================================================================================================== 00:22:54.200 [2024-11-17T08:22:59.213Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:54.200 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2999812 00:22:55.133 09:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.vZujivjOiX 00:22:55.133 09:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.vZujivjOiX 00:22:55.133 09:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:22:55.133 09:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.vZujivjOiX 00:22:55.133 09:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:22:55.133 09:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:55.133 09:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:22:55.133 09:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:55.133 09:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.vZujivjOiX 00:22:55.133 09:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:55.133 09:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:55.133 09:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 
00:22:55.133 09:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.vZujivjOiX 00:22:55.133 09:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:55.133 09:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3001318 00:22:55.133 09:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:55.133 09:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:55.133 09:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3001318 /var/tmp/bdevperf.sock 00:22:55.133 09:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3001318 ']' 00:22:55.133 09:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:55.133 09:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:55.133 09:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:55.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:55.133 09:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:55.133 09:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:55.133 [2024-11-17 09:22:59.885362] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
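This bdevperf instance exercises the key-file permission check (target/tls.sh@171-172): the key file was chmod'ed to 0666 just above, so registering it is expected to fail and the subsequent attach has no key to use; in outline:

  $ chmod 0666 /tmp/tmp.vZujivjOiX
  $ rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.vZujivjOiX
  # "Invalid permissions for key file '/tmp/tmp.vZujivjOiX': 0100666" -> JSON-RPC -1 (Operation not permitted)
  $ rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller ... --psk key0
  # fails with -126 (Required key not available)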
00:22:55.133 [2024-11-17 09:22:59.885520] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3001318 ] 00:22:55.133 [2024-11-17 09:23:00.021122] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:55.392 [2024-11-17 09:23:00.147860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:55.958 09:23:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:55.958 09:23:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:55.958 09:23:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.vZujivjOiX 00:22:56.216 [2024-11-17 09:23:01.184267] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.vZujivjOiX': 0100666 00:22:56.216 [2024-11-17 09:23:01.184320] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:22:56.216 request: 00:22:56.216 { 00:22:56.216 "name": "key0", 00:22:56.216 "path": "/tmp/tmp.vZujivjOiX", 00:22:56.216 "method": "keyring_file_add_key", 00:22:56.216 "req_id": 1 00:22:56.216 } 00:22:56.216 Got JSON-RPC error response 00:22:56.216 response: 00:22:56.216 { 00:22:56.216 "code": -1, 00:22:56.216 "message": "Operation not permitted" 00:22:56.216 } 00:22:56.216 09:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:56.474 [2024-11-17 09:23:01.449127] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:56.474 [2024-11-17 09:23:01.449203] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:22:56.474 request: 00:22:56.474 { 00:22:56.474 "name": "TLSTEST", 00:22:56.474 "trtype": "tcp", 00:22:56.474 "traddr": "10.0.0.2", 00:22:56.474 "adrfam": "ipv4", 00:22:56.474 "trsvcid": "4420", 00:22:56.474 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:56.474 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:56.474 "prchk_reftag": false, 00:22:56.474 "prchk_guard": false, 00:22:56.474 "hdgst": false, 00:22:56.474 "ddgst": false, 00:22:56.474 "psk": "key0", 00:22:56.474 "allow_unrecognized_csi": false, 00:22:56.474 "method": "bdev_nvme_attach_controller", 00:22:56.474 "req_id": 1 00:22:56.474 } 00:22:56.474 Got JSON-RPC error response 00:22:56.474 response: 00:22:56.474 { 00:22:56.474 "code": -126, 00:22:56.474 "message": "Required key not available" 00:22:56.474 } 00:22:56.474 09:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3001318 00:22:56.474 09:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3001318 ']' 00:22:56.474 09:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3001318 00:22:56.474 09:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:56.474 09:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:56.474 09:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3001318 00:22:56.733 09:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:56.733 09:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:56.733 09:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3001318' 00:22:56.733 killing process with pid 3001318 00:22:56.733 09:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3001318 00:22:56.733 Received shutdown signal, test time was about 10.000000 seconds 00:22:56.733 00:22:56.733 Latency(us) 00:22:56.733 [2024-11-17T08:23:01.746Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:56.733 [2024-11-17T08:23:01.746Z] =================================================================================================================== 00:22:56.733 [2024-11-17T08:23:01.746Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:56.733 09:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3001318 00:22:57.299 09:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:57.299 09:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:22:57.299 09:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:57.299 09:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:57.299 09:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:57.299 09:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 2999443 00:22:57.299 09:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2999443 ']' 00:22:57.299 09:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2999443 00:22:57.299 09:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:57.299 09:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:57.299 09:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2999443 00:22:57.557 09:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:57.557 09:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:57.557 09:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2999443' 00:22:57.557 killing process with pid 2999443 00:22:57.557 09:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2999443 00:22:57.557 09:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2999443 00:22:58.492 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:22:58.492 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:58.492 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:58.492 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:58.493 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # 
nvmfpid=3001735 00:22:58.493 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:58.493 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3001735 00:22:58.493 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3001735 ']' 00:22:58.493 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:58.493 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:58.493 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:58.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:58.493 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:58.493 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:58.751 [2024-11-17 09:23:03.589015] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:22:58.751 [2024-11-17 09:23:03.589176] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:58.751 [2024-11-17 09:23:03.740555] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:59.009 [2024-11-17 09:23:03.877088] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:59.009 [2024-11-17 09:23:03.877191] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:59.009 [2024-11-17 09:23:03.877218] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:59.009 [2024-11-17 09:23:03.877249] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:59.009 [2024-11-17 09:23:03.877270] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
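The two failures above both trace back to the key file's mode: 0100666 is a regular file with 0666 permissions, so the keyring check rejects it and the subsequent bdev_nvme_attach_controller has no PSK to load (code -126, "Required key not available"). A minimal sketch of the fix the test applies further down in this log, reusing the exact paths shown in the commands above:

    # Restrict the PSK file to its owner, then retry the keyring add.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    chmod 0600 /tmp/tmp.vZujivjOiX
    $RPC -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.vZujivjOiX
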
00:22:59.009 [2024-11-17 09:23:03.878959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:59.576 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:59.576 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:59.576 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:59.576 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:59.576 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:59.835 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:59.835 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.vZujivjOiX 00:22:59.835 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:22:59.835 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.vZujivjOiX 00:22:59.835 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:22:59.835 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:59.835 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:22:59.835 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:59.835 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.vZujivjOiX 00:22:59.835 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.vZujivjOiX 00:22:59.835 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:59.835 [2024-11-17 09:23:04.838533] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:00.093 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:00.351 09:23:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:00.610 [2024-11-17 09:23:05.424122] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:00.610 [2024-11-17 09:23:05.424518] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:00.610 09:23:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:00.869 malloc0 00:23:00.869 09:23:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:01.127 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.vZujivjOiX 00:23:01.385 [2024-11-17 
09:23:06.263943] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.vZujivjOiX': 0100666 00:23:01.385 [2024-11-17 09:23:06.263997] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:01.385 request: 00:23:01.385 { 00:23:01.385 "name": "key0", 00:23:01.385 "path": "/tmp/tmp.vZujivjOiX", 00:23:01.385 "method": "keyring_file_add_key", 00:23:01.385 "req_id": 1 00:23:01.385 } 00:23:01.385 Got JSON-RPC error response 00:23:01.385 response: 00:23:01.386 { 00:23:01.386 "code": -1, 00:23:01.386 "message": "Operation not permitted" 00:23:01.386 } 00:23:01.386 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:01.645 [2024-11-17 09:23:06.528796] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:23:01.645 [2024-11-17 09:23:06.528893] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:23:01.645 request: 00:23:01.645 { 00:23:01.645 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:01.645 "host": "nqn.2016-06.io.spdk:host1", 00:23:01.645 "psk": "key0", 00:23:01.645 "method": "nvmf_subsystem_add_host", 00:23:01.645 "req_id": 1 00:23:01.645 } 00:23:01.645 Got JSON-RPC error response 00:23:01.645 response: 00:23:01.645 { 00:23:01.645 "code": -32603, 00:23:01.645 "message": "Internal error" 00:23:01.645 } 00:23:01.645 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:01.645 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:01.645 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:01.645 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:01.645 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 3001735 00:23:01.645 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3001735 ']' 00:23:01.645 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3001735 00:23:01.645 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:01.645 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:01.645 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3001735 00:23:01.645 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:01.645 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:01.645 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3001735' 00:23:01.645 killing process with pid 3001735 00:23:01.645 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3001735 00:23:01.645 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3001735 00:23:03.017 09:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.vZujivjOiX 00:23:03.017 09:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:23:03.017 09:23:07 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:03.017 09:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:03.017 09:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:03.017 09:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3002292 00:23:03.017 09:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:03.017 09:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3002292 00:23:03.017 09:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3002292 ']' 00:23:03.017 09:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:03.017 09:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:03.017 09:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:03.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:03.017 09:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:03.017 09:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:03.017 [2024-11-17 09:23:07.875581] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:23:03.017 [2024-11-17 09:23:07.875748] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:03.276 [2024-11-17 09:23:08.029264] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:03.276 [2024-11-17 09:23:08.166843] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:03.276 [2024-11-17 09:23:08.166935] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:03.276 [2024-11-17 09:23:08.166961] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:03.276 [2024-11-17 09:23:08.166986] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:03.276 [2024-11-17 09:23:08.167006] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
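With the key file now chmod'd to 0600 and a fresh nvmf_tgt (pid 3002292) coming up, the script's setup_nvmf_tgt helper drives roughly the RPC sequence below, condensed from the commands visible in the output that follows (these calls pass no -s option, so they go to the target's default /var/tmp/spdk.sock socket):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o                          # TCP transport init
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k = TLS listener
    $RPC bdev_malloc_create 32 4096 -b malloc0                    # backing namespace
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $RPC keyring_file_add_key key0 /tmp/tmp.vZujivjOiX            # succeeds now that the file is 0600
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
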
00:23:03.276 [2024-11-17 09:23:08.168626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:03.843 09:23:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:03.843 09:23:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:03.843 09:23:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:03.843 09:23:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:03.843 09:23:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:03.843 09:23:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:03.843 09:23:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.vZujivjOiX 00:23:03.843 09:23:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.vZujivjOiX 00:23:03.843 09:23:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:04.101 [2024-11-17 09:23:09.091258] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:04.101 09:23:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:04.667 09:23:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:04.667 [2024-11-17 09:23:09.636800] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:04.667 [2024-11-17 09:23:09.637184] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:04.667 09:23:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:05.233 malloc0 00:23:05.233 09:23:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:05.491 09:23:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.vZujivjOiX 00:23:05.749 09:23:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:06.007 09:23:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=3002595 00:23:06.007 09:23:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:06.007 09:23:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:06.007 09:23:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 3002595 /var/tmp/bdevperf.sock 00:23:06.007 09:23:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 3002595 ']' 00:23:06.007 09:23:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:06.008 09:23:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:06.008 09:23:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:06.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:06.008 09:23:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:06.008 09:23:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:06.266 [2024-11-17 09:23:11.019565] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:23:06.266 [2024-11-17 09:23:11.019715] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3002595 ] 00:23:06.266 [2024-11-17 09:23:11.158420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:06.524 [2024-11-17 09:23:11.280137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:07.103 09:23:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:07.103 09:23:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:07.103 09:23:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.vZujivjOiX 00:23:07.363 09:23:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:07.621 [2024-11-17 09:23:12.536237] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:07.621 TLSTESTn1 00:23:07.879 09:23:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:23:08.137 09:23:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:23:08.137 "subsystems": [ 00:23:08.137 { 00:23:08.137 "subsystem": "keyring", 00:23:08.137 "config": [ 00:23:08.137 { 00:23:08.137 "method": "keyring_file_add_key", 00:23:08.137 "params": { 00:23:08.137 "name": "key0", 00:23:08.137 "path": "/tmp/tmp.vZujivjOiX" 00:23:08.137 } 00:23:08.137 } 00:23:08.137 ] 00:23:08.137 }, 00:23:08.137 { 00:23:08.137 "subsystem": "iobuf", 00:23:08.137 "config": [ 00:23:08.137 { 00:23:08.137 "method": "iobuf_set_options", 00:23:08.137 "params": { 00:23:08.137 "small_pool_count": 8192, 00:23:08.137 "large_pool_count": 1024, 00:23:08.137 "small_bufsize": 8192, 00:23:08.137 "large_bufsize": 135168, 00:23:08.137 "enable_numa": false 00:23:08.137 } 00:23:08.137 } 00:23:08.137 ] 00:23:08.137 }, 00:23:08.137 { 00:23:08.137 "subsystem": "sock", 00:23:08.137 "config": [ 00:23:08.137 { 00:23:08.137 "method": "sock_set_default_impl", 00:23:08.137 "params": { 00:23:08.137 "impl_name": "posix" 
00:23:08.137 } 00:23:08.137 }, 00:23:08.137 { 00:23:08.137 "method": "sock_impl_set_options", 00:23:08.137 "params": { 00:23:08.137 "impl_name": "ssl", 00:23:08.137 "recv_buf_size": 4096, 00:23:08.137 "send_buf_size": 4096, 00:23:08.137 "enable_recv_pipe": true, 00:23:08.137 "enable_quickack": false, 00:23:08.137 "enable_placement_id": 0, 00:23:08.137 "enable_zerocopy_send_server": true, 00:23:08.137 "enable_zerocopy_send_client": false, 00:23:08.137 "zerocopy_threshold": 0, 00:23:08.137 "tls_version": 0, 00:23:08.137 "enable_ktls": false 00:23:08.137 } 00:23:08.137 }, 00:23:08.137 { 00:23:08.137 "method": "sock_impl_set_options", 00:23:08.137 "params": { 00:23:08.137 "impl_name": "posix", 00:23:08.137 "recv_buf_size": 2097152, 00:23:08.137 "send_buf_size": 2097152, 00:23:08.137 "enable_recv_pipe": true, 00:23:08.137 "enable_quickack": false, 00:23:08.137 "enable_placement_id": 0, 00:23:08.137 "enable_zerocopy_send_server": true, 00:23:08.137 "enable_zerocopy_send_client": false, 00:23:08.137 "zerocopy_threshold": 0, 00:23:08.137 "tls_version": 0, 00:23:08.138 "enable_ktls": false 00:23:08.138 } 00:23:08.138 } 00:23:08.138 ] 00:23:08.138 }, 00:23:08.138 { 00:23:08.138 "subsystem": "vmd", 00:23:08.138 "config": [] 00:23:08.138 }, 00:23:08.138 { 00:23:08.138 "subsystem": "accel", 00:23:08.138 "config": [ 00:23:08.138 { 00:23:08.138 "method": "accel_set_options", 00:23:08.138 "params": { 00:23:08.138 "small_cache_size": 128, 00:23:08.138 "large_cache_size": 16, 00:23:08.138 "task_count": 2048, 00:23:08.138 "sequence_count": 2048, 00:23:08.138 "buf_count": 2048 00:23:08.138 } 00:23:08.138 } 00:23:08.138 ] 00:23:08.138 }, 00:23:08.138 { 00:23:08.138 "subsystem": "bdev", 00:23:08.138 "config": [ 00:23:08.138 { 00:23:08.138 "method": "bdev_set_options", 00:23:08.138 "params": { 00:23:08.138 "bdev_io_pool_size": 65535, 00:23:08.138 "bdev_io_cache_size": 256, 00:23:08.138 "bdev_auto_examine": true, 00:23:08.138 "iobuf_small_cache_size": 128, 00:23:08.138 "iobuf_large_cache_size": 16 00:23:08.138 } 00:23:08.138 }, 00:23:08.138 { 00:23:08.138 "method": "bdev_raid_set_options", 00:23:08.138 "params": { 00:23:08.138 "process_window_size_kb": 1024, 00:23:08.138 "process_max_bandwidth_mb_sec": 0 00:23:08.138 } 00:23:08.138 }, 00:23:08.138 { 00:23:08.138 "method": "bdev_iscsi_set_options", 00:23:08.138 "params": { 00:23:08.138 "timeout_sec": 30 00:23:08.138 } 00:23:08.138 }, 00:23:08.138 { 00:23:08.138 "method": "bdev_nvme_set_options", 00:23:08.138 "params": { 00:23:08.138 "action_on_timeout": "none", 00:23:08.138 "timeout_us": 0, 00:23:08.138 "timeout_admin_us": 0, 00:23:08.138 "keep_alive_timeout_ms": 10000, 00:23:08.138 "arbitration_burst": 0, 00:23:08.138 "low_priority_weight": 0, 00:23:08.138 "medium_priority_weight": 0, 00:23:08.138 "high_priority_weight": 0, 00:23:08.138 "nvme_adminq_poll_period_us": 10000, 00:23:08.138 "nvme_ioq_poll_period_us": 0, 00:23:08.138 "io_queue_requests": 0, 00:23:08.138 "delay_cmd_submit": true, 00:23:08.138 "transport_retry_count": 4, 00:23:08.138 "bdev_retry_count": 3, 00:23:08.138 "transport_ack_timeout": 0, 00:23:08.138 "ctrlr_loss_timeout_sec": 0, 00:23:08.138 "reconnect_delay_sec": 0, 00:23:08.138 "fast_io_fail_timeout_sec": 0, 00:23:08.138 "disable_auto_failback": false, 00:23:08.138 "generate_uuids": false, 00:23:08.138 "transport_tos": 0, 00:23:08.138 "nvme_error_stat": false, 00:23:08.138 "rdma_srq_size": 0, 00:23:08.138 "io_path_stat": false, 00:23:08.138 "allow_accel_sequence": false, 00:23:08.138 "rdma_max_cq_size": 0, 00:23:08.138 
"rdma_cm_event_timeout_ms": 0, 00:23:08.138 "dhchap_digests": [ 00:23:08.138 "sha256", 00:23:08.138 "sha384", 00:23:08.138 "sha512" 00:23:08.138 ], 00:23:08.138 "dhchap_dhgroups": [ 00:23:08.138 "null", 00:23:08.138 "ffdhe2048", 00:23:08.138 "ffdhe3072", 00:23:08.138 "ffdhe4096", 00:23:08.138 "ffdhe6144", 00:23:08.138 "ffdhe8192" 00:23:08.138 ] 00:23:08.138 } 00:23:08.138 }, 00:23:08.138 { 00:23:08.138 "method": "bdev_nvme_set_hotplug", 00:23:08.138 "params": { 00:23:08.138 "period_us": 100000, 00:23:08.138 "enable": false 00:23:08.138 } 00:23:08.138 }, 00:23:08.138 { 00:23:08.138 "method": "bdev_malloc_create", 00:23:08.138 "params": { 00:23:08.138 "name": "malloc0", 00:23:08.138 "num_blocks": 8192, 00:23:08.138 "block_size": 4096, 00:23:08.138 "physical_block_size": 4096, 00:23:08.138 "uuid": "3aa42ffe-bfe9-4163-9b08-b46ea6e8b8c4", 00:23:08.138 "optimal_io_boundary": 0, 00:23:08.138 "md_size": 0, 00:23:08.138 "dif_type": 0, 00:23:08.138 "dif_is_head_of_md": false, 00:23:08.138 "dif_pi_format": 0 00:23:08.138 } 00:23:08.138 }, 00:23:08.138 { 00:23:08.138 "method": "bdev_wait_for_examine" 00:23:08.138 } 00:23:08.138 ] 00:23:08.138 }, 00:23:08.138 { 00:23:08.138 "subsystem": "nbd", 00:23:08.138 "config": [] 00:23:08.138 }, 00:23:08.138 { 00:23:08.138 "subsystem": "scheduler", 00:23:08.138 "config": [ 00:23:08.138 { 00:23:08.138 "method": "framework_set_scheduler", 00:23:08.138 "params": { 00:23:08.138 "name": "static" 00:23:08.138 } 00:23:08.138 } 00:23:08.138 ] 00:23:08.138 }, 00:23:08.138 { 00:23:08.138 "subsystem": "nvmf", 00:23:08.138 "config": [ 00:23:08.138 { 00:23:08.138 "method": "nvmf_set_config", 00:23:08.138 "params": { 00:23:08.138 "discovery_filter": "match_any", 00:23:08.138 "admin_cmd_passthru": { 00:23:08.138 "identify_ctrlr": false 00:23:08.138 }, 00:23:08.138 "dhchap_digests": [ 00:23:08.138 "sha256", 00:23:08.138 "sha384", 00:23:08.138 "sha512" 00:23:08.138 ], 00:23:08.138 "dhchap_dhgroups": [ 00:23:08.138 "null", 00:23:08.138 "ffdhe2048", 00:23:08.138 "ffdhe3072", 00:23:08.138 "ffdhe4096", 00:23:08.138 "ffdhe6144", 00:23:08.138 "ffdhe8192" 00:23:08.138 ] 00:23:08.138 } 00:23:08.138 }, 00:23:08.138 { 00:23:08.138 "method": "nvmf_set_max_subsystems", 00:23:08.138 "params": { 00:23:08.138 "max_subsystems": 1024 00:23:08.138 } 00:23:08.138 }, 00:23:08.138 { 00:23:08.138 "method": "nvmf_set_crdt", 00:23:08.138 "params": { 00:23:08.138 "crdt1": 0, 00:23:08.138 "crdt2": 0, 00:23:08.138 "crdt3": 0 00:23:08.138 } 00:23:08.138 }, 00:23:08.138 { 00:23:08.138 "method": "nvmf_create_transport", 00:23:08.138 "params": { 00:23:08.138 "trtype": "TCP", 00:23:08.138 "max_queue_depth": 128, 00:23:08.138 "max_io_qpairs_per_ctrlr": 127, 00:23:08.138 "in_capsule_data_size": 4096, 00:23:08.138 "max_io_size": 131072, 00:23:08.138 "io_unit_size": 131072, 00:23:08.138 "max_aq_depth": 128, 00:23:08.138 "num_shared_buffers": 511, 00:23:08.138 "buf_cache_size": 4294967295, 00:23:08.138 "dif_insert_or_strip": false, 00:23:08.138 "zcopy": false, 00:23:08.138 "c2h_success": false, 00:23:08.138 "sock_priority": 0, 00:23:08.138 "abort_timeout_sec": 1, 00:23:08.138 "ack_timeout": 0, 00:23:08.138 "data_wr_pool_size": 0 00:23:08.138 } 00:23:08.138 }, 00:23:08.138 { 00:23:08.138 "method": "nvmf_create_subsystem", 00:23:08.138 "params": { 00:23:08.138 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:08.138 "allow_any_host": false, 00:23:08.138 "serial_number": "SPDK00000000000001", 00:23:08.138 "model_number": "SPDK bdev Controller", 00:23:08.138 "max_namespaces": 10, 00:23:08.138 "min_cntlid": 1, 00:23:08.138 
"max_cntlid": 65519, 00:23:08.138 "ana_reporting": false 00:23:08.138 } 00:23:08.138 }, 00:23:08.138 { 00:23:08.138 "method": "nvmf_subsystem_add_host", 00:23:08.138 "params": { 00:23:08.138 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:08.138 "host": "nqn.2016-06.io.spdk:host1", 00:23:08.138 "psk": "key0" 00:23:08.138 } 00:23:08.138 }, 00:23:08.138 { 00:23:08.138 "method": "nvmf_subsystem_add_ns", 00:23:08.138 "params": { 00:23:08.138 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:08.138 "namespace": { 00:23:08.138 "nsid": 1, 00:23:08.138 "bdev_name": "malloc0", 00:23:08.138 "nguid": "3AA42FFEBFE941639B08B46EA6E8B8C4", 00:23:08.138 "uuid": "3aa42ffe-bfe9-4163-9b08-b46ea6e8b8c4", 00:23:08.138 "no_auto_visible": false 00:23:08.138 } 00:23:08.138 } 00:23:08.138 }, 00:23:08.138 { 00:23:08.138 "method": "nvmf_subsystem_add_listener", 00:23:08.138 "params": { 00:23:08.138 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:08.138 "listen_address": { 00:23:08.138 "trtype": "TCP", 00:23:08.138 "adrfam": "IPv4", 00:23:08.138 "traddr": "10.0.0.2", 00:23:08.138 "trsvcid": "4420" 00:23:08.138 }, 00:23:08.139 "secure_channel": true 00:23:08.139 } 00:23:08.139 } 00:23:08.139 ] 00:23:08.139 } 00:23:08.139 ] 00:23:08.139 }' 00:23:08.139 09:23:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:08.397 09:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:23:08.397 "subsystems": [ 00:23:08.397 { 00:23:08.397 "subsystem": "keyring", 00:23:08.397 "config": [ 00:23:08.397 { 00:23:08.397 "method": "keyring_file_add_key", 00:23:08.397 "params": { 00:23:08.397 "name": "key0", 00:23:08.397 "path": "/tmp/tmp.vZujivjOiX" 00:23:08.397 } 00:23:08.397 } 00:23:08.397 ] 00:23:08.397 }, 00:23:08.397 { 00:23:08.397 "subsystem": "iobuf", 00:23:08.397 "config": [ 00:23:08.397 { 00:23:08.397 "method": "iobuf_set_options", 00:23:08.397 "params": { 00:23:08.397 "small_pool_count": 8192, 00:23:08.397 "large_pool_count": 1024, 00:23:08.397 "small_bufsize": 8192, 00:23:08.397 "large_bufsize": 135168, 00:23:08.397 "enable_numa": false 00:23:08.397 } 00:23:08.397 } 00:23:08.397 ] 00:23:08.397 }, 00:23:08.397 { 00:23:08.397 "subsystem": "sock", 00:23:08.397 "config": [ 00:23:08.397 { 00:23:08.397 "method": "sock_set_default_impl", 00:23:08.397 "params": { 00:23:08.397 "impl_name": "posix" 00:23:08.397 } 00:23:08.397 }, 00:23:08.397 { 00:23:08.397 "method": "sock_impl_set_options", 00:23:08.397 "params": { 00:23:08.397 "impl_name": "ssl", 00:23:08.397 "recv_buf_size": 4096, 00:23:08.397 "send_buf_size": 4096, 00:23:08.397 "enable_recv_pipe": true, 00:23:08.397 "enable_quickack": false, 00:23:08.397 "enable_placement_id": 0, 00:23:08.397 "enable_zerocopy_send_server": true, 00:23:08.397 "enable_zerocopy_send_client": false, 00:23:08.397 "zerocopy_threshold": 0, 00:23:08.397 "tls_version": 0, 00:23:08.397 "enable_ktls": false 00:23:08.397 } 00:23:08.397 }, 00:23:08.397 { 00:23:08.397 "method": "sock_impl_set_options", 00:23:08.397 "params": { 00:23:08.397 "impl_name": "posix", 00:23:08.397 "recv_buf_size": 2097152, 00:23:08.397 "send_buf_size": 2097152, 00:23:08.397 "enable_recv_pipe": true, 00:23:08.397 "enable_quickack": false, 00:23:08.397 "enable_placement_id": 0, 00:23:08.397 "enable_zerocopy_send_server": true, 00:23:08.397 "enable_zerocopy_send_client": false, 00:23:08.397 "zerocopy_threshold": 0, 00:23:08.397 "tls_version": 0, 00:23:08.397 "enable_ktls": false 00:23:08.397 } 00:23:08.397 
} 00:23:08.397 ] 00:23:08.397 }, 00:23:08.397 { 00:23:08.397 "subsystem": "vmd", 00:23:08.397 "config": [] 00:23:08.397 }, 00:23:08.397 { 00:23:08.397 "subsystem": "accel", 00:23:08.397 "config": [ 00:23:08.397 { 00:23:08.397 "method": "accel_set_options", 00:23:08.397 "params": { 00:23:08.397 "small_cache_size": 128, 00:23:08.397 "large_cache_size": 16, 00:23:08.397 "task_count": 2048, 00:23:08.397 "sequence_count": 2048, 00:23:08.397 "buf_count": 2048 00:23:08.397 } 00:23:08.397 } 00:23:08.397 ] 00:23:08.397 }, 00:23:08.397 { 00:23:08.397 "subsystem": "bdev", 00:23:08.397 "config": [ 00:23:08.397 { 00:23:08.397 "method": "bdev_set_options", 00:23:08.397 "params": { 00:23:08.397 "bdev_io_pool_size": 65535, 00:23:08.397 "bdev_io_cache_size": 256, 00:23:08.397 "bdev_auto_examine": true, 00:23:08.397 "iobuf_small_cache_size": 128, 00:23:08.397 "iobuf_large_cache_size": 16 00:23:08.397 } 00:23:08.397 }, 00:23:08.397 { 00:23:08.397 "method": "bdev_raid_set_options", 00:23:08.397 "params": { 00:23:08.397 "process_window_size_kb": 1024, 00:23:08.397 "process_max_bandwidth_mb_sec": 0 00:23:08.397 } 00:23:08.397 }, 00:23:08.397 { 00:23:08.397 "method": "bdev_iscsi_set_options", 00:23:08.397 "params": { 00:23:08.397 "timeout_sec": 30 00:23:08.397 } 00:23:08.397 }, 00:23:08.397 { 00:23:08.397 "method": "bdev_nvme_set_options", 00:23:08.397 "params": { 00:23:08.397 "action_on_timeout": "none", 00:23:08.397 "timeout_us": 0, 00:23:08.397 "timeout_admin_us": 0, 00:23:08.397 "keep_alive_timeout_ms": 10000, 00:23:08.397 "arbitration_burst": 0, 00:23:08.397 "low_priority_weight": 0, 00:23:08.397 "medium_priority_weight": 0, 00:23:08.397 "high_priority_weight": 0, 00:23:08.397 "nvme_adminq_poll_period_us": 10000, 00:23:08.397 "nvme_ioq_poll_period_us": 0, 00:23:08.397 "io_queue_requests": 512, 00:23:08.397 "delay_cmd_submit": true, 00:23:08.397 "transport_retry_count": 4, 00:23:08.397 "bdev_retry_count": 3, 00:23:08.397 "transport_ack_timeout": 0, 00:23:08.397 "ctrlr_loss_timeout_sec": 0, 00:23:08.397 "reconnect_delay_sec": 0, 00:23:08.397 "fast_io_fail_timeout_sec": 0, 00:23:08.397 "disable_auto_failback": false, 00:23:08.397 "generate_uuids": false, 00:23:08.397 "transport_tos": 0, 00:23:08.397 "nvme_error_stat": false, 00:23:08.397 "rdma_srq_size": 0, 00:23:08.397 "io_path_stat": false, 00:23:08.397 "allow_accel_sequence": false, 00:23:08.397 "rdma_max_cq_size": 0, 00:23:08.397 "rdma_cm_event_timeout_ms": 0, 00:23:08.397 "dhchap_digests": [ 00:23:08.397 "sha256", 00:23:08.397 "sha384", 00:23:08.397 "sha512" 00:23:08.397 ], 00:23:08.397 "dhchap_dhgroups": [ 00:23:08.397 "null", 00:23:08.398 "ffdhe2048", 00:23:08.398 "ffdhe3072", 00:23:08.398 "ffdhe4096", 00:23:08.398 "ffdhe6144", 00:23:08.398 "ffdhe8192" 00:23:08.398 ] 00:23:08.398 } 00:23:08.398 }, 00:23:08.398 { 00:23:08.398 "method": "bdev_nvme_attach_controller", 00:23:08.398 "params": { 00:23:08.398 "name": "TLSTEST", 00:23:08.398 "trtype": "TCP", 00:23:08.398 "adrfam": "IPv4", 00:23:08.398 "traddr": "10.0.0.2", 00:23:08.398 "trsvcid": "4420", 00:23:08.398 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:08.398 "prchk_reftag": false, 00:23:08.398 "prchk_guard": false, 00:23:08.398 "ctrlr_loss_timeout_sec": 0, 00:23:08.398 "reconnect_delay_sec": 0, 00:23:08.398 "fast_io_fail_timeout_sec": 0, 00:23:08.398 "psk": "key0", 00:23:08.398 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:08.398 "hdgst": false, 00:23:08.398 "ddgst": false, 00:23:08.398 "multipath": "multipath" 00:23:08.398 } 00:23:08.398 }, 00:23:08.398 { 00:23:08.398 "method": 
"bdev_nvme_set_hotplug", 00:23:08.398 "params": { 00:23:08.398 "period_us": 100000, 00:23:08.398 "enable": false 00:23:08.398 } 00:23:08.398 }, 00:23:08.398 { 00:23:08.398 "method": "bdev_wait_for_examine" 00:23:08.398 } 00:23:08.398 ] 00:23:08.398 }, 00:23:08.398 { 00:23:08.398 "subsystem": "nbd", 00:23:08.398 "config": [] 00:23:08.398 } 00:23:08.398 ] 00:23:08.398 }' 00:23:08.398 09:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 3002595 00:23:08.398 09:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3002595 ']' 00:23:08.398 09:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3002595 00:23:08.398 09:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:08.398 09:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:08.398 09:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3002595 00:23:08.398 09:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:08.398 09:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:08.398 09:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3002595' 00:23:08.398 killing process with pid 3002595 00:23:08.398 09:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3002595 00:23:08.398 Received shutdown signal, test time was about 10.000000 seconds 00:23:08.398 00:23:08.398 Latency(us) 00:23:08.398 [2024-11-17T08:23:13.411Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:08.398 [2024-11-17T08:23:13.411Z] =================================================================================================================== 00:23:08.398 [2024-11-17T08:23:13.411Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:08.398 09:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3002595 00:23:09.332 09:23:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 3002292 00:23:09.332 09:23:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3002292 ']' 00:23:09.332 09:23:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3002292 00:23:09.332 09:23:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:09.332 09:23:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:09.332 09:23:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3002292 00:23:09.332 09:23:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:09.332 09:23:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:09.332 09:23:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3002292' 00:23:09.332 killing process with pid 3002292 00:23:09.332 09:23:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3002292 00:23:09.332 09:23:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3002292 00:23:10.707 09:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:23:10.707 09:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:10.707 09:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:23:10.707 "subsystems": [ 00:23:10.707 { 00:23:10.707 "subsystem": "keyring", 00:23:10.707 "config": [ 00:23:10.707 { 00:23:10.707 "method": "keyring_file_add_key", 00:23:10.707 "params": { 00:23:10.707 "name": "key0", 00:23:10.707 "path": "/tmp/tmp.vZujivjOiX" 00:23:10.707 } 00:23:10.707 } 00:23:10.707 ] 00:23:10.707 }, 00:23:10.707 { 00:23:10.707 "subsystem": "iobuf", 00:23:10.707 "config": [ 00:23:10.707 { 00:23:10.707 "method": "iobuf_set_options", 00:23:10.707 "params": { 00:23:10.707 "small_pool_count": 8192, 00:23:10.707 "large_pool_count": 1024, 00:23:10.707 "small_bufsize": 8192, 00:23:10.707 "large_bufsize": 135168, 00:23:10.707 "enable_numa": false 00:23:10.707 } 00:23:10.707 } 00:23:10.707 ] 00:23:10.707 }, 00:23:10.707 { 00:23:10.707 "subsystem": "sock", 00:23:10.707 "config": [ 00:23:10.707 { 00:23:10.707 "method": "sock_set_default_impl", 00:23:10.707 "params": { 00:23:10.707 "impl_name": "posix" 00:23:10.707 } 00:23:10.707 }, 00:23:10.707 { 00:23:10.707 "method": "sock_impl_set_options", 00:23:10.707 "params": { 00:23:10.707 "impl_name": "ssl", 00:23:10.707 "recv_buf_size": 4096, 00:23:10.707 "send_buf_size": 4096, 00:23:10.707 "enable_recv_pipe": true, 00:23:10.707 "enable_quickack": false, 00:23:10.707 "enable_placement_id": 0, 00:23:10.707 "enable_zerocopy_send_server": true, 00:23:10.707 "enable_zerocopy_send_client": false, 00:23:10.707 "zerocopy_threshold": 0, 00:23:10.707 "tls_version": 0, 00:23:10.707 "enable_ktls": false 00:23:10.707 } 00:23:10.707 }, 00:23:10.707 { 00:23:10.707 "method": "sock_impl_set_options", 00:23:10.707 "params": { 00:23:10.707 "impl_name": "posix", 00:23:10.707 "recv_buf_size": 2097152, 00:23:10.707 "send_buf_size": 2097152, 00:23:10.707 "enable_recv_pipe": true, 00:23:10.707 "enable_quickack": false, 00:23:10.707 "enable_placement_id": 0, 00:23:10.708 "enable_zerocopy_send_server": true, 00:23:10.708 "enable_zerocopy_send_client": false, 00:23:10.708 "zerocopy_threshold": 0, 00:23:10.708 "tls_version": 0, 00:23:10.708 "enable_ktls": false 00:23:10.708 } 00:23:10.708 } 00:23:10.708 ] 00:23:10.708 }, 00:23:10.708 { 00:23:10.708 "subsystem": "vmd", 00:23:10.708 "config": [] 00:23:10.708 }, 00:23:10.708 { 00:23:10.708 "subsystem": "accel", 00:23:10.708 "config": [ 00:23:10.708 { 00:23:10.708 "method": "accel_set_options", 00:23:10.708 "params": { 00:23:10.708 "small_cache_size": 128, 00:23:10.708 "large_cache_size": 16, 00:23:10.708 "task_count": 2048, 00:23:10.708 "sequence_count": 2048, 00:23:10.708 "buf_count": 2048 00:23:10.708 } 00:23:10.708 } 00:23:10.708 ] 00:23:10.708 }, 00:23:10.708 { 00:23:10.708 "subsystem": "bdev", 00:23:10.708 "config": [ 00:23:10.708 { 00:23:10.708 "method": "bdev_set_options", 00:23:10.708 "params": { 00:23:10.708 "bdev_io_pool_size": 65535, 00:23:10.708 "bdev_io_cache_size": 256, 00:23:10.708 "bdev_auto_examine": true, 00:23:10.708 "iobuf_small_cache_size": 128, 00:23:10.708 "iobuf_large_cache_size": 16 00:23:10.708 } 00:23:10.708 }, 00:23:10.708 { 00:23:10.708 "method": "bdev_raid_set_options", 00:23:10.708 "params": { 00:23:10.708 "process_window_size_kb": 1024, 00:23:10.708 "process_max_bandwidth_mb_sec": 0 00:23:10.708 } 00:23:10.708 }, 00:23:10.708 { 00:23:10.708 "method": "bdev_iscsi_set_options", 00:23:10.708 "params": { 00:23:10.708 
"timeout_sec": 30 00:23:10.708 } 00:23:10.708 }, 00:23:10.708 { 00:23:10.708 "method": "bdev_nvme_set_options", 00:23:10.708 "params": { 00:23:10.708 "action_on_timeout": "none", 00:23:10.708 "timeout_us": 0, 00:23:10.708 "timeout_admin_us": 0, 00:23:10.708 "keep_alive_timeout_ms": 10000, 00:23:10.708 "arbitration_burst": 0, 00:23:10.708 "low_priority_weight": 0, 00:23:10.708 "medium_priority_weight": 0, 00:23:10.708 "high_priority_weight": 0, 00:23:10.708 "nvme_adminq_poll_period_us": 10000, 00:23:10.708 "nvme_ioq_poll_period_us": 0, 00:23:10.708 "io_queue_requests": 0, 00:23:10.708 "delay_cmd_submit": true, 00:23:10.708 "transport_retry_count": 4, 00:23:10.708 "bdev_retry_count": 3, 00:23:10.708 "transport_ack_timeout": 0, 00:23:10.708 "ctrlr_loss_timeout_sec": 0, 00:23:10.708 "reconnect_delay_sec": 0, 00:23:10.708 "fast_io_fail_timeout_sec": 0, 00:23:10.708 "disable_auto_failback": false, 00:23:10.708 "generate_uuids": false, 00:23:10.708 "transport_tos": 0, 00:23:10.708 "nvme_error_stat": false, 00:23:10.708 "rdma_srq_size": 0, 00:23:10.708 "io_path_stat": false, 00:23:10.708 "allow_accel_sequence": false, 00:23:10.708 "rdma_max_cq_size": 0, 00:23:10.708 "rdma_cm_event_timeout_ms": 0, 00:23:10.708 "dhchap_digests": [ 00:23:10.708 "sha256", 00:23:10.708 "sha384", 00:23:10.708 "sha512" 00:23:10.708 ], 00:23:10.708 "dhchap_dhgroups": [ 00:23:10.708 "null", 00:23:10.708 "ffdhe2048", 00:23:10.708 "ffdhe3072", 00:23:10.708 "ffdhe4096", 00:23:10.708 "ffdhe6144", 00:23:10.708 "ffdhe8192" 00:23:10.708 ] 00:23:10.708 } 00:23:10.708 }, 00:23:10.708 { 00:23:10.708 "method": "bdev_nvme_set_hotplug", 00:23:10.708 "params": { 00:23:10.708 "period_us": 100000, 00:23:10.708 "enable": false 00:23:10.708 } 00:23:10.708 }, 00:23:10.708 { 00:23:10.708 "method": "bdev_malloc_create", 00:23:10.708 "params": { 00:23:10.708 "name": "malloc0", 00:23:10.708 "num_blocks": 8192, 00:23:10.708 "block_size": 4096, 00:23:10.708 "physical_block_size": 4096, 00:23:10.708 "uuid": "3aa42ffe-bfe9-4163-9b08-b46ea6e8b8c4", 00:23:10.708 "optimal_io_boundary": 0, 00:23:10.708 "md_size": 0, 00:23:10.708 "dif_type": 0, 00:23:10.708 "dif_is_head_of_md": false, 00:23:10.708 "dif_pi_format": 0 00:23:10.708 } 00:23:10.708 }, 00:23:10.708 { 00:23:10.708 "method": "bdev_wait_for_examine" 00:23:10.708 } 00:23:10.708 ] 00:23:10.708 }, 00:23:10.708 { 00:23:10.708 "subsystem": "nbd", 00:23:10.708 "config": [] 00:23:10.708 }, 00:23:10.708 { 00:23:10.708 "subsystem": "scheduler", 00:23:10.708 "config": [ 00:23:10.708 { 00:23:10.708 "method": "framework_set_scheduler", 00:23:10.708 "params": { 00:23:10.708 "name": "static" 00:23:10.708 } 00:23:10.708 } 00:23:10.708 ] 00:23:10.708 }, 00:23:10.708 { 00:23:10.708 "subsystem": "nvmf", 00:23:10.708 "config": [ 00:23:10.708 { 00:23:10.708 "method": "nvmf_set_config", 00:23:10.708 "params": { 00:23:10.708 "discovery_filter": "match_any", 00:23:10.708 "admin_cmd_passthru": { 00:23:10.708 "identify_ctrlr": false 00:23:10.708 }, 00:23:10.708 "dhchap_digests": [ 00:23:10.708 "sha256", 00:23:10.708 "sha384", 00:23:10.708 "sha512" 00:23:10.708 ], 00:23:10.708 "dhchap_dhgroups": [ 00:23:10.708 "null", 00:23:10.708 "ffdhe2048", 00:23:10.708 "ffdhe3072", 00:23:10.708 "ffdhe4096", 00:23:10.708 "ffdhe6144", 00:23:10.708 "ffdhe8192" 00:23:10.708 ] 00:23:10.708 } 00:23:10.708 }, 00:23:10.708 { 00:23:10.708 "method": "nvmf_set_max_subsystems", 00:23:10.708 "params": { 00:23:10.708 "max_subsystems": 1024 00:23:10.708 } 00:23:10.708 }, 00:23:10.708 { 00:23:10.708 "method": "nvmf_set_crdt", 00:23:10.708 "params": { 
00:23:10.708 "crdt1": 0, 00:23:10.708 "crdt2": 0, 00:23:10.708 "crdt3": 0 00:23:10.708 } 00:23:10.708 }, 00:23:10.708 { 00:23:10.708 "method": "nvmf_create_transport", 00:23:10.708 "params": { 00:23:10.708 "trtype": "TCP", 00:23:10.708 "max_queue_depth": 128, 00:23:10.708 "max_io_qpairs_per_ctrlr": 127, 00:23:10.708 "in_capsule_data_size": 4096, 00:23:10.708 "max_io_size": 131072, 00:23:10.708 "io_unit_size": 131072, 00:23:10.708 "max_aq_depth": 128, 00:23:10.708 "num_shared_buffers": 511, 00:23:10.708 "buf_cache_size": 4294967295, 00:23:10.708 "dif_insert_or_strip": false, 00:23:10.708 "zcopy": false, 00:23:10.708 "c2h_success": false, 00:23:10.708 "sock_priority": 0, 00:23:10.708 "abort_timeout_sec": 1, 00:23:10.708 "ack_timeout": 0, 00:23:10.708 "data_wr_pool_size": 0 00:23:10.708 } 00:23:10.708 }, 00:23:10.708 { 00:23:10.708 "method": "nvmf_create_subsystem", 00:23:10.708 "params": { 00:23:10.708 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:10.708 "allow_any_host": false, 00:23:10.708 "serial_number": "SPDK00000000000001", 00:23:10.708 "model_number": "SPDK bdev Controller", 00:23:10.708 "max_namespaces": 10, 00:23:10.708 "min_cntlid": 1, 00:23:10.708 "max_cntlid": 65519, 00:23:10.708 "ana_reporting": false 00:23:10.708 } 00:23:10.708 }, 00:23:10.708 { 00:23:10.708 "method": "nvmf_subsystem_add_host", 00:23:10.708 "params": { 00:23:10.708 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:10.708 "host": "nqn.2016-06.io.spdk:host1", 00:23:10.708 "psk": "key0" 00:23:10.708 } 00:23:10.709 }, 00:23:10.709 { 00:23:10.709 "method": "nvmf_subsystem_add_ns", 00:23:10.709 "params": { 00:23:10.709 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:10.709 "namespace": { 00:23:10.709 "nsid": 1, 00:23:10.709 "bdev_name": "malloc0", 00:23:10.709 "nguid": "3AA42FFEBFE941639B08B46EA6E8B8C4", 00:23:10.709 "uuid": "3aa42ffe-bfe9-4163-9b08-b46ea6e8b8c4", 00:23:10.709 "no_auto_visible": false 00:23:10.709 } 00:23:10.709 } 00:23:10.709 }, 00:23:10.709 { 00:23:10.709 "method": "nvmf_subsystem_add_listener", 00:23:10.709 "params": { 00:23:10.709 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:10.709 "listen_address": { 00:23:10.709 "trtype": "TCP", 00:23:10.709 "adrfam": "IPv4", 00:23:10.709 "traddr": "10.0.0.2", 00:23:10.709 "trsvcid": "4420" 00:23:10.709 }, 00:23:10.709 "secure_channel": true 00:23:10.709 } 00:23:10.709 } 00:23:10.709 ] 00:23:10.709 } 00:23:10.709 ] 00:23:10.709 }' 00:23:10.709 09:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:10.709 09:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:10.709 09:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3003137 00:23:10.709 09:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:23:10.709 09:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3003137 00:23:10.709 09:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3003137 ']' 00:23:10.709 09:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:10.709 09:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:10.709 09:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:23:10.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:10.709 09:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:10.709 09:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:10.709 [2024-11-17 09:23:15.514901] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:23:10.709 [2024-11-17 09:23:15.515064] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:10.709 [2024-11-17 09:23:15.662274] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:10.967 [2024-11-17 09:23:15.783438] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:10.967 [2024-11-17 09:23:15.783523] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:10.967 [2024-11-17 09:23:15.783544] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:10.967 [2024-11-17 09:23:15.783565] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:10.967 [2024-11-17 09:23:15.783582] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:10.967 [2024-11-17 09:23:15.785108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:11.535 [2024-11-17 09:23:16.282992] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:11.535 [2024-11-17 09:23:16.315037] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:11.535 [2024-11-17 09:23:16.315380] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:11.535 09:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:11.535 09:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:11.535 09:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:11.535 09:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:11.535 09:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:11.535 09:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:11.535 09:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=3003292 00:23:11.535 09:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 3003292 /var/tmp/bdevperf.sock 00:23:11.535 09:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3003292 ']' 00:23:11.535 09:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:11.535 09:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:23:11.535 09:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:11.535 09:23:16 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:11.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:11.535 09:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:23:11.535 "subsystems": [ 00:23:11.535 { 00:23:11.535 "subsystem": "keyring", 00:23:11.535 "config": [ 00:23:11.535 { 00:23:11.535 "method": "keyring_file_add_key", 00:23:11.535 "params": { 00:23:11.535 "name": "key0", 00:23:11.535 "path": "/tmp/tmp.vZujivjOiX" 00:23:11.535 } 00:23:11.535 } 00:23:11.535 ] 00:23:11.535 }, 00:23:11.535 { 00:23:11.535 "subsystem": "iobuf", 00:23:11.535 "config": [ 00:23:11.535 { 00:23:11.535 "method": "iobuf_set_options", 00:23:11.535 "params": { 00:23:11.535 "small_pool_count": 8192, 00:23:11.535 "large_pool_count": 1024, 00:23:11.535 "small_bufsize": 8192, 00:23:11.535 "large_bufsize": 135168, 00:23:11.535 "enable_numa": false 00:23:11.535 } 00:23:11.535 } 00:23:11.535 ] 00:23:11.535 }, 00:23:11.535 { 00:23:11.535 "subsystem": "sock", 00:23:11.535 "config": [ 00:23:11.535 { 00:23:11.535 "method": "sock_set_default_impl", 00:23:11.535 "params": { 00:23:11.535 "impl_name": "posix" 00:23:11.535 } 00:23:11.535 }, 00:23:11.535 { 00:23:11.535 "method": "sock_impl_set_options", 00:23:11.535 "params": { 00:23:11.535 "impl_name": "ssl", 00:23:11.535 "recv_buf_size": 4096, 00:23:11.535 "send_buf_size": 4096, 00:23:11.535 "enable_recv_pipe": true, 00:23:11.535 "enable_quickack": false, 00:23:11.535 "enable_placement_id": 0, 00:23:11.535 "enable_zerocopy_send_server": true, 00:23:11.535 "enable_zerocopy_send_client": false, 00:23:11.535 "zerocopy_threshold": 0, 00:23:11.535 "tls_version": 0, 00:23:11.535 "enable_ktls": false 00:23:11.535 } 00:23:11.535 }, 00:23:11.535 { 00:23:11.535 "method": "sock_impl_set_options", 00:23:11.535 "params": { 00:23:11.535 "impl_name": "posix", 00:23:11.535 "recv_buf_size": 2097152, 00:23:11.535 "send_buf_size": 2097152, 00:23:11.535 "enable_recv_pipe": true, 00:23:11.535 "enable_quickack": false, 00:23:11.535 "enable_placement_id": 0, 00:23:11.535 "enable_zerocopy_send_server": true, 00:23:11.535 "enable_zerocopy_send_client": false, 00:23:11.535 "zerocopy_threshold": 0, 00:23:11.535 "tls_version": 0, 00:23:11.535 "enable_ktls": false 00:23:11.535 } 00:23:11.535 } 00:23:11.535 ] 00:23:11.535 }, 00:23:11.535 { 00:23:11.535 "subsystem": "vmd", 00:23:11.535 "config": [] 00:23:11.535 }, 00:23:11.535 { 00:23:11.535 "subsystem": "accel", 00:23:11.535 "config": [ 00:23:11.535 { 00:23:11.535 "method": "accel_set_options", 00:23:11.535 "params": { 00:23:11.535 "small_cache_size": 128, 00:23:11.535 "large_cache_size": 16, 00:23:11.535 "task_count": 2048, 00:23:11.535 "sequence_count": 2048, 00:23:11.535 "buf_count": 2048 00:23:11.535 } 00:23:11.535 } 00:23:11.535 ] 00:23:11.535 }, 00:23:11.535 { 00:23:11.535 "subsystem": "bdev", 00:23:11.535 "config": [ 00:23:11.535 { 00:23:11.535 "method": "bdev_set_options", 00:23:11.535 "params": { 00:23:11.535 "bdev_io_pool_size": 65535, 00:23:11.535 "bdev_io_cache_size": 256, 00:23:11.535 "bdev_auto_examine": true, 00:23:11.535 "iobuf_small_cache_size": 128, 00:23:11.535 "iobuf_large_cache_size": 16 00:23:11.535 } 00:23:11.535 }, 00:23:11.535 { 00:23:11.535 "method": "bdev_raid_set_options", 00:23:11.535 "params": { 00:23:11.535 "process_window_size_kb": 1024, 00:23:11.535 "process_max_bandwidth_mb_sec": 0 00:23:11.535 } 00:23:11.535 }, 
00:23:11.535 { 00:23:11.535 "method": "bdev_iscsi_set_options", 00:23:11.535 "params": { 00:23:11.535 "timeout_sec": 30 00:23:11.535 } 00:23:11.535 }, 00:23:11.535 { 00:23:11.535 "method": "bdev_nvme_set_options", 00:23:11.535 "params": { 00:23:11.535 "action_on_timeout": "none", 00:23:11.535 "timeout_us": 0, 00:23:11.535 "timeout_admin_us": 0, 00:23:11.535 "keep_alive_timeout_ms": 10000, 00:23:11.535 "arbitration_burst": 0, 00:23:11.535 "low_priority_weight": 0, 00:23:11.535 "medium_priority_weight": 0, 00:23:11.535 "high_priority_weight": 0, 00:23:11.535 "nvme_adminq_poll_period_us": 10000, 00:23:11.535 "nvme_ioq_poll_period_us": 0, 00:23:11.535 "io_queue_requests": 512, 00:23:11.535 "delay_cmd_submit": true, 00:23:11.535 "transport_retry_count": 4, 00:23:11.535 "bdev_retry_count": 3, 00:23:11.535 "transport_ack_timeout": 0, 00:23:11.535 "ctrlr_loss_timeout_sec": 0, 00:23:11.535 "reconnect_delay_sec": 0, 00:23:11.536 "fast_io_fail_timeout_sec": 0, 00:23:11.536 "disable_auto_failback": false, 00:23:11.536 "generate_uuids": false, 00:23:11.536 "transport_tos": 0, 00:23:11.536 "nvme_error_stat": false, 00:23:11.536 "rdma_srq_size": 0, 00:23:11.536 "io_path_stat": false, 00:23:11.536 "allow_accel_sequence": false, 00:23:11.536 "rdma_max_cq_size": 0, 00:23:11.536 "rdma_cm_event_timeout_ms": 0, 00:23:11.536 "dhchap_digests": [ 00:23:11.536 "sha256", 00:23:11.536 "sha384", 00:23:11.536 "sha512" 00:23:11.536 ], 00:23:11.536 "dhchap_dhgroups": [ 00:23:11.536 "null", 00:23:11.536 "ffdhe2048", 00:23:11.536 "ffdhe3072", 00:23:11.536 "ffdhe4096", 00:23:11.536 "ffdhe6144", 00:23:11.536 "ffdhe8192" 00:23:11.536 ] 00:23:11.536 } 00:23:11.536 }, 00:23:11.536 { 00:23:11.536 "method": "bdev_nvme_attach_controller", 00:23:11.536 "params": { 00:23:11.536 "name": "TLSTEST", 00:23:11.536 "trtype": "TCP", 00:23:11.536 "adrfam": "IPv4", 00:23:11.536 "traddr": "10.0.0.2", 00:23:11.536 "trsvcid": "4420", 00:23:11.536 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:11.536 "prchk_reftag": false, 00:23:11.536 "prchk_guard": false, 00:23:11.536 "ctrlr_loss_timeout_sec": 0, 00:23:11.536 "reconnect_delay_sec": 0, 00:23:11.536 "fast_io_fail_timeout_sec": 0, 00:23:11.536 "psk": "key0", 00:23:11.536 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:11.536 "hdgst": false, 00:23:11.536 "ddgst": false, 00:23:11.536 "multipath": "multipath" 00:23:11.536 } 00:23:11.536 }, 00:23:11.536 { 00:23:11.536 "method": "bdev_nvme_set_hotplug", 00:23:11.536 "params": { 00:23:11.536 "period_us": 100000, 00:23:11.536 "enable": false 00:23:11.536 } 00:23:11.536 }, 00:23:11.536 { 00:23:11.536 "method": "bdev_wait_for_examine" 00:23:11.536 } 00:23:11.536 ] 00:23:11.536 }, 00:23:11.536 { 00:23:11.536 "subsystem": "nbd", 00:23:11.536 "config": [] 00:23:11.536 } 00:23:11.536 ] 00:23:11.536 }' 00:23:11.536 09:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:11.536 09:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:11.794 [2024-11-17 09:23:16.586611] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
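On the initiator side, this second bdevperf instance (pid 3003292) gets its keyring and TLS controller pre-loaded through the JSON config piped in on /dev/fd/63 (echoed above); the earlier run did the same thing over bdevperf's RPC socket. A sketch of that RPC-socket variant, assembled from commands already shown in this log (paths copied verbatim):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # Start bdevperf idle (-z) and configure it over /var/tmp/bdevperf.sock.
    $SPDK/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
    $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.vZujivjOiX
    $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
    # Kick off the I/O phase, as done at target/tls.sh@213 below.
    $SPDK/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests
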
00:23:11.794 [2024-11-17 09:23:16.586756] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3003292 ] 00:23:11.794 [2024-11-17 09:23:16.721589] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:12.052 [2024-11-17 09:23:16.844253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:12.310 [2024-11-17 09:23:17.249054] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:12.568 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:12.568 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:12.568 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:12.826 Running I/O for 10 seconds... 00:23:14.696 2539.00 IOPS, 9.92 MiB/s [2024-11-17T08:23:21.083Z] 2556.00 IOPS, 9.98 MiB/s [2024-11-17T08:23:22.017Z] 2564.00 IOPS, 10.02 MiB/s [2024-11-17T08:23:22.953Z] 2567.50 IOPS, 10.03 MiB/s [2024-11-17T08:23:23.888Z] 2580.20 IOPS, 10.08 MiB/s [2024-11-17T08:23:24.822Z] 2582.17 IOPS, 10.09 MiB/s [2024-11-17T08:23:25.757Z] 2585.43 IOPS, 10.10 MiB/s [2024-11-17T08:23:27.132Z] 2588.62 IOPS, 10.11 MiB/s [2024-11-17T08:23:27.709Z] 2593.89 IOPS, 10.13 MiB/s [2024-11-17T08:23:27.975Z] 2595.40 IOPS, 10.14 MiB/s 00:23:22.962 Latency(us) 00:23:22.962 [2024-11-17T08:23:27.975Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:22.962 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:22.962 Verification LBA range: start 0x0 length 0x2000 00:23:22.962 TLSTESTn1 : 10.03 2600.63 10.16 0.00 0.00 49122.44 9514.86 53982.25 00:23:22.962 [2024-11-17T08:23:27.975Z] =================================================================================================================== 00:23:22.962 [2024-11-17T08:23:27.975Z] Total : 2600.63 10.16 0.00 0.00 49122.44 9514.86 53982.25 00:23:22.962 { 00:23:22.962 "results": [ 00:23:22.962 { 00:23:22.962 "job": "TLSTESTn1", 00:23:22.962 "core_mask": "0x4", 00:23:22.962 "workload": "verify", 00:23:22.962 "status": "finished", 00:23:22.962 "verify_range": { 00:23:22.962 "start": 0, 00:23:22.962 "length": 8192 00:23:22.962 }, 00:23:22.962 "queue_depth": 128, 00:23:22.962 "io_size": 4096, 00:23:22.962 "runtime": 10.027957, 00:23:22.962 "iops": 2600.6294203295847, 00:23:22.962 "mibps": 10.15870867316244, 00:23:22.962 "io_failed": 0, 00:23:22.962 "io_timeout": 0, 00:23:22.962 "avg_latency_us": 49122.4427039778, 00:23:22.962 "min_latency_us": 9514.856296296297, 00:23:22.962 "max_latency_us": 53982.24592592593 00:23:22.962 } 00:23:22.962 ], 00:23:22.962 "core_count": 1 00:23:22.962 } 00:23:22.962 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:22.962 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 3003292 00:23:22.962 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3003292 ']' 00:23:22.962 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3003292 00:23:22.962 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # uname 00:23:22.962 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:22.962 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3003292 00:23:22.962 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:22.962 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:22.962 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3003292' 00:23:22.962 killing process with pid 3003292 00:23:22.962 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3003292 00:23:22.962 Received shutdown signal, test time was about 10.000000 seconds 00:23:22.962 00:23:22.962 Latency(us) 00:23:22.962 [2024-11-17T08:23:27.975Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:22.962 [2024-11-17T08:23:27.975Z] =================================================================================================================== 00:23:22.962 [2024-11-17T08:23:27.975Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:22.962 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3003292 00:23:23.895 09:23:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 3003137 00:23:23.895 09:23:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3003137 ']' 00:23:23.895 09:23:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3003137 00:23:23.895 09:23:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:23.895 09:23:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:23.895 09:23:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3003137 00:23:23.895 09:23:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:23.895 09:23:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:23.895 09:23:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3003137' 00:23:23.895 killing process with pid 3003137 00:23:23.895 09:23:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3003137 00:23:23.895 09:23:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3003137 00:23:25.269 09:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:23:25.269 09:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:25.269 09:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:25.269 09:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:25.269 09:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3004872 00:23:25.269 09:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:25.269 09:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3004872 
00:23:25.269 09:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3004872 ']' 00:23:25.269 09:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:25.269 09:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:25.269 09:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:25.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:25.269 09:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:25.269 09:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:25.269 [2024-11-17 09:23:29.941323] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:23:25.269 [2024-11-17 09:23:29.941491] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:25.269 [2024-11-17 09:23:30.114229] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:25.269 [2024-11-17 09:23:30.252084] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:25.269 [2024-11-17 09:23:30.252189] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:25.269 [2024-11-17 09:23:30.252217] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:25.269 [2024-11-17 09:23:30.252252] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:25.269 [2024-11-17 09:23:30.252272] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
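For reference, the target-side setup that the setup_nvmf_tgt helper performs in the trace that follows reduces to the RPC sequence below. This is a minimal sketch, not the script itself: the PSK file /tmp/tmp.vZujivjOiX, the 10.0.0.2:4420 listener and the cnode1/host1 NQNs are taken from this log, and rpc.py is assumed to be reachable on PATH from an SPDK checkout.

# Target side: TCP transport, a subsystem backed by a malloc bdev, and a TLS (PSK) listener.
rpc.py nvmf_create_transport -t tcp -o                                   # TCP transport, default options
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS-enabled listener
rpc.py bdev_malloc_create 32 4096 -b malloc0                             # RAM-backed bdev, size/block-size as in the trace
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1     # expose malloc0 as namespace 1
rpc.py keyring_file_add_key key0 /tmp/tmp.vZujivjOiX                     # load the PSK into the target keyring
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0   # bind host1 to that PSK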
00:23:25.269 [2024-11-17 09:23:30.253980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:26.202 09:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:26.202 09:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:26.202 09:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:26.202 09:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:26.202 09:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:26.202 09:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:26.202 09:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.vZujivjOiX 00:23:26.202 09:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.vZujivjOiX 00:23:26.202 09:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:26.460 [2024-11-17 09:23:31.230309] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:26.460 09:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:26.718 09:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:26.976 [2024-11-17 09:23:31.763856] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:26.976 [2024-11-17 09:23:31.764203] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:26.977 09:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:27.234 malloc0 00:23:27.234 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:27.492 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.vZujivjOiX 00:23:27.750 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:28.008 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=3005292 00:23:28.008 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:28.008 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:28.008 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 3005292 /var/tmp/bdevperf.sock 00:23:28.008 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 3005292 ']' 00:23:28.008 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:28.008 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:28.008 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:28.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:28.008 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:28.008 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:28.008 [2024-11-17 09:23:33.011089] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:23:28.008 [2024-11-17 09:23:33.011243] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3005292 ] 00:23:28.267 [2024-11-17 09:23:33.145146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:28.267 [2024-11-17 09:23:33.273385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:29.201 09:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:29.201 09:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:29.201 09:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.vZujivjOiX 00:23:29.459 09:23:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:29.717 [2024-11-17 09:23:34.496724] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:29.717 nvme0n1 00:23:29.717 09:23:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:29.717 Running I/O for 1 seconds... 
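On the initiator side, the trace above loads the same PSK into bdevperf's own keyring over its RPC socket and then attaches the NVMe-oF/TCP controller with that key before the one-second verify run whose results follow. Collected into a minimal sketch (socket path, address and NQNs as in this log; rpc.py and bdevperf.py assumed to be on PATH from an SPDK checkout):

# Initiator (bdevperf) side: same PSK, then a TLS-protected attach and the timed workload.
rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.vZujivjOiX
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
bdevperf.py -s /var/tmp/bdevperf.sock perform_tests            # drives the job bdevperf was started with (-q 128 -o 4k -w verify -t 1)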
00:23:31.092 1924.00 IOPS, 7.52 MiB/s 00:23:31.092 Latency(us) 00:23:31.092 [2024-11-17T08:23:36.105Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:31.092 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:31.092 Verification LBA range: start 0x0 length 0x2000 00:23:31.092 nvme0n1 : 1.05 1951.51 7.62 0.00 0.00 64187.08 12136.30 48351.00 00:23:31.092 [2024-11-17T08:23:36.105Z] =================================================================================================================== 00:23:31.092 [2024-11-17T08:23:36.105Z] Total : 1951.51 7.62 0.00 0.00 64187.08 12136.30 48351.00 00:23:31.092 { 00:23:31.092 "results": [ 00:23:31.092 { 00:23:31.092 "job": "nvme0n1", 00:23:31.092 "core_mask": "0x2", 00:23:31.092 "workload": "verify", 00:23:31.092 "status": "finished", 00:23:31.092 "verify_range": { 00:23:31.092 "start": 0, 00:23:31.092 "length": 8192 00:23:31.092 }, 00:23:31.092 "queue_depth": 128, 00:23:31.092 "io_size": 4096, 00:23:31.092 "runtime": 1.052005, 00:23:31.092 "iops": 1951.5116373021042, 00:23:31.092 "mibps": 7.6230923332113445, 00:23:31.092 "io_failed": 0, 00:23:31.092 "io_timeout": 0, 00:23:31.092 "avg_latency_us": 64187.079928559826, 00:23:31.092 "min_latency_us": 12136.296296296296, 00:23:31.092 "max_latency_us": 48351.00444444444 00:23:31.092 } 00:23:31.092 ], 00:23:31.092 "core_count": 1 00:23:31.092 } 00:23:31.092 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 3005292 00:23:31.092 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3005292 ']' 00:23:31.092 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3005292 00:23:31.092 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:31.092 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:31.092 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3005292 00:23:31.092 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:31.092 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:31.092 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3005292' 00:23:31.092 killing process with pid 3005292 00:23:31.092 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3005292 00:23:31.092 Received shutdown signal, test time was about 1.000000 seconds 00:23:31.092 00:23:31.092 Latency(us) 00:23:31.092 [2024-11-17T08:23:36.105Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:31.092 [2024-11-17T08:23:36.105Z] =================================================================================================================== 00:23:31.092 [2024-11-17T08:23:36.105Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:31.092 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3005292 00:23:32.028 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 3004872 00:23:32.028 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3004872 ']' 00:23:32.028 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3004872 00:23:32.028 09:23:36 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:32.028 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:32.028 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3004872 00:23:32.028 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:32.028 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:32.028 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3004872' 00:23:32.028 killing process with pid 3004872 00:23:32.028 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3004872 00:23:32.028 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3004872 00:23:32.964 09:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:23:32.964 09:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:32.964 09:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:32.964 09:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:32.964 09:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3005838 00:23:32.964 09:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:32.964 09:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3005838 00:23:32.964 09:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3005838 ']' 00:23:32.964 09:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:32.964 09:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:32.964 09:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:32.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:32.964 09:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:32.964 09:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:33.222 [2024-11-17 09:23:38.050539] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:23:33.222 [2024-11-17 09:23:38.050695] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:33.222 [2024-11-17 09:23:38.204462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:33.484 [2024-11-17 09:23:38.341344] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:33.484 [2024-11-17 09:23:38.341445] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:33.484 [2024-11-17 09:23:38.341472] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:33.484 [2024-11-17 09:23:38.341497] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:33.484 [2024-11-17 09:23:38.341517] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:33.484 [2024-11-17 09:23:38.343119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:34.118 09:23:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:34.118 09:23:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:34.118 09:23:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:34.118 09:23:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:34.118 09:23:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:34.118 09:23:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:34.118 09:23:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:23:34.118 09:23:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.118 09:23:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:34.118 [2024-11-17 09:23:39.021647] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:34.118 malloc0 00:23:34.118 [2024-11-17 09:23:39.082874] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:34.118 [2024-11-17 09:23:39.083278] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:34.118 09:23:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.118 09:23:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=3005993 00:23:34.118 09:23:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:34.118 09:23:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 3005993 /var/tmp/bdevperf.sock 00:23:34.118 09:23:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3005993 ']' 00:23:34.118 09:23:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:34.118 09:23:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:34.118 09:23:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:34.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:34.118 09:23:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:34.118 09:23:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:34.375 [2024-11-17 09:23:39.195130] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:23:34.375 [2024-11-17 09:23:39.195274] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3005993 ] 00:23:34.375 [2024-11-17 09:23:39.331247] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:34.633 [2024-11-17 09:23:39.463205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:35.212 09:23:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:35.212 09:23:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:35.212 09:23:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.vZujivjOiX 00:23:35.476 09:23:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:35.734 [2024-11-17 09:23:40.698507] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:35.992 nvme0n1 00:23:35.992 09:23:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:35.992 Running I/O for 1 seconds... 00:23:37.185 2442.00 IOPS, 9.54 MiB/s 00:23:37.185 Latency(us) 00:23:37.185 [2024-11-17T08:23:42.198Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:37.185 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:37.185 Verification LBA range: start 0x0 length 0x2000 00:23:37.185 nvme0n1 : 1.04 2470.25 9.65 0.00 0.00 50976.79 8932.31 53593.88 00:23:37.185 [2024-11-17T08:23:42.198Z] =================================================================================================================== 00:23:37.185 [2024-11-17T08:23:42.198Z] Total : 2470.25 9.65 0.00 0.00 50976.79 8932.31 53593.88 00:23:37.185 { 00:23:37.185 "results": [ 00:23:37.185 { 00:23:37.185 "job": "nvme0n1", 00:23:37.185 "core_mask": "0x2", 00:23:37.185 "workload": "verify", 00:23:37.185 "status": "finished", 00:23:37.185 "verify_range": { 00:23:37.185 "start": 0, 00:23:37.185 "length": 8192 00:23:37.185 }, 00:23:37.185 "queue_depth": 128, 00:23:37.185 "io_size": 4096, 00:23:37.185 "runtime": 1.040382, 00:23:37.185 "iops": 2470.2465056104393, 00:23:37.185 "mibps": 9.649400412540778, 00:23:37.185 "io_failed": 0, 00:23:37.185 "io_timeout": 0, 00:23:37.185 "avg_latency_us": 50976.78895518087, 00:23:37.185 "min_latency_us": 8932.314074074075, 00:23:37.185 "max_latency_us": 53593.88444444445 00:23:37.185 } 00:23:37.185 ], 00:23:37.185 "core_count": 1 00:23:37.185 } 00:23:37.185 09:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:23:37.185 09:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.185 09:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:37.185 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.185 09:23:42 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:23:37.185 "subsystems": [ 00:23:37.185 { 00:23:37.185 "subsystem": "keyring", 00:23:37.185 "config": [ 00:23:37.185 { 00:23:37.185 "method": "keyring_file_add_key", 00:23:37.185 "params": { 00:23:37.185 "name": "key0", 00:23:37.185 "path": "/tmp/tmp.vZujivjOiX" 00:23:37.185 } 00:23:37.185 } 00:23:37.185 ] 00:23:37.185 }, 00:23:37.185 { 00:23:37.185 "subsystem": "iobuf", 00:23:37.185 "config": [ 00:23:37.185 { 00:23:37.185 "method": "iobuf_set_options", 00:23:37.185 "params": { 00:23:37.185 "small_pool_count": 8192, 00:23:37.185 "large_pool_count": 1024, 00:23:37.185 "small_bufsize": 8192, 00:23:37.185 "large_bufsize": 135168, 00:23:37.185 "enable_numa": false 00:23:37.185 } 00:23:37.185 } 00:23:37.185 ] 00:23:37.185 }, 00:23:37.185 { 00:23:37.185 "subsystem": "sock", 00:23:37.185 "config": [ 00:23:37.185 { 00:23:37.185 "method": "sock_set_default_impl", 00:23:37.185 "params": { 00:23:37.185 "impl_name": "posix" 00:23:37.185 } 00:23:37.185 }, 00:23:37.185 { 00:23:37.185 "method": "sock_impl_set_options", 00:23:37.185 "params": { 00:23:37.185 "impl_name": "ssl", 00:23:37.185 "recv_buf_size": 4096, 00:23:37.185 "send_buf_size": 4096, 00:23:37.185 "enable_recv_pipe": true, 00:23:37.185 "enable_quickack": false, 00:23:37.185 "enable_placement_id": 0, 00:23:37.185 "enable_zerocopy_send_server": true, 00:23:37.185 "enable_zerocopy_send_client": false, 00:23:37.185 "zerocopy_threshold": 0, 00:23:37.185 "tls_version": 0, 00:23:37.185 "enable_ktls": false 00:23:37.185 } 00:23:37.185 }, 00:23:37.185 { 00:23:37.185 "method": "sock_impl_set_options", 00:23:37.185 "params": { 00:23:37.185 "impl_name": "posix", 00:23:37.185 "recv_buf_size": 2097152, 00:23:37.185 "send_buf_size": 2097152, 00:23:37.185 "enable_recv_pipe": true, 00:23:37.185 "enable_quickack": false, 00:23:37.185 "enable_placement_id": 0, 00:23:37.185 "enable_zerocopy_send_server": true, 00:23:37.185 "enable_zerocopy_send_client": false, 00:23:37.185 "zerocopy_threshold": 0, 00:23:37.185 "tls_version": 0, 00:23:37.185 "enable_ktls": false 00:23:37.185 } 00:23:37.185 } 00:23:37.185 ] 00:23:37.185 }, 00:23:37.185 { 00:23:37.185 "subsystem": "vmd", 00:23:37.185 "config": [] 00:23:37.185 }, 00:23:37.185 { 00:23:37.185 "subsystem": "accel", 00:23:37.185 "config": [ 00:23:37.185 { 00:23:37.185 "method": "accel_set_options", 00:23:37.185 "params": { 00:23:37.185 "small_cache_size": 128, 00:23:37.185 "large_cache_size": 16, 00:23:37.185 "task_count": 2048, 00:23:37.185 "sequence_count": 2048, 00:23:37.185 "buf_count": 2048 00:23:37.185 } 00:23:37.185 } 00:23:37.185 ] 00:23:37.185 }, 00:23:37.185 { 00:23:37.185 "subsystem": "bdev", 00:23:37.185 "config": [ 00:23:37.185 { 00:23:37.185 "method": "bdev_set_options", 00:23:37.185 "params": { 00:23:37.185 "bdev_io_pool_size": 65535, 00:23:37.185 "bdev_io_cache_size": 256, 00:23:37.185 "bdev_auto_examine": true, 00:23:37.185 "iobuf_small_cache_size": 128, 00:23:37.185 "iobuf_large_cache_size": 16 00:23:37.185 } 00:23:37.185 }, 00:23:37.185 { 00:23:37.185 "method": "bdev_raid_set_options", 00:23:37.185 "params": { 00:23:37.185 "process_window_size_kb": 1024, 00:23:37.185 "process_max_bandwidth_mb_sec": 0 00:23:37.185 } 00:23:37.185 }, 00:23:37.185 { 00:23:37.185 "method": "bdev_iscsi_set_options", 00:23:37.185 "params": { 00:23:37.185 "timeout_sec": 30 00:23:37.185 } 00:23:37.185 }, 00:23:37.185 { 00:23:37.185 "method": "bdev_nvme_set_options", 00:23:37.185 "params": { 00:23:37.185 "action_on_timeout": "none", 00:23:37.185 
"timeout_us": 0, 00:23:37.185 "timeout_admin_us": 0, 00:23:37.185 "keep_alive_timeout_ms": 10000, 00:23:37.185 "arbitration_burst": 0, 00:23:37.185 "low_priority_weight": 0, 00:23:37.185 "medium_priority_weight": 0, 00:23:37.185 "high_priority_weight": 0, 00:23:37.185 "nvme_adminq_poll_period_us": 10000, 00:23:37.185 "nvme_ioq_poll_period_us": 0, 00:23:37.185 "io_queue_requests": 0, 00:23:37.185 "delay_cmd_submit": true, 00:23:37.185 "transport_retry_count": 4, 00:23:37.185 "bdev_retry_count": 3, 00:23:37.185 "transport_ack_timeout": 0, 00:23:37.185 "ctrlr_loss_timeout_sec": 0, 00:23:37.185 "reconnect_delay_sec": 0, 00:23:37.185 "fast_io_fail_timeout_sec": 0, 00:23:37.185 "disable_auto_failback": false, 00:23:37.185 "generate_uuids": false, 00:23:37.185 "transport_tos": 0, 00:23:37.185 "nvme_error_stat": false, 00:23:37.185 "rdma_srq_size": 0, 00:23:37.185 "io_path_stat": false, 00:23:37.185 "allow_accel_sequence": false, 00:23:37.185 "rdma_max_cq_size": 0, 00:23:37.185 "rdma_cm_event_timeout_ms": 0, 00:23:37.185 "dhchap_digests": [ 00:23:37.185 "sha256", 00:23:37.185 "sha384", 00:23:37.185 "sha512" 00:23:37.185 ], 00:23:37.185 "dhchap_dhgroups": [ 00:23:37.185 "null", 00:23:37.185 "ffdhe2048", 00:23:37.185 "ffdhe3072", 00:23:37.185 "ffdhe4096", 00:23:37.185 "ffdhe6144", 00:23:37.185 "ffdhe8192" 00:23:37.185 ] 00:23:37.185 } 00:23:37.185 }, 00:23:37.185 { 00:23:37.185 "method": "bdev_nvme_set_hotplug", 00:23:37.185 "params": { 00:23:37.185 "period_us": 100000, 00:23:37.185 "enable": false 00:23:37.185 } 00:23:37.185 }, 00:23:37.185 { 00:23:37.185 "method": "bdev_malloc_create", 00:23:37.185 "params": { 00:23:37.185 "name": "malloc0", 00:23:37.185 "num_blocks": 8192, 00:23:37.185 "block_size": 4096, 00:23:37.185 "physical_block_size": 4096, 00:23:37.185 "uuid": "1dad17d9-ff68-4f38-9c0d-e5e4132ddb56", 00:23:37.185 "optimal_io_boundary": 0, 00:23:37.185 "md_size": 0, 00:23:37.185 "dif_type": 0, 00:23:37.185 "dif_is_head_of_md": false, 00:23:37.185 "dif_pi_format": 0 00:23:37.185 } 00:23:37.185 }, 00:23:37.185 { 00:23:37.185 "method": "bdev_wait_for_examine" 00:23:37.185 } 00:23:37.185 ] 00:23:37.185 }, 00:23:37.185 { 00:23:37.185 "subsystem": "nbd", 00:23:37.185 "config": [] 00:23:37.185 }, 00:23:37.185 { 00:23:37.185 "subsystem": "scheduler", 00:23:37.185 "config": [ 00:23:37.185 { 00:23:37.185 "method": "framework_set_scheduler", 00:23:37.185 "params": { 00:23:37.185 "name": "static" 00:23:37.185 } 00:23:37.185 } 00:23:37.185 ] 00:23:37.185 }, 00:23:37.185 { 00:23:37.185 "subsystem": "nvmf", 00:23:37.185 "config": [ 00:23:37.185 { 00:23:37.185 "method": "nvmf_set_config", 00:23:37.185 "params": { 00:23:37.185 "discovery_filter": "match_any", 00:23:37.185 "admin_cmd_passthru": { 00:23:37.185 "identify_ctrlr": false 00:23:37.186 }, 00:23:37.186 "dhchap_digests": [ 00:23:37.186 "sha256", 00:23:37.186 "sha384", 00:23:37.186 "sha512" 00:23:37.186 ], 00:23:37.186 "dhchap_dhgroups": [ 00:23:37.186 "null", 00:23:37.186 "ffdhe2048", 00:23:37.186 "ffdhe3072", 00:23:37.186 "ffdhe4096", 00:23:37.186 "ffdhe6144", 00:23:37.186 "ffdhe8192" 00:23:37.186 ] 00:23:37.186 } 00:23:37.186 }, 00:23:37.186 { 00:23:37.186 "method": "nvmf_set_max_subsystems", 00:23:37.186 "params": { 00:23:37.186 "max_subsystems": 1024 00:23:37.186 } 00:23:37.186 }, 00:23:37.186 { 00:23:37.186 "method": "nvmf_set_crdt", 00:23:37.186 "params": { 00:23:37.186 "crdt1": 0, 00:23:37.186 "crdt2": 0, 00:23:37.186 "crdt3": 0 00:23:37.186 } 00:23:37.186 }, 00:23:37.186 { 00:23:37.186 "method": "nvmf_create_transport", 00:23:37.186 "params": 
{ 00:23:37.186 "trtype": "TCP", 00:23:37.186 "max_queue_depth": 128, 00:23:37.186 "max_io_qpairs_per_ctrlr": 127, 00:23:37.186 "in_capsule_data_size": 4096, 00:23:37.186 "max_io_size": 131072, 00:23:37.186 "io_unit_size": 131072, 00:23:37.186 "max_aq_depth": 128, 00:23:37.186 "num_shared_buffers": 511, 00:23:37.186 "buf_cache_size": 4294967295, 00:23:37.186 "dif_insert_or_strip": false, 00:23:37.186 "zcopy": false, 00:23:37.186 "c2h_success": false, 00:23:37.186 "sock_priority": 0, 00:23:37.186 "abort_timeout_sec": 1, 00:23:37.186 "ack_timeout": 0, 00:23:37.186 "data_wr_pool_size": 0 00:23:37.186 } 00:23:37.186 }, 00:23:37.186 { 00:23:37.186 "method": "nvmf_create_subsystem", 00:23:37.186 "params": { 00:23:37.186 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:37.186 "allow_any_host": false, 00:23:37.186 "serial_number": "00000000000000000000", 00:23:37.186 "model_number": "SPDK bdev Controller", 00:23:37.186 "max_namespaces": 32, 00:23:37.186 "min_cntlid": 1, 00:23:37.186 "max_cntlid": 65519, 00:23:37.186 "ana_reporting": false 00:23:37.186 } 00:23:37.186 }, 00:23:37.186 { 00:23:37.186 "method": "nvmf_subsystem_add_host", 00:23:37.186 "params": { 00:23:37.186 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:37.186 "host": "nqn.2016-06.io.spdk:host1", 00:23:37.186 "psk": "key0" 00:23:37.186 } 00:23:37.186 }, 00:23:37.186 { 00:23:37.186 "method": "nvmf_subsystem_add_ns", 00:23:37.186 "params": { 00:23:37.186 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:37.186 "namespace": { 00:23:37.186 "nsid": 1, 00:23:37.186 "bdev_name": "malloc0", 00:23:37.186 "nguid": "1DAD17D9FF684F389C0DE5E4132DDB56", 00:23:37.186 "uuid": "1dad17d9-ff68-4f38-9c0d-e5e4132ddb56", 00:23:37.186 "no_auto_visible": false 00:23:37.186 } 00:23:37.186 } 00:23:37.186 }, 00:23:37.186 { 00:23:37.186 "method": "nvmf_subsystem_add_listener", 00:23:37.186 "params": { 00:23:37.186 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:37.186 "listen_address": { 00:23:37.186 "trtype": "TCP", 00:23:37.186 "adrfam": "IPv4", 00:23:37.186 "traddr": "10.0.0.2", 00:23:37.186 "trsvcid": "4420" 00:23:37.186 }, 00:23:37.186 "secure_channel": false, 00:23:37.186 "sock_impl": "ssl" 00:23:37.186 } 00:23:37.186 } 00:23:37.186 ] 00:23:37.186 } 00:23:37.186 ] 00:23:37.186 }' 00:23:37.186 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:37.444 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:23:37.444 "subsystems": [ 00:23:37.444 { 00:23:37.444 "subsystem": "keyring", 00:23:37.444 "config": [ 00:23:37.444 { 00:23:37.444 "method": "keyring_file_add_key", 00:23:37.444 "params": { 00:23:37.444 "name": "key0", 00:23:37.444 "path": "/tmp/tmp.vZujivjOiX" 00:23:37.444 } 00:23:37.444 } 00:23:37.444 ] 00:23:37.444 }, 00:23:37.444 { 00:23:37.444 "subsystem": "iobuf", 00:23:37.444 "config": [ 00:23:37.444 { 00:23:37.444 "method": "iobuf_set_options", 00:23:37.444 "params": { 00:23:37.444 "small_pool_count": 8192, 00:23:37.444 "large_pool_count": 1024, 00:23:37.444 "small_bufsize": 8192, 00:23:37.444 "large_bufsize": 135168, 00:23:37.444 "enable_numa": false 00:23:37.444 } 00:23:37.444 } 00:23:37.444 ] 00:23:37.444 }, 00:23:37.444 { 00:23:37.444 "subsystem": "sock", 00:23:37.444 "config": [ 00:23:37.444 { 00:23:37.444 "method": "sock_set_default_impl", 00:23:37.444 "params": { 00:23:37.444 "impl_name": "posix" 00:23:37.445 } 00:23:37.445 }, 00:23:37.445 { 00:23:37.445 "method": "sock_impl_set_options", 00:23:37.445 
"params": { 00:23:37.445 "impl_name": "ssl", 00:23:37.445 "recv_buf_size": 4096, 00:23:37.445 "send_buf_size": 4096, 00:23:37.445 "enable_recv_pipe": true, 00:23:37.445 "enable_quickack": false, 00:23:37.445 "enable_placement_id": 0, 00:23:37.445 "enable_zerocopy_send_server": true, 00:23:37.445 "enable_zerocopy_send_client": false, 00:23:37.445 "zerocopy_threshold": 0, 00:23:37.445 "tls_version": 0, 00:23:37.445 "enable_ktls": false 00:23:37.445 } 00:23:37.445 }, 00:23:37.445 { 00:23:37.445 "method": "sock_impl_set_options", 00:23:37.445 "params": { 00:23:37.445 "impl_name": "posix", 00:23:37.445 "recv_buf_size": 2097152, 00:23:37.445 "send_buf_size": 2097152, 00:23:37.445 "enable_recv_pipe": true, 00:23:37.445 "enable_quickack": false, 00:23:37.445 "enable_placement_id": 0, 00:23:37.445 "enable_zerocopy_send_server": true, 00:23:37.445 "enable_zerocopy_send_client": false, 00:23:37.445 "zerocopy_threshold": 0, 00:23:37.445 "tls_version": 0, 00:23:37.445 "enable_ktls": false 00:23:37.445 } 00:23:37.445 } 00:23:37.445 ] 00:23:37.445 }, 00:23:37.445 { 00:23:37.445 "subsystem": "vmd", 00:23:37.445 "config": [] 00:23:37.445 }, 00:23:37.445 { 00:23:37.445 "subsystem": "accel", 00:23:37.445 "config": [ 00:23:37.445 { 00:23:37.445 "method": "accel_set_options", 00:23:37.445 "params": { 00:23:37.445 "small_cache_size": 128, 00:23:37.445 "large_cache_size": 16, 00:23:37.445 "task_count": 2048, 00:23:37.445 "sequence_count": 2048, 00:23:37.445 "buf_count": 2048 00:23:37.445 } 00:23:37.445 } 00:23:37.445 ] 00:23:37.445 }, 00:23:37.445 { 00:23:37.445 "subsystem": "bdev", 00:23:37.445 "config": [ 00:23:37.445 { 00:23:37.445 "method": "bdev_set_options", 00:23:37.445 "params": { 00:23:37.445 "bdev_io_pool_size": 65535, 00:23:37.445 "bdev_io_cache_size": 256, 00:23:37.445 "bdev_auto_examine": true, 00:23:37.445 "iobuf_small_cache_size": 128, 00:23:37.445 "iobuf_large_cache_size": 16 00:23:37.445 } 00:23:37.445 }, 00:23:37.445 { 00:23:37.445 "method": "bdev_raid_set_options", 00:23:37.445 "params": { 00:23:37.445 "process_window_size_kb": 1024, 00:23:37.445 "process_max_bandwidth_mb_sec": 0 00:23:37.445 } 00:23:37.445 }, 00:23:37.445 { 00:23:37.445 "method": "bdev_iscsi_set_options", 00:23:37.445 "params": { 00:23:37.445 "timeout_sec": 30 00:23:37.445 } 00:23:37.445 }, 00:23:37.445 { 00:23:37.445 "method": "bdev_nvme_set_options", 00:23:37.445 "params": { 00:23:37.445 "action_on_timeout": "none", 00:23:37.445 "timeout_us": 0, 00:23:37.445 "timeout_admin_us": 0, 00:23:37.445 "keep_alive_timeout_ms": 10000, 00:23:37.445 "arbitration_burst": 0, 00:23:37.445 "low_priority_weight": 0, 00:23:37.445 "medium_priority_weight": 0, 00:23:37.445 "high_priority_weight": 0, 00:23:37.445 "nvme_adminq_poll_period_us": 10000, 00:23:37.445 "nvme_ioq_poll_period_us": 0, 00:23:37.445 "io_queue_requests": 512, 00:23:37.445 "delay_cmd_submit": true, 00:23:37.445 "transport_retry_count": 4, 00:23:37.445 "bdev_retry_count": 3, 00:23:37.445 "transport_ack_timeout": 0, 00:23:37.445 "ctrlr_loss_timeout_sec": 0, 00:23:37.445 "reconnect_delay_sec": 0, 00:23:37.445 "fast_io_fail_timeout_sec": 0, 00:23:37.445 "disable_auto_failback": false, 00:23:37.445 "generate_uuids": false, 00:23:37.445 "transport_tos": 0, 00:23:37.445 "nvme_error_stat": false, 00:23:37.445 "rdma_srq_size": 0, 00:23:37.445 "io_path_stat": false, 00:23:37.445 "allow_accel_sequence": false, 00:23:37.445 "rdma_max_cq_size": 0, 00:23:37.445 "rdma_cm_event_timeout_ms": 0, 00:23:37.445 "dhchap_digests": [ 00:23:37.445 "sha256", 00:23:37.445 "sha384", 00:23:37.445 
"sha512" 00:23:37.445 ], 00:23:37.445 "dhchap_dhgroups": [ 00:23:37.445 "null", 00:23:37.445 "ffdhe2048", 00:23:37.445 "ffdhe3072", 00:23:37.445 "ffdhe4096", 00:23:37.445 "ffdhe6144", 00:23:37.445 "ffdhe8192" 00:23:37.445 ] 00:23:37.445 } 00:23:37.445 }, 00:23:37.445 { 00:23:37.445 "method": "bdev_nvme_attach_controller", 00:23:37.445 "params": { 00:23:37.445 "name": "nvme0", 00:23:37.445 "trtype": "TCP", 00:23:37.445 "adrfam": "IPv4", 00:23:37.445 "traddr": "10.0.0.2", 00:23:37.445 "trsvcid": "4420", 00:23:37.445 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:37.445 "prchk_reftag": false, 00:23:37.445 "prchk_guard": false, 00:23:37.445 "ctrlr_loss_timeout_sec": 0, 00:23:37.445 "reconnect_delay_sec": 0, 00:23:37.445 "fast_io_fail_timeout_sec": 0, 00:23:37.445 "psk": "key0", 00:23:37.445 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:37.445 "hdgst": false, 00:23:37.445 "ddgst": false, 00:23:37.445 "multipath": "multipath" 00:23:37.445 } 00:23:37.445 }, 00:23:37.445 { 00:23:37.445 "method": "bdev_nvme_set_hotplug", 00:23:37.445 "params": { 00:23:37.445 "period_us": 100000, 00:23:37.445 "enable": false 00:23:37.445 } 00:23:37.445 }, 00:23:37.445 { 00:23:37.445 "method": "bdev_enable_histogram", 00:23:37.445 "params": { 00:23:37.445 "name": "nvme0n1", 00:23:37.445 "enable": true 00:23:37.445 } 00:23:37.445 }, 00:23:37.445 { 00:23:37.445 "method": "bdev_wait_for_examine" 00:23:37.445 } 00:23:37.445 ] 00:23:37.445 }, 00:23:37.445 { 00:23:37.445 "subsystem": "nbd", 00:23:37.445 "config": [] 00:23:37.445 } 00:23:37.445 ] 00:23:37.445 }' 00:23:37.445 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 3005993 00:23:37.445 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3005993 ']' 00:23:37.445 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3005993 00:23:37.445 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:37.445 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:37.445 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3005993 00:23:37.445 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:37.445 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:37.445 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3005993' 00:23:37.445 killing process with pid 3005993 00:23:37.445 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3005993 00:23:37.445 Received shutdown signal, test time was about 1.000000 seconds 00:23:37.445 00:23:37.445 Latency(us) 00:23:37.445 [2024-11-17T08:23:42.459Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:37.446 [2024-11-17T08:23:42.459Z] =================================================================================================================== 00:23:37.446 [2024-11-17T08:23:42.459Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:37.446 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3005993 00:23:38.378 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 3005838 00:23:38.378 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3005838 
']' 00:23:38.378 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3005838 00:23:38.378 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:38.378 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:38.378 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3005838 00:23:38.636 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:38.636 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:38.636 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3005838' 00:23:38.636 killing process with pid 3005838 00:23:38.636 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3005838 00:23:38.636 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3005838 00:23:39.571 09:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:23:39.571 09:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:39.571 09:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:23:39.571 "subsystems": [ 00:23:39.571 { 00:23:39.571 "subsystem": "keyring", 00:23:39.571 "config": [ 00:23:39.571 { 00:23:39.571 "method": "keyring_file_add_key", 00:23:39.571 "params": { 00:23:39.571 "name": "key0", 00:23:39.571 "path": "/tmp/tmp.vZujivjOiX" 00:23:39.571 } 00:23:39.571 } 00:23:39.571 ] 00:23:39.571 }, 00:23:39.571 { 00:23:39.571 "subsystem": "iobuf", 00:23:39.571 "config": [ 00:23:39.571 { 00:23:39.571 "method": "iobuf_set_options", 00:23:39.571 "params": { 00:23:39.571 "small_pool_count": 8192, 00:23:39.571 "large_pool_count": 1024, 00:23:39.571 "small_bufsize": 8192, 00:23:39.571 "large_bufsize": 135168, 00:23:39.571 "enable_numa": false 00:23:39.571 } 00:23:39.571 } 00:23:39.571 ] 00:23:39.571 }, 00:23:39.571 { 00:23:39.571 "subsystem": "sock", 00:23:39.571 "config": [ 00:23:39.571 { 00:23:39.571 "method": "sock_set_default_impl", 00:23:39.571 "params": { 00:23:39.571 "impl_name": "posix" 00:23:39.571 } 00:23:39.571 }, 00:23:39.571 { 00:23:39.571 "method": "sock_impl_set_options", 00:23:39.571 "params": { 00:23:39.571 "impl_name": "ssl", 00:23:39.571 "recv_buf_size": 4096, 00:23:39.571 "send_buf_size": 4096, 00:23:39.571 "enable_recv_pipe": true, 00:23:39.571 "enable_quickack": false, 00:23:39.571 "enable_placement_id": 0, 00:23:39.571 "enable_zerocopy_send_server": true, 00:23:39.571 "enable_zerocopy_send_client": false, 00:23:39.571 "zerocopy_threshold": 0, 00:23:39.571 "tls_version": 0, 00:23:39.571 "enable_ktls": false 00:23:39.571 } 00:23:39.571 }, 00:23:39.571 { 00:23:39.571 "method": "sock_impl_set_options", 00:23:39.571 "params": { 00:23:39.571 "impl_name": "posix", 00:23:39.571 "recv_buf_size": 2097152, 00:23:39.571 "send_buf_size": 2097152, 00:23:39.571 "enable_recv_pipe": true, 00:23:39.571 "enable_quickack": false, 00:23:39.571 "enable_placement_id": 0, 00:23:39.571 "enable_zerocopy_send_server": true, 00:23:39.571 "enable_zerocopy_send_client": false, 00:23:39.571 "zerocopy_threshold": 0, 00:23:39.571 "tls_version": 0, 00:23:39.571 "enable_ktls": false 00:23:39.572 } 00:23:39.572 } 00:23:39.572 ] 00:23:39.572 }, 00:23:39.572 { 00:23:39.572 "subsystem": 
"vmd", 00:23:39.572 "config": [] 00:23:39.572 }, 00:23:39.572 { 00:23:39.572 "subsystem": "accel", 00:23:39.572 "config": [ 00:23:39.572 { 00:23:39.572 "method": "accel_set_options", 00:23:39.572 "params": { 00:23:39.572 "small_cache_size": 128, 00:23:39.572 "large_cache_size": 16, 00:23:39.572 "task_count": 2048, 00:23:39.572 "sequence_count": 2048, 00:23:39.572 "buf_count": 2048 00:23:39.572 } 00:23:39.572 } 00:23:39.572 ] 00:23:39.572 }, 00:23:39.572 { 00:23:39.572 "subsystem": "bdev", 00:23:39.572 "config": [ 00:23:39.572 { 00:23:39.572 "method": "bdev_set_options", 00:23:39.572 "params": { 00:23:39.572 "bdev_io_pool_size": 65535, 00:23:39.572 "bdev_io_cache_size": 256, 00:23:39.572 "bdev_auto_examine": true, 00:23:39.572 "iobuf_small_cache_size": 128, 00:23:39.572 "iobuf_large_cache_size": 16 00:23:39.572 } 00:23:39.572 }, 00:23:39.572 { 00:23:39.572 "method": "bdev_raid_set_options", 00:23:39.572 "params": { 00:23:39.572 "process_window_size_kb": 1024, 00:23:39.572 "process_max_bandwidth_mb_sec": 0 00:23:39.572 } 00:23:39.572 }, 00:23:39.572 { 00:23:39.572 "method": "bdev_iscsi_set_options", 00:23:39.572 "params": { 00:23:39.572 "timeout_sec": 30 00:23:39.572 } 00:23:39.572 }, 00:23:39.572 { 00:23:39.572 "method": "bdev_nvme_set_options", 00:23:39.572 "params": { 00:23:39.572 "action_on_timeout": "none", 00:23:39.572 "timeout_us": 0, 00:23:39.572 "timeout_admin_us": 0, 00:23:39.572 "keep_alive_timeout_ms": 10000, 00:23:39.572 "arbitration_burst": 0, 00:23:39.572 "low_priority_weight": 0, 00:23:39.572 "medium_priority_weight": 0, 00:23:39.572 "high_priority_weight": 0, 00:23:39.572 "nvme_adminq_poll_period_us": 10000, 00:23:39.572 "nvme_ioq_poll_period_us": 0, 00:23:39.572 "io_queue_requests": 0, 00:23:39.572 "delay_cmd_submit": true, 00:23:39.572 "transport_retry_count": 4, 00:23:39.572 "bdev_retry_count": 3, 00:23:39.572 "transport_ack_timeout": 0, 00:23:39.572 "ctrlr_loss_timeout_sec": 0, 00:23:39.572 "reconnect_delay_sec": 0, 00:23:39.572 "fast_io_fail_timeout_sec": 0, 00:23:39.572 "disable_auto_failback": false, 00:23:39.572 "generate_uuids": false, 00:23:39.572 "transport_tos": 0, 00:23:39.572 "nvme_error_stat": false, 00:23:39.572 "rdma_srq_size": 0, 00:23:39.572 "io_path_stat": false, 00:23:39.572 "allow_accel_sequence": false, 00:23:39.572 "rdma_max_cq_size": 0, 00:23:39.572 "rdma_cm_event_timeout_ms": 0, 00:23:39.572 "dhchap_digests": [ 00:23:39.572 "sha256", 00:23:39.572 "sha384", 00:23:39.572 "sha512" 00:23:39.572 ], 00:23:39.572 "dhchap_dhgroups": [ 00:23:39.572 "null", 00:23:39.572 "ffdhe2048", 00:23:39.572 "ffdhe3072", 00:23:39.572 "ffdhe4096", 00:23:39.572 "ffdhe6144", 00:23:39.572 "ffdhe8192" 00:23:39.572 ] 00:23:39.572 } 00:23:39.572 }, 00:23:39.572 { 00:23:39.572 "method": "bdev_nvme_set_hotplug", 00:23:39.572 "params": { 00:23:39.572 "period_us": 100000, 00:23:39.572 "enable": false 00:23:39.572 } 00:23:39.572 }, 00:23:39.572 { 00:23:39.572 "method": "bdev_malloc_create", 00:23:39.572 "params": { 00:23:39.572 "name": "malloc0", 00:23:39.572 "num_blocks": 8192, 00:23:39.572 "block_size": 4096, 00:23:39.572 "physical_block_size": 4096, 00:23:39.572 "uuid": "1dad17d9-ff68-4f38-9c0d-e5e4132ddb56", 00:23:39.572 "optimal_io_boundary": 0, 00:23:39.572 "md_size": 0, 00:23:39.572 "dif_type": 0, 00:23:39.572 "dif_is_head_of_md": false, 00:23:39.572 "dif_pi_format": 0 00:23:39.572 } 00:23:39.572 }, 00:23:39.572 { 00:23:39.572 "method": "bdev_wait_for_examine" 00:23:39.572 } 00:23:39.572 ] 00:23:39.572 }, 00:23:39.572 { 00:23:39.572 "subsystem": "nbd", 00:23:39.572 "config": 
[] 00:23:39.572 }, 00:23:39.572 { 00:23:39.572 "subsystem": "scheduler", 00:23:39.572 "config": [ 00:23:39.572 { 00:23:39.572 "method": "framework_set_scheduler", 00:23:39.572 "params": { 00:23:39.572 "name": "static" 00:23:39.572 } 00:23:39.572 } 00:23:39.572 ] 00:23:39.572 }, 00:23:39.572 { 00:23:39.572 "subsystem": "nvmf", 00:23:39.572 "config": [ 00:23:39.572 { 00:23:39.572 "method": "nvmf_set_config", 00:23:39.572 "params": { 00:23:39.572 "discovery_filter": "match_any", 00:23:39.572 "admin_cmd_passthru": { 00:23:39.572 "identify_ctrlr": false 00:23:39.572 }, 00:23:39.572 "dhchap_digests": [ 00:23:39.572 "sha256", 00:23:39.572 "sha384", 00:23:39.572 "sha512" 00:23:39.572 ], 00:23:39.572 "dhchap_dhgroups": [ 00:23:39.572 "null", 00:23:39.572 "ffdhe2048", 00:23:39.572 "ffdhe3072", 00:23:39.572 "ffdhe4096", 00:23:39.572 "ffdhe6144", 00:23:39.572 "ffdhe8192" 00:23:39.572 ] 00:23:39.572 } 00:23:39.572 }, 00:23:39.572 { 00:23:39.572 "method": "nvmf_set_max_subsystems", 00:23:39.572 "params": { 00:23:39.572 "max_subsystems": 1024 00:23:39.572 } 00:23:39.572 }, 00:23:39.572 { 00:23:39.572 "method": "nvmf_set_crdt", 00:23:39.572 "params": { 00:23:39.572 "crdt1": 0, 00:23:39.572 "crdt2": 0, 00:23:39.572 "crdt3": 0 00:23:39.572 } 00:23:39.572 }, 00:23:39.572 { 00:23:39.572 "method": "nvmf_create_transport", 00:23:39.572 "params": { 00:23:39.572 "trtype": "TCP", 00:23:39.572 "max_queue_depth": 128, 00:23:39.572 "max_io_qpairs_per_ctrlr": 127, 00:23:39.572 "in_capsule_data_size": 4096, 00:23:39.572 "max_io_size": 131072, 00:23:39.572 "io_unit_size": 131072, 00:23:39.572 "max_aq_depth": 128, 00:23:39.572 "num_shared_buffers": 511, 00:23:39.572 "buf_cache_size": 4294967295, 00:23:39.572 "dif_insert_or_strip": false, 00:23:39.572 "zcopy": false, 00:23:39.572 "c2h_success": false, 00:23:39.572 "sock_priority": 0, 00:23:39.572 "abort_timeout_sec": 1, 00:23:39.572 "ack_timeout": 0, 00:23:39.572 "data_wr_pool_size": 0 00:23:39.572 } 00:23:39.572 }, 00:23:39.572 { 00:23:39.572 "method": "nvmf_create_subsystem", 00:23:39.572 "params": { 00:23:39.572 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:39.572 "allow_any_host": false, 00:23:39.572 "serial_number": "00000000000000000000", 00:23:39.572 "model_number": "SPDK bdev Controller", 00:23:39.572 "max_namespaces": 32, 00:23:39.572 "min_cntlid": 1, 00:23:39.572 "max_cntlid": 65519, 00:23:39.572 "ana_reporting": false 00:23:39.572 } 00:23:39.572 }, 00:23:39.572 { 00:23:39.572 "method": "nvmf_subsystem_add_host", 00:23:39.572 "params": { 00:23:39.572 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:39.572 "host": "nqn.2016-06.io.spdk:host1", 00:23:39.572 "psk": "key0" 00:23:39.572 } 00:23:39.572 }, 00:23:39.572 { 00:23:39.572 "method": "nvmf_subsystem_add_ns", 00:23:39.572 "params": { 00:23:39.572 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:39.572 "namespace": { 00:23:39.572 "nsid": 1, 00:23:39.572 "bdev_name": "malloc0", 00:23:39.572 "nguid": "1DAD17D9FF684F389C0DE5E4132DDB56", 00:23:39.572 "uuid": "1dad17d9-ff68-4f38-9c0d-e5e4132ddb56", 00:23:39.572 "no_auto_visible": false 00:23:39.572 } 00:23:39.572 } 00:23:39.572 }, 00:23:39.572 { 00:23:39.572 "method": "nvmf_subsystem_add_listener", 00:23:39.572 "params": { 00:23:39.572 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:39.572 "listen_address": { 00:23:39.572 "trtype": "TCP", 00:23:39.572 "adrfam": "IPv4", 00:23:39.572 "traddr": "10.0.0.2", 00:23:39.572 "trsvcid": "4420" 00:23:39.572 }, 00:23:39.572 "secure_channel": false, 00:23:39.572 "sock_impl": "ssl" 00:23:39.572 } 00:23:39.572 } 00:23:39.572 ] 00:23:39.572 } 
00:23:39.572 ] 00:23:39.572 }' 00:23:39.572 09:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:39.572 09:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:39.572 09:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3006679 00:23:39.572 09:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:23:39.572 09:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3006679 00:23:39.572 09:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3006679 ']' 00:23:39.572 09:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:39.572 09:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:39.572 09:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:39.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:39.572 09:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:39.572 09:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:39.831 [2024-11-17 09:23:44.651010] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:23:39.831 [2024-11-17 09:23:44.651168] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:39.831 [2024-11-17 09:23:44.802686] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:40.089 [2024-11-17 09:23:44.938970] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:40.089 [2024-11-17 09:23:44.939058] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:40.089 [2024-11-17 09:23:44.939085] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:40.089 [2024-11-17 09:23:44.939114] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:40.089 [2024-11-17 09:23:44.939134] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
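The JSON dumped above is the full configuration the test pipes into nvmf_tgt through -c /dev/fd/62. For orientation only, a minimal hand-written config carrying just the TLS-relevant calls could look like the sketch below; the NQNs, address, port and key name are copied from the dump, while the PSK file path is a hypothetical stand-in for the test's temporary key file, and the malloc bdev / namespace wiring from the real dump is omitted for brevity.

# Minimal sketch, not the test's actual file: register a PSK under the name "key0"
# and expose cnode1 over TLS-capable TCP on 10.0.0.2:4420.
cat > /tmp/tls_target.json <<'EOF'
{
  "subsystems": [
    { "subsystem": "keyring", "config": [
      { "method": "keyring_file_add_key",
        "params": { "name": "key0", "path": "/tmp/psk.key" } } ] },
    { "subsystem": "nvmf", "config": [
      { "method": "nvmf_create_transport", "params": { "trtype": "TCP" } },
      { "method": "nvmf_create_subsystem",
        "params": { "nqn": "nqn.2016-06.io.spdk:cnode1", "allow_any_host": false } },
      { "method": "nvmf_subsystem_add_host",
        "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                    "host": "nqn.2016-06.io.spdk:host1", "psk": "key0" } },
      { "method": "nvmf_subsystem_add_listener",
        "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                    "listen_address": { "trtype": "TCP", "adrfam": "IPv4",
                                        "traddr": "10.0.0.2", "trsvcid": "4420" },
                    "secure_channel": false, "sock_impl": "ssl" } } ] }
  ]
}
EOF
# Same invocation style as the log, minus the network-namespace wrapper:
./build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /tmp/tls_target.json

Everything else in the dump is pool sizing and timeout tuning; what distinguishes this listener from a plain TCP one is the keyring entry, the per-host PSK binding in nvmf_subsystem_add_host, and the ssl sock_impl on the listener.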
00:23:40.090 [2024-11-17 09:23:44.940842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:40.657 [2024-11-17 09:23:45.491472] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:40.657 [2024-11-17 09:23:45.523512] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:40.657 [2024-11-17 09:23:45.523844] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:40.657 09:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:40.657 09:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:40.657 09:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:40.657 09:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:40.657 09:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:40.915 09:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:40.915 09:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=3006831 00:23:40.915 09:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 3006831 /var/tmp/bdevperf.sock 00:23:40.915 09:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3006831 ']' 00:23:40.915 09:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:40.915 09:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:23:40.915 09:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:40.916 09:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:40.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
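bdevperf is started the same way the target was: a JSON config is generated on the fly and handed over through a file descriptor (-c /dev/fd/63), and -z keeps it idle until a perform_tests RPC arrives on /var/tmp/bdevperf.sock. A rough standalone equivalent, with flag meanings as commonly documented for bdevperf (-m core mask, -q queue depth, -o I/O size, -w workload, -t runtime in seconds) and the config saved under a hypothetical file name that the sketch further below fills in, would be:

# Launch the initiator-side benchmark and let it wait for an RPC trigger:
./build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4k -w verify -t 1 -c /tmp/tls_initiator.json &
# Once the socket is up, kick off the run with the helper that ships with SPDK:
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The config it is fed is echoed immediately below.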
00:23:40.916 09:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:23:40.916 "subsystems": [ 00:23:40.916 { 00:23:40.916 "subsystem": "keyring", 00:23:40.916 "config": [ 00:23:40.916 { 00:23:40.916 "method": "keyring_file_add_key", 00:23:40.916 "params": { 00:23:40.916 "name": "key0", 00:23:40.916 "path": "/tmp/tmp.vZujivjOiX" 00:23:40.916 } 00:23:40.916 } 00:23:40.916 ] 00:23:40.916 }, 00:23:40.916 { 00:23:40.916 "subsystem": "iobuf", 00:23:40.916 "config": [ 00:23:40.916 { 00:23:40.916 "method": "iobuf_set_options", 00:23:40.916 "params": { 00:23:40.916 "small_pool_count": 8192, 00:23:40.916 "large_pool_count": 1024, 00:23:40.916 "small_bufsize": 8192, 00:23:40.916 "large_bufsize": 135168, 00:23:40.916 "enable_numa": false 00:23:40.916 } 00:23:40.916 } 00:23:40.916 ] 00:23:40.916 }, 00:23:40.916 { 00:23:40.916 "subsystem": "sock", 00:23:40.916 "config": [ 00:23:40.916 { 00:23:40.916 "method": "sock_set_default_impl", 00:23:40.916 "params": { 00:23:40.916 "impl_name": "posix" 00:23:40.916 } 00:23:40.916 }, 00:23:40.916 { 00:23:40.916 "method": "sock_impl_set_options", 00:23:40.916 "params": { 00:23:40.916 "impl_name": "ssl", 00:23:40.916 "recv_buf_size": 4096, 00:23:40.916 "send_buf_size": 4096, 00:23:40.916 "enable_recv_pipe": true, 00:23:40.916 "enable_quickack": false, 00:23:40.916 "enable_placement_id": 0, 00:23:40.916 "enable_zerocopy_send_server": true, 00:23:40.916 "enable_zerocopy_send_client": false, 00:23:40.916 "zerocopy_threshold": 0, 00:23:40.916 "tls_version": 0, 00:23:40.916 "enable_ktls": false 00:23:40.916 } 00:23:40.916 }, 00:23:40.916 { 00:23:40.916 "method": "sock_impl_set_options", 00:23:40.916 "params": { 00:23:40.916 "impl_name": "posix", 00:23:40.916 "recv_buf_size": 2097152, 00:23:40.916 "send_buf_size": 2097152, 00:23:40.916 "enable_recv_pipe": true, 00:23:40.916 "enable_quickack": false, 00:23:40.916 "enable_placement_id": 0, 00:23:40.916 "enable_zerocopy_send_server": true, 00:23:40.916 "enable_zerocopy_send_client": false, 00:23:40.916 "zerocopy_threshold": 0, 00:23:40.916 "tls_version": 0, 00:23:40.916 "enable_ktls": false 00:23:40.916 } 00:23:40.916 } 00:23:40.916 ] 00:23:40.916 }, 00:23:40.916 { 00:23:40.916 "subsystem": "vmd", 00:23:40.916 "config": [] 00:23:40.916 }, 00:23:40.916 { 00:23:40.916 "subsystem": "accel", 00:23:40.916 "config": [ 00:23:40.916 { 00:23:40.916 "method": "accel_set_options", 00:23:40.916 "params": { 00:23:40.916 "small_cache_size": 128, 00:23:40.916 "large_cache_size": 16, 00:23:40.916 "task_count": 2048, 00:23:40.916 "sequence_count": 2048, 00:23:40.916 "buf_count": 2048 00:23:40.916 } 00:23:40.916 } 00:23:40.916 ] 00:23:40.916 }, 00:23:40.916 { 00:23:40.916 "subsystem": "bdev", 00:23:40.916 "config": [ 00:23:40.916 { 00:23:40.916 "method": "bdev_set_options", 00:23:40.916 "params": { 00:23:40.916 "bdev_io_pool_size": 65535, 00:23:40.916 "bdev_io_cache_size": 256, 00:23:40.916 "bdev_auto_examine": true, 00:23:40.916 "iobuf_small_cache_size": 128, 00:23:40.916 "iobuf_large_cache_size": 16 00:23:40.916 } 00:23:40.916 }, 00:23:40.916 { 00:23:40.916 "method": "bdev_raid_set_options", 00:23:40.916 "params": { 00:23:40.916 "process_window_size_kb": 1024, 00:23:40.916 "process_max_bandwidth_mb_sec": 0 00:23:40.916 } 00:23:40.916 }, 00:23:40.916 { 00:23:40.916 "method": "bdev_iscsi_set_options", 00:23:40.916 "params": { 00:23:40.916 "timeout_sec": 30 00:23:40.916 } 00:23:40.916 }, 00:23:40.916 { 00:23:40.916 "method": "bdev_nvme_set_options", 00:23:40.916 "params": { 00:23:40.916 "action_on_timeout": "none", 
00:23:40.916 "timeout_us": 0, 00:23:40.916 "timeout_admin_us": 0, 00:23:40.916 "keep_alive_timeout_ms": 10000, 00:23:40.916 "arbitration_burst": 0, 00:23:40.916 "low_priority_weight": 0, 00:23:40.916 "medium_priority_weight": 0, 00:23:40.916 "high_priority_weight": 0, 00:23:40.916 "nvme_adminq_poll_period_us": 10000, 00:23:40.916 "nvme_ioq_poll_period_us": 0, 00:23:40.916 "io_queue_requests": 512, 00:23:40.916 "delay_cmd_submit": true, 00:23:40.916 "transport_retry_count": 4, 00:23:40.916 "bdev_retry_count": 3, 00:23:40.916 "transport_ack_timeout": 0, 00:23:40.916 "ctrlr_loss_timeout_sec": 0, 00:23:40.916 "reconnect_delay_sec": 0, 00:23:40.916 "fast_io_fail_timeout_sec": 0, 00:23:40.916 "disable_auto_failback": false, 00:23:40.916 "generate_uuids": false, 00:23:40.916 "transport_tos": 0, 00:23:40.916 "nvme_error_stat": false, 00:23:40.916 "rdma_srq_size": 0, 00:23:40.916 "io_path_stat": false, 00:23:40.916 "allow_accel_sequence": false, 00:23:40.916 "rdma_max_cq_size": 0, 00:23:40.916 "rdma_cm_event_timeout_ms": 0, 00:23:40.916 "dhchap_digests": [ 00:23:40.916 "sha256", 00:23:40.916 "sha384", 00:23:40.916 "sha512" 00:23:40.916 ], 00:23:40.916 "dhchap_dhgroups": [ 00:23:40.916 "null", 00:23:40.916 "ffdhe2048", 00:23:40.916 "ffdhe3072", 00:23:40.916 "ffdhe4096", 00:23:40.916 "ffdhe6144", 00:23:40.916 "ffdhe8192" 00:23:40.916 ] 00:23:40.916 } 00:23:40.916 }, 00:23:40.916 { 00:23:40.916 "method": "bdev_nvme_attach_controller", 00:23:40.916 "params": { 00:23:40.916 "name": "nvme0", 00:23:40.916 "trtype": "TCP", 00:23:40.916 "adrfam": "IPv4", 00:23:40.916 "traddr": "10.0.0.2", 00:23:40.916 "trsvcid": "4420", 00:23:40.916 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:40.916 "prchk_reftag": false, 00:23:40.916 "prchk_guard": false, 00:23:40.916 "ctrlr_loss_timeout_sec": 0, 00:23:40.916 "reconnect_delay_sec": 0, 00:23:40.916 "fast_io_fail_timeout_sec": 0, 00:23:40.916 "psk": "key0", 00:23:40.916 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:40.916 "hdgst": false, 00:23:40.916 "ddgst": false, 00:23:40.916 "multipath": "multipath" 00:23:40.916 } 00:23:40.916 }, 00:23:40.916 { 00:23:40.916 "method": "bdev_nvme_set_hotplug", 00:23:40.916 "params": { 00:23:40.916 "period_us": 100000, 00:23:40.916 "enable": false 00:23:40.916 } 00:23:40.916 }, 00:23:40.916 { 00:23:40.916 "method": "bdev_enable_histogram", 00:23:40.916 "params": { 00:23:40.916 "name": "nvme0n1", 00:23:40.916 "enable": true 00:23:40.916 } 00:23:40.916 }, 00:23:40.916 { 00:23:40.916 "method": "bdev_wait_for_examine" 00:23:40.916 } 00:23:40.916 ] 00:23:40.916 }, 00:23:40.916 { 00:23:40.916 "subsystem": "nbd", 00:23:40.916 "config": [] 00:23:40.916 } 00:23:40.916 ] 00:23:40.916 }' 00:23:40.916 09:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:40.916 09:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:40.916 [2024-11-17 09:23:45.761502] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:23:40.916 [2024-11-17 09:23:45.761656] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3006831 ] 00:23:40.916 [2024-11-17 09:23:45.896743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:41.175 [2024-11-17 09:23:46.020362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:41.433 [2024-11-17 09:23:46.433507] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:42.000 09:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:42.000 09:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:42.000 09:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:42.000 09:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:23:42.000 09:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:42.000 09:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:42.259 Running I/O for 1 seconds... 00:23:43.193 2284.00 IOPS, 8.92 MiB/s 00:23:43.193 Latency(us) 00:23:43.193 [2024-11-17T08:23:48.206Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:43.193 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:43.193 Verification LBA range: start 0x0 length 0x2000 00:23:43.193 nvme0n1 : 1.03 2346.74 9.17 0.00 0.00 53944.56 11408.12 53205.52 00:23:43.193 [2024-11-17T08:23:48.206Z] =================================================================================================================== 00:23:43.193 [2024-11-17T08:23:48.206Z] Total : 2346.74 9.17 0.00 0.00 53944.56 11408.12 53205.52 00:23:43.193 { 00:23:43.193 "results": [ 00:23:43.193 { 00:23:43.193 "job": "nvme0n1", 00:23:43.193 "core_mask": "0x2", 00:23:43.193 "workload": "verify", 00:23:43.193 "status": "finished", 00:23:43.193 "verify_range": { 00:23:43.193 "start": 0, 00:23:43.193 "length": 8192 00:23:43.193 }, 00:23:43.193 "queue_depth": 128, 00:23:43.193 "io_size": 4096, 00:23:43.193 "runtime": 1.027809, 00:23:43.193 "iops": 2346.7395206696965, 00:23:43.193 "mibps": 9.166951252616002, 00:23:43.193 "io_failed": 0, 00:23:43.193 "io_timeout": 0, 00:23:43.193 "avg_latency_us": 53944.55900251826, 00:23:43.193 "min_latency_us": 11408.118518518519, 00:23:43.193 "max_latency_us": 53205.52296296296 00:23:43.193 } 00:23:43.193 ], 00:23:43.193 "core_count": 1 00:23:43.193 } 00:23:43.193 09:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:23:43.193 09:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:23:43.193 09:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:23:43.193 09:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:23:43.193 09:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:23:43.193 09:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = 
--pid ']' 00:23:43.193 09:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:43.193 09:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:23:43.193 09:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:23:43.193 09:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:23:43.193 09:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:43.193 nvmf_trace.0 00:23:43.452 09:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:23:43.452 09:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 3006831 00:23:43.452 09:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3006831 ']' 00:23:43.452 09:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3006831 00:23:43.452 09:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:43.452 09:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:43.452 09:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3006831 00:23:43.452 09:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:43.452 09:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:43.452 09:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3006831' 00:23:43.452 killing process with pid 3006831 00:23:43.452 09:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3006831 00:23:43.452 Received shutdown signal, test time was about 1.000000 seconds 00:23:43.452 00:23:43.452 Latency(us) 00:23:43.452 [2024-11-17T08:23:48.465Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:43.452 [2024-11-17T08:23:48.465Z] =================================================================================================================== 00:23:43.452 [2024-11-17T08:23:48.465Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:43.452 09:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3006831 00:23:44.387 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:23:44.387 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:44.387 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:23:44.387 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:44.387 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:23:44.387 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:44.387 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:44.387 rmmod nvme_tcp 00:23:44.387 rmmod nvme_fabrics 00:23:44.387 rmmod nvme_keyring 00:23:44.387 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:44.387 09:23:49 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:23:44.387 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:23:44.387 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 3006679 ']' 00:23:44.387 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 3006679 00:23:44.387 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3006679 ']' 00:23:44.387 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3006679 00:23:44.387 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:44.387 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:44.387 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3006679 00:23:44.387 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:44.387 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:44.387 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3006679' 00:23:44.387 killing process with pid 3006679 00:23:44.387 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3006679 00:23:44.387 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3006679 00:23:45.762 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:45.762 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:45.762 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:45.762 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:23:45.762 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:23:45.762 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:45.762 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:23:45.762 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:45.762 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:45.762 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:45.762 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:45.762 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:47.667 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:47.667 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.58BPkqZ8HK /tmp/tmp.cmCuwfiIJG /tmp/tmp.vZujivjOiX 00:23:47.667 00:23:47.667 real 1m52.612s 00:23:47.667 user 3m8.919s 00:23:47.667 sys 0m26.360s 00:23:47.667 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:47.667 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:47.667 ************************************ 00:23:47.667 END TEST nvmf_tls 
00:23:47.667 ************************************ 00:23:47.926 09:23:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:47.926 09:23:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:47.926 09:23:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:47.926 09:23:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:47.926 ************************************ 00:23:47.926 START TEST nvmf_fips 00:23:47.926 ************************************ 00:23:47.926 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:47.926 * Looking for test storage... 00:23:47.926 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:23:47.926 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:47.926 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:23:47.926 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:47.926 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:47.926 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:47.926 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:47.926 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:47.926 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:23:47.926 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:23:47.926 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:23:47.926 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:23:47.926 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:23:47.926 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:23:47.927 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:23:47.927 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:47.927 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:23:47.927 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:23:47.927 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:47.927 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:47.927 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:23:47.927 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:23:47.927 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:47.927 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:23:47.927 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:23:47.927 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:23:47.927 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:23:47.927 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:47.927 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:23:47.927 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:23:47.927 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:47.927 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:47.927 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:23:47.927 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:47.927 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:47.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:47.927 --rc genhtml_branch_coverage=1 00:23:47.927 --rc genhtml_function_coverage=1 00:23:47.927 --rc genhtml_legend=1 00:23:47.927 --rc geninfo_all_blocks=1 00:23:47.927 --rc geninfo_unexecuted_blocks=1 00:23:47.927 00:23:47.927 ' 00:23:47.927 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:47.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:47.927 --rc genhtml_branch_coverage=1 00:23:47.927 --rc genhtml_function_coverage=1 00:23:47.927 --rc genhtml_legend=1 00:23:47.927 --rc geninfo_all_blocks=1 00:23:47.927 --rc geninfo_unexecuted_blocks=1 00:23:47.927 00:23:47.927 ' 00:23:47.927 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:47.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:47.927 --rc genhtml_branch_coverage=1 00:23:47.927 --rc genhtml_function_coverage=1 00:23:47.927 --rc genhtml_legend=1 00:23:47.927 --rc geninfo_all_blocks=1 00:23:47.927 --rc geninfo_unexecuted_blocks=1 00:23:47.927 00:23:47.927 ' 00:23:47.927 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:47.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:47.927 --rc genhtml_branch_coverage=1 00:23:47.927 --rc genhtml_function_coverage=1 00:23:47.927 --rc genhtml_legend=1 00:23:47.927 --rc geninfo_all_blocks=1 00:23:47.927 --rc geninfo_unexecuted_blocks=1 00:23:47.927 00:23:47.927 ' 00:23:47.927 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:47.927 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:23:47.927 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:23:47.927 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:47.927 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:47.927 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:47.927 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:47.927 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:47.927 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:47.927 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:47.927 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:47.927 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:47.927 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:47.927 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:47.927 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:47.927 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:47.927 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:47.927 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:47.927 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:47.927 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:23:47.927 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:47.927 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:47.927 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:47.927 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.927 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.927 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.927 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:23:47.927 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.927 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:23:47.927 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:47.927 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:47.927 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:47.927 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:47.927 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:47.927 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:47.927 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:47.927 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:47.927 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:47.927 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:47.927 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:47.927 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:23:47.927 09:23:52 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:23:47.927 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:23:47.927 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:23:47.927 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:23:47.928 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:23:47.928 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:47.928 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:47.928 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:23:47.928 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:23:47.928 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:23:47.928 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:23:47.928 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:23:47.928 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:23:47.928 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:23:47.928 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:47.928 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:23:47.928 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:23:47.928 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:47.928 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:47.928 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:23:47.928 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:23:47.928 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:47.928 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:23:47.928 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:23:47.928 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:23:47.928 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:23:47.928 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:47.928 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:23:47.928 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:23:47.928 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:47.928 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:47.928 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:23:47.928 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:47.928 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:23:47.928 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:23:47.928 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:47.928 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:23:47.928 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:23:47.928 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:23:47.928 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:23:47.928 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:47.928 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:23:47.928 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:23:47.928 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:47.928 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:23:47.928 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:23:47.928 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:23:47.928 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:23:47.928 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:23:47.928 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:23:47.928 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:23:47.928 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:23:47.928 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:23:47.928 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:23:47.928 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:23:47.928 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:23:47.928 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:23:47.928 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:23:47.928 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:23:47.928 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:23:47.928 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:23:48.187 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:23:48.187 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:23:48.187 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:23:48.187 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:23:48.187 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:23:48.187 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:23:48.187 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:23:48.187 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:23:48.187 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:48.187 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:23:48.187 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:48.187 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:23:48.187 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:48.187 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:23:48.187 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:23:48.187 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:23:48.187 Error setting digest 00:23:48.187 400294DBAA7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:23:48.187 400294DBAA7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:23:48.187 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:23:48.187 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:48.187 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:48.187 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:48.187 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:23:48.187 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:48.187 
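The "Error setting digest" lines above are the expected outcome rather than a failure: fips.sh points OPENSSL_CONF at a generated spdk_fips.conf, confirms both the base and FIPS providers are loaded, and then proves enforcement by watching a deliberately non-approved algorithm (MD5) get rejected. A quick manual spot-check on a FIPS-enabled OpenSSL 3.x host, using the same commands the script traces, amounts to:

# Both a base and a fips provider should be reported:
openssl list -providers | grep name
# MD5 must be refused once the FIPS provider is the enforced default:
if ! echo spot-check | openssl md5 >/dev/null 2>&1; then
    echo "MD5 refused - FIPS enforcement is in effect"
fi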
09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:48.187 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:48.187 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:48.187 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:48.187 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:48.187 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:48.187 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:48.188 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:48.188 09:23:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:48.188 09:23:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:23:48.188 09:23:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:50.088 09:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:50.088 09:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:23:50.088 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:50.088 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:50.088 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:50.088 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:50.088 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:50.088 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:23:50.088 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:50.088 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:23:50.088 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:23:50.088 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:23:50.088 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:23:50.088 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:23:50.088 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:23:50.088 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:50.088 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:50.088 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:50.088 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:50.088 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:50.088 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:50.088 09:23:55 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:50.088 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:50.088 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:50.088 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:50.088 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:50.088 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:50.088 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:50.088 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:50.088 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:50.088 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:50.088 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:50.089 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:50.089 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:50.089 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:50.089 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:50.089 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:50.089 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:50.089 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:50.089 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:50.089 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:50.089 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:50.089 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:50.089 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:50.089 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:50.089 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:50.089 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:50.089 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:50.089 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:50.089 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:50.089 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:50.089 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:50.089 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:50.089 09:23:55 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:50.089 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:50.089 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:50.089 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:50.089 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:50.089 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:50.089 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:50.089 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:50.089 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:50.089 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:50.089 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:50.089 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:50.089 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:50.089 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:50.089 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:50.089 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:50.089 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:50.089 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:50.089 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:50.089 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:50.089 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:23:50.089 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:50.089 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:50.089 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:50.089 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:50.089 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:50.089 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:50.089 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:50.089 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:50.089 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:50.089 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:50.089 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:50.089 09:23:55 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:50.089 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:50.089 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:50.089 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:50.089 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:50.089 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:50.089 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:50.089 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:50.089 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:50.089 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:50.089 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:50.347 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:50.348 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:50.348 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:50.348 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:50.348 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:50.348 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.276 ms 00:23:50.348 00:23:50.348 --- 10.0.0.2 ping statistics --- 00:23:50.348 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:50.348 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:23:50.348 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:50.348 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:50.348 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:23:50.348 00:23:50.348 --- 10.0.0.1 ping statistics --- 00:23:50.348 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:50.348 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:23:50.348 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:50.348 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:23:50.348 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:50.348 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:50.348 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:50.348 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:50.348 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:50.348 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:50.348 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:50.348 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:23:50.348 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:50.348 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:50.348 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:50.348 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=3009332 00:23:50.348 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 3009332 00:23:50.348 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:50.348 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 3009332 ']' 00:23:50.348 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:50.348 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:50.348 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:50.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:50.348 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:50.348 09:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:50.348 [2024-11-17 09:23:55.293826] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
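Note on the topology the nvmf_tcp_init trace above builds: the target-side port cvl_0_0 is moved into a fresh network namespace (cvl_0_0_ns_spdk) and addressed as 10.0.0.2/24, the initiator port cvl_0_1 stays in the root namespace as 10.0.0.1/24, TCP port 4420 is opened in iptables, both directions are ping-verified, and nvmf_tgt is then started inside the namespace. A minimal shell sketch, assuming the cvl_0_* interface names captured in this run (binary paths shortened):

  ip netns add cvl_0_0_ns_spdk                         # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # open the NVMe/TCP listener port
  ping -c 1 10.0.0.2                                   # root namespace -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> root namespace
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2   # target runs inside the namespace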
00:23:50.348 [2024-11-17 09:23:55.293974] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:50.606 [2024-11-17 09:23:55.434287] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:50.606 [2024-11-17 09:23:55.574607] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:50.606 [2024-11-17 09:23:55.574701] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:50.606 [2024-11-17 09:23:55.574732] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:50.606 [2024-11-17 09:23:55.574758] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:50.606 [2024-11-17 09:23:55.574779] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:50.606 [2024-11-17 09:23:55.576427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:51.542 09:23:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:51.542 09:23:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:23:51.542 09:23:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:51.542 09:23:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:51.542 09:23:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:51.542 09:23:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:51.542 09:23:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:23:51.542 09:23:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:51.542 09:23:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:23:51.542 09:23:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.P0k 00:23:51.542 09:23:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:51.542 09:23:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.P0k 00:23:51.542 09:23:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.P0k 00:23:51.542 09:23:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.P0k 00:23:51.542 09:23:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:51.542 [2024-11-17 09:23:56.539452] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:51.799 [2024-11-17 09:23:56.555377] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:51.799 [2024-11-17 09:23:56.555728] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:51.799 malloc0 00:23:51.799 09:23:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:51.799 09:23:56 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=3009499 00:23:51.799 09:23:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:51.799 09:23:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 3009499 /var/tmp/bdevperf.sock 00:23:51.799 09:23:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 3009499 ']' 00:23:51.799 09:23:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:51.800 09:23:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:51.800 09:23:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:51.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:51.800 09:23:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:51.800 09:23:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:51.800 [2024-11-17 09:23:56.772342] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:23:51.800 [2024-11-17 09:23:56.772513] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3009499 ] 00:23:52.057 [2024-11-17 09:23:56.922066] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:52.057 [2024-11-17 09:23:57.066605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:52.991 09:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:52.991 09:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:23:52.991 09:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.P0k 00:23:52.991 09:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:53.248 [2024-11-17 09:23:58.237748] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:53.506 TLSTESTn1 00:23:53.506 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:53.506 Running I/O for 10 seconds... 
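For context on the run that follows: fips.sh writes the interop TLS PSK to a chmod-0600 temp file (/tmp/spdk-psk.P0k in this run), registers it with the bdevperf application as key0, attaches a controller over NVMe/TCP with that PSK, and then drives a 10-second, queue-depth-128, 4 KiB verify workload via bdevperf.py. A condensed sketch of that RPC sequence, with the socket path, NQNs and key file as captured here and the rpc.py/bdevperf.py paths shortened:

  rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.P0k
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      -q nqn.2016-06.io.spdk:host1 --psk key0
  bdevperf.py -s /var/tmp/bdevperf.sock perform_tests      # 10 s verify workload, qd 128, 4 KiB I/O

With 4096-byte I/O the reported MiB/s is simply IOPS * 4096 / 1048576, which is how the ~2750 IOPS readings below translate into the ~10.7 MiB/s figures in the summary.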
00:23:55.441 2687.00 IOPS, 10.50 MiB/s [2024-11-17T08:24:01.828Z] 2710.50 IOPS, 10.59 MiB/s [2024-11-17T08:24:02.762Z] 2718.67 IOPS, 10.62 MiB/s [2024-11-17T08:24:03.696Z] 2728.00 IOPS, 10.66 MiB/s [2024-11-17T08:24:04.628Z] 2737.20 IOPS, 10.69 MiB/s [2024-11-17T08:24:05.561Z] 2738.00 IOPS, 10.70 MiB/s [2024-11-17T08:24:06.557Z] 2741.00 IOPS, 10.71 MiB/s [2024-11-17T08:24:07.491Z] 2743.12 IOPS, 10.72 MiB/s [2024-11-17T08:24:08.865Z] 2743.56 IOPS, 10.72 MiB/s [2024-11-17T08:24:08.865Z] 2744.60 IOPS, 10.72 MiB/s 00:24:03.852 Latency(us) 00:24:03.852 [2024-11-17T08:24:08.865Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:03.852 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:03.852 Verification LBA range: start 0x0 length 0x2000 00:24:03.852 TLSTESTn1 : 10.03 2749.71 10.74 0.00 0.00 46458.66 8398.32 41554.68 00:24:03.852 [2024-11-17T08:24:08.865Z] =================================================================================================================== 00:24:03.852 [2024-11-17T08:24:08.865Z] Total : 2749.71 10.74 0.00 0.00 46458.66 8398.32 41554.68 00:24:03.852 { 00:24:03.852 "results": [ 00:24:03.852 { 00:24:03.852 "job": "TLSTESTn1", 00:24:03.852 "core_mask": "0x4", 00:24:03.852 "workload": "verify", 00:24:03.852 "status": "finished", 00:24:03.852 "verify_range": { 00:24:03.852 "start": 0, 00:24:03.852 "length": 8192 00:24:03.852 }, 00:24:03.852 "queue_depth": 128, 00:24:03.852 "io_size": 4096, 00:24:03.852 "runtime": 10.027241, 00:24:03.852 "iops": 2749.709516306629, 00:24:03.852 "mibps": 10.74105279807277, 00:24:03.852 "io_failed": 0, 00:24:03.852 "io_timeout": 0, 00:24:03.852 "avg_latency_us": 46458.66217214458, 00:24:03.852 "min_latency_us": 8398.317037037037, 00:24:03.852 "max_latency_us": 41554.67851851852 00:24:03.852 } 00:24:03.852 ], 00:24:03.852 "core_count": 1 00:24:03.852 } 00:24:03.853 09:24:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:24:03.853 09:24:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:24:03.853 09:24:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:24:03.853 09:24:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:24:03.853 09:24:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:24:03.853 09:24:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:03.853 09:24:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:24:03.853 09:24:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:24:03.853 09:24:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:24:03.853 09:24:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:03.853 nvmf_trace.0 00:24:03.853 09:24:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:24:03.853 09:24:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3009499 00:24:03.853 09:24:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 3009499 ']' 00:24:03.853 09:24:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@958 -- # kill -0 3009499 00:24:03.853 09:24:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:24:03.853 09:24:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:03.853 09:24:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3009499 00:24:03.853 09:24:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:03.853 09:24:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:03.853 09:24:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3009499' 00:24:03.853 killing process with pid 3009499 00:24:03.853 09:24:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 3009499 00:24:03.853 Received shutdown signal, test time was about 10.000000 seconds 00:24:03.853 00:24:03.853 Latency(us) 00:24:03.853 [2024-11-17T08:24:08.866Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:03.853 [2024-11-17T08:24:08.866Z] =================================================================================================================== 00:24:03.853 [2024-11-17T08:24:08.866Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:03.853 09:24:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 3009499 00:24:04.786 09:24:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:24:04.786 09:24:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:04.786 09:24:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:24:04.786 09:24:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:04.786 09:24:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:24:04.786 09:24:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:04.786 09:24:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:04.786 rmmod nvme_tcp 00:24:04.786 rmmod nvme_fabrics 00:24:04.786 rmmod nvme_keyring 00:24:04.786 09:24:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:04.786 09:24:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:24:04.786 09:24:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:24:04.786 09:24:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 3009332 ']' 00:24:04.786 09:24:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 3009332 00:24:04.786 09:24:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 3009332 ']' 00:24:04.786 09:24:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 3009332 00:24:04.786 09:24:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:24:04.786 09:24:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:04.786 09:24:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3009332 00:24:04.786 09:24:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:04.786 09:24:09 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:04.786 09:24:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3009332' 00:24:04.786 killing process with pid 3009332 00:24:04.786 09:24:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 3009332 00:24:04.786 09:24:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 3009332 00:24:06.162 09:24:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:06.162 09:24:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:06.162 09:24:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:06.162 09:24:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:24:06.162 09:24:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:24:06.162 09:24:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:06.163 09:24:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:24:06.163 09:24:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:06.163 09:24:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:06.163 09:24:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:06.163 09:24:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:06.163 09:24:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:08.068 09:24:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:08.068 09:24:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.P0k 00:24:08.068 00:24:08.068 real 0m20.193s 00:24:08.068 user 0m27.940s 00:24:08.068 sys 0m5.139s 00:24:08.068 09:24:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:08.068 09:24:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:08.068 ************************************ 00:24:08.068 END TEST nvmf_fips 00:24:08.068 ************************************ 00:24:08.068 09:24:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:24:08.068 09:24:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:08.068 09:24:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:08.068 09:24:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:08.068 ************************************ 00:24:08.068 START TEST nvmf_control_msg_list 00:24:08.068 ************************************ 00:24:08.068 09:24:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:24:08.068 * Looking for test storage... 
00:24:08.068 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:08.068 09:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:08.068 09:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:24:08.068 09:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:08.327 09:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:08.327 09:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:08.327 09:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:08.327 09:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:08.327 09:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:24:08.327 09:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:24:08.327 09:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:24:08.327 09:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:24:08.327 09:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:24:08.327 09:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:24:08.327 09:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:24:08.327 09:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:08.327 09:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:24:08.327 09:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:24:08.327 09:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:08.327 09:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:08.327 09:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:24:08.327 09:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:24:08.327 09:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:08.327 09:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:24:08.327 09:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:24:08.327 09:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:24:08.327 09:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:24:08.327 09:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:08.327 09:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:24:08.327 09:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:24:08.327 09:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:08.327 09:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:08.327 09:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:24:08.327 09:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:08.327 09:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:08.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.327 --rc genhtml_branch_coverage=1 00:24:08.327 --rc genhtml_function_coverage=1 00:24:08.327 --rc genhtml_legend=1 00:24:08.327 --rc geninfo_all_blocks=1 00:24:08.327 --rc geninfo_unexecuted_blocks=1 00:24:08.327 00:24:08.327 ' 00:24:08.327 09:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:08.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.327 --rc genhtml_branch_coverage=1 00:24:08.327 --rc genhtml_function_coverage=1 00:24:08.327 --rc genhtml_legend=1 00:24:08.327 --rc geninfo_all_blocks=1 00:24:08.327 --rc geninfo_unexecuted_blocks=1 00:24:08.327 00:24:08.327 ' 00:24:08.327 09:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:08.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.327 --rc genhtml_branch_coverage=1 00:24:08.327 --rc genhtml_function_coverage=1 00:24:08.327 --rc genhtml_legend=1 00:24:08.327 --rc geninfo_all_blocks=1 00:24:08.327 --rc geninfo_unexecuted_blocks=1 00:24:08.327 00:24:08.327 ' 00:24:08.327 09:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:08.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.328 --rc genhtml_branch_coverage=1 00:24:08.328 --rc genhtml_function_coverage=1 00:24:08.328 --rc genhtml_legend=1 00:24:08.328 --rc geninfo_all_blocks=1 00:24:08.328 --rc geninfo_unexecuted_blocks=1 00:24:08.328 00:24:08.328 ' 00:24:08.328 09:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:08.328 09:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:24:08.328 09:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:08.328 09:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:08.328 09:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:08.328 09:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:08.328 09:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:08.328 09:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:08.328 09:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:08.328 09:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:08.328 09:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:08.328 09:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:08.328 09:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:08.328 09:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:08.328 09:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:08.328 09:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:08.328 09:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:08.328 09:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:08.328 09:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:08.328 09:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:24:08.328 09:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:08.328 09:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:08.328 09:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:08.328 09:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.328 09:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.328 09:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.328 09:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:24:08.328 09:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.328 09:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:24:08.328 09:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:08.328 09:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:08.328 09:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:08.328 09:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:08.328 09:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:08.328 09:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:08.328 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:08.328 09:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:08.328 09:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:08.328 09:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:08.328 09:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:24:08.328 09:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:08.328 09:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:08.328 09:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:08.328 09:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:08.328 09:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:08.328 09:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:08.328 09:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:08.328 09:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:08.328 09:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:08.328 09:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:08.328 09:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:24:08.328 09:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:10.233 09:24:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:10.233 09:24:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:24:10.233 09:24:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:10.233 09:24:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:10.233 09:24:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:10.233 09:24:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:10.233 09:24:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:10.233 09:24:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:24:10.233 09:24:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:10.233 09:24:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:24:10.233 09:24:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:24:10.233 09:24:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:24:10.233 09:24:14 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:24:10.233 09:24:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:24:10.233 09:24:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:24:10.233 09:24:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:10.233 09:24:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:10.233 09:24:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:10.233 09:24:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:10.233 09:24:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:10.233 09:24:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:10.233 09:24:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:10.233 09:24:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:10.233 09:24:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:10.233 09:24:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:10.233 09:24:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:10.233 09:24:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:10.233 09:24:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:10.233 09:24:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:10.233 09:24:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:10.233 09:24:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:10.233 09:24:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:10.233 09:24:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:10.233 09:24:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:10.233 09:24:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:10.234 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:10.234 09:24:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:10.234 09:24:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:10.234 09:24:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:10.234 09:24:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:10.234 09:24:14 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:10.234 09:24:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:10.234 09:24:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:10.234 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:10.234 09:24:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:10.234 09:24:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:10.234 09:24:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:10.234 09:24:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:10.234 09:24:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:10.234 09:24:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:10.234 09:24:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:10.234 09:24:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:10.234 09:24:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:10.234 09:24:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:10.234 09:24:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:10.234 09:24:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:10.234 09:24:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:10.234 09:24:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:10.234 09:24:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:10.234 09:24:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:10.234 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:10.234 09:24:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:10.234 09:24:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:10.234 09:24:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:10.234 09:24:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:10.234 09:24:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:10.234 09:24:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:10.234 09:24:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:10.234 09:24:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:10.234 09:24:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:10.234 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:10.234 09:24:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:10.234 09:24:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:10.234 09:24:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:24:10.234 09:24:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:10.234 09:24:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:10.234 09:24:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:10.234 09:24:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:10.234 09:24:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:10.234 09:24:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:10.234 09:24:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:10.234 09:24:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:10.234 09:24:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:10.234 09:24:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:10.234 09:24:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:10.234 09:24:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:10.234 09:24:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:10.234 09:24:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:10.234 09:24:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:10.234 09:24:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:10.234 09:24:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:10.234 09:24:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:10.234 09:24:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:10.234 09:24:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:10.234 09:24:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:10.234 09:24:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:10.234 09:24:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:10.234 09:24:15 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:10.234 09:24:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:10.234 09:24:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:10.234 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:10.234 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:24:10.234 00:24:10.234 --- 10.0.0.2 ping statistics --- 00:24:10.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:10.234 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:24:10.234 09:24:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:10.234 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:10.234 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:24:10.234 00:24:10.234 --- 10.0.0.1 ping statistics --- 00:24:10.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:10.234 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:24:10.234 09:24:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:10.234 09:24:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:24:10.234 09:24:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:10.234 09:24:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:10.234 09:24:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:10.234 09:24:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:10.234 09:24:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:10.234 09:24:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:10.234 09:24:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:10.234 09:24:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:24:10.234 09:24:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:10.234 09:24:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:10.234 09:24:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:10.234 09:24:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=3013754 00:24:10.234 09:24:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 3013754 00:24:10.234 09:24:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:10.234 09:24:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 3013754 ']' 00:24:10.234 09:24:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:10.234 09:24:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:10.234 09:24:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:10.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:10.234 09:24:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:10.234 09:24:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:10.494 [2024-11-17 09:24:15.254614] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:24:10.494 [2024-11-17 09:24:15.254752] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:10.494 [2024-11-17 09:24:15.428751] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:10.752 [2024-11-17 09:24:15.565648] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:10.752 [2024-11-17 09:24:15.565733] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:10.752 [2024-11-17 09:24:15.565753] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:10.752 [2024-11-17 09:24:15.565773] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:10.752 [2024-11-17 09:24:15.565789] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
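As the notices above state, this nvmf_tgt instance runs with the full 0xFFFF tracepoint mask, so its trace data lands in /dev/shm/nvmf_trace.0. A short sketch of capturing that data, using only the steps the log itself mentions (the spdk_trace invocation quoted in the notice, and the tar step process_shm performs at teardown):

  spdk_trace -s nvmf -i 0                                      # live snapshot of the enabled trace groups
  cp /dev/shm/nvmf_trace.0 /tmp/                               # or keep the raw shm file for offline analysis
  tar -C /dev/shm/ -cvzf nvmf_trace.0_shm.tar.gz nvmf_trace.0  # what process_shm archives on exit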
00:24:10.752 [2024-11-17 09:24:15.567126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:11.319 09:24:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:11.319 09:24:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:24:11.320 09:24:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:11.320 09:24:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:11.320 09:24:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:11.320 09:24:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:11.320 09:24:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:24:11.320 09:24:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:11.320 09:24:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:24:11.320 09:24:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.320 09:24:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:11.320 [2024-11-17 09:24:16.307917] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:11.320 09:24:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.320 09:24:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:24:11.320 09:24:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.320 09:24:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:11.320 09:24:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.320 09:24:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:24:11.320 09:24:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.320 09:24:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:11.578 Malloc0 00:24:11.578 09:24:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.578 09:24:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:24:11.578 09:24:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.578 09:24:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:11.578 09:24:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.578 09:24:16 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:11.579 09:24:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.579 09:24:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:11.579 [2024-11-17 09:24:16.374228] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:11.579 09:24:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.579 09:24:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=3013912 00:24:11.579 09:24:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=3013913 00:24:11.579 09:24:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:11.579 09:24:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:11.579 09:24:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=3013914 00:24:11.579 09:24:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 3013912 00:24:11.579 09:24:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:11.579 [2024-11-17 09:24:16.505086] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:11.579 [2024-11-17 09:24:16.505569] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:11.579 [2024-11-17 09:24:16.506046] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:12.954 Initializing NVMe Controllers 00:24:12.954 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:12.954 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:24:12.954 Initialization complete. Launching workers. 
00:24:12.954 ======================================================== 00:24:12.954 Latency(us) 00:24:12.954 Device Information : IOPS MiB/s Average min max 00:24:12.954 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 25.00 0.10 40928.77 40632.71 41919.57 00:24:12.954 ======================================================== 00:24:12.954 Total : 25.00 0.10 40928.77 40632.71 41919.57 00:24:12.954 00:24:12.954 Initializing NVMe Controllers 00:24:12.954 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:12.954 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:24:12.954 Initialization complete. Launching workers. 00:24:12.954 ======================================================== 00:24:12.954 Latency(us) 00:24:12.954 Device Information : IOPS MiB/s Average min max 00:24:12.954 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 41348.16 40848.59 42107.59 00:24:12.954 ======================================================== 00:24:12.954 Total : 25.00 0.10 41348.16 40848.59 42107.59 00:24:12.954 00:24:12.954 09:24:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 3013913 00:24:12.954 Initializing NVMe Controllers 00:24:12.954 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:12.954 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:24:12.954 Initialization complete. Launching workers. 00:24:12.954 ======================================================== 00:24:12.954 Latency(us) 00:24:12.954 Device Information : IOPS MiB/s Average min max 00:24:12.954 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 25.00 0.10 40878.21 40355.12 41047.50 00:24:12.954 ======================================================== 00:24:12.954 Total : 25.00 0.10 40878.21 40355.12 41047.50 00:24:12.954 00:24:12.954 09:24:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 3013914 00:24:12.954 09:24:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:24:12.954 09:24:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:24:12.954 09:24:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:12.954 09:24:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:24:12.954 09:24:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:12.954 09:24:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:24:12.954 09:24:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:12.954 09:24:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:12.954 rmmod nvme_tcp 00:24:12.954 rmmod nvme_fabrics 00:24:12.954 rmmod nvme_keyring 00:24:12.954 09:24:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:12.954 09:24:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:24:12.954 09:24:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:24:12.954 09:24:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@517 -- # '[' -n 3013754 ']' 00:24:12.954 09:24:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 3013754 00:24:12.954 09:24:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 3013754 ']' 00:24:12.954 09:24:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 3013754 00:24:12.954 09:24:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:24:12.954 09:24:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:12.954 09:24:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3013754 00:24:12.954 09:24:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:12.954 09:24:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:12.954 09:24:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3013754' 00:24:12.954 killing process with pid 3013754 00:24:12.954 09:24:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 3013754 00:24:12.954 09:24:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 3013754 00:24:14.333 09:24:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:14.333 09:24:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:14.333 09:24:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:14.333 09:24:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:24:14.333 09:24:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:24:14.333 09:24:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:14.333 09:24:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:24:14.333 09:24:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:14.333 09:24:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:14.333 09:24:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:14.333 09:24:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:14.333 09:24:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:16.236 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:16.236 00:24:16.236 real 0m8.205s 00:24:16.236 user 0m8.162s 00:24:16.236 sys 0m2.659s 00:24:16.236 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:16.236 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:16.236 ************************************ 00:24:16.236 END TEST nvmf_control_msg_list 00:24:16.236 
************************************ 00:24:16.236 09:24:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:24:16.236 09:24:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:16.236 09:24:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:16.236 09:24:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:16.236 ************************************ 00:24:16.236 START TEST nvmf_wait_for_buf 00:24:16.236 ************************************ 00:24:16.236 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:24:16.495 * Looking for test storage... 00:24:16.495 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:16.495 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:16.496 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:24:16.496 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:16.496 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:16.496 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:16.496 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:16.496 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:16.496 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:24:16.496 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:24:16.496 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:24:16.496 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:24:16.496 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:24:16.496 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:24:16.496 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:24:16.496 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:16.496 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:24:16.496 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:24:16.496 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:16.496 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:16.496 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:24:16.496 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:24:16.496 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:16.496 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:24:16.496 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:24:16.496 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:24:16.496 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:24:16.496 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:16.496 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:24:16.496 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:24:16.496 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:16.496 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:16.496 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:24:16.496 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:16.496 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:16.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:16.496 --rc genhtml_branch_coverage=1 00:24:16.496 --rc genhtml_function_coverage=1 00:24:16.496 --rc genhtml_legend=1 00:24:16.496 --rc geninfo_all_blocks=1 00:24:16.496 --rc geninfo_unexecuted_blocks=1 00:24:16.496 00:24:16.496 ' 00:24:16.496 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:16.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:16.496 --rc genhtml_branch_coverage=1 00:24:16.496 --rc genhtml_function_coverage=1 00:24:16.496 --rc genhtml_legend=1 00:24:16.496 --rc geninfo_all_blocks=1 00:24:16.496 --rc geninfo_unexecuted_blocks=1 00:24:16.496 00:24:16.496 ' 00:24:16.496 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:16.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:16.496 --rc genhtml_branch_coverage=1 00:24:16.496 --rc genhtml_function_coverage=1 00:24:16.496 --rc genhtml_legend=1 00:24:16.496 --rc geninfo_all_blocks=1 00:24:16.496 --rc geninfo_unexecuted_blocks=1 00:24:16.496 00:24:16.496 ' 00:24:16.496 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:16.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:16.496 --rc genhtml_branch_coverage=1 00:24:16.496 --rc genhtml_function_coverage=1 00:24:16.496 --rc genhtml_legend=1 00:24:16.496 --rc geninfo_all_blocks=1 00:24:16.496 --rc geninfo_unexecuted_blocks=1 00:24:16.496 00:24:16.496 ' 00:24:16.496 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:16.496 09:24:21 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:24:16.496 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:16.496 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:16.496 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:16.496 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:16.496 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:16.496 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:16.496 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:16.496 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:16.496 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:16.496 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:16.496 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:16.496 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:16.496 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:16.496 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:16.496 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:16.496 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:16.496 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:16.496 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:24:16.496 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:16.496 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:16.496 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:16.496 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.496 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.496 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.496 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:24:16.496 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.496 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:24:16.496 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:16.496 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:16.496 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:16.496 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:16.496 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:16.496 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:16.496 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:16.496 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:16.496 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:16.497 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:16.497 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:24:16.497 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # 
'[' -z tcp ']' 00:24:16.497 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:16.497 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:16.497 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:16.497 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:16.497 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:16.497 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:16.497 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:16.497 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:16.497 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:16.497 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:24:16.497 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:19.029 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:19.029 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:24:19.029 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:19.029 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:19.029 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:19.029 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:19.029 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:19.029 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:24:19.029 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:19.029 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:24:19.029 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:24:19.029 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:24:19.029 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:24:19.029 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:24:19.029 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:24:19.029 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:19.029 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:19.029 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:19.029 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:19.029 
09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:19.029 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:19.029 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:19.029 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:19.029 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:19.029 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:19.029 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:19.029 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:19.029 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:19.029 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:19.029 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:19.029 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:19.029 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:19.029 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:19.029 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:19.029 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:19.029 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:19.029 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:19.029 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:19.029 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:19.029 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:19.029 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:19.029 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:19.029 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:19.029 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:19.029 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:19.029 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:19.029 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:19.029 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:19.029 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:24:19.029 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:19.029 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:19.029 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:19.029 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:19.029 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:19.029 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:19.029 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:19.029 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:19.029 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:19.029 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:19.029 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:19.029 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:19.029 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:19.029 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:19.029 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:19.029 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:19.029 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:19.030 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:19.030 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:19.030 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:19.030 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:19.030 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:19.030 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:19.030 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:19.030 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:24:19.030 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:19.030 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:19.030 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:19.030 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:19.030 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:19.030 09:24:23 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:19.030 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:19.030 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:19.030 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:19.030 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:19.030 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:19.030 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:19.030 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:19.030 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:19.030 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:19.030 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:19.030 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:19.030 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:19.030 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:19.030 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:19.030 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:19.030 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:19.030 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:19.030 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:19.030 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:19.030 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:19.030 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:19.030 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.160 ms 00:24:19.030 00:24:19.030 --- 10.0.0.2 ping statistics --- 00:24:19.030 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:19.030 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:24:19.030 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:19.030 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:19.030 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:24:19.030 00:24:19.030 --- 10.0.0.1 ping statistics --- 00:24:19.030 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:19.030 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:24:19.030 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:19.030 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:24:19.030 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:19.030 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:19.030 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:19.030 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:19.030 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:19.030 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:19.030 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:19.030 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:24:19.030 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:19.030 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:19.030 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:19.030 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=3016124 00:24:19.030 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:19.030 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 3016124 00:24:19.030 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 3016124 ']' 00:24:19.030 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:19.030 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:19.030 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:19.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:19.030 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:19.030 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:19.030 [2024-11-17 09:24:23.724326] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:24:19.030 [2024-11-17 09:24:23.724494] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:19.030 [2024-11-17 09:24:23.878873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:19.030 [2024-11-17 09:24:24.017684] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:19.030 [2024-11-17 09:24:24.017778] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:19.030 [2024-11-17 09:24:24.017804] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:19.030 [2024-11-17 09:24:24.017828] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:19.030 [2024-11-17 09:24:24.017848] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:19.030 [2024-11-17 09:24:24.019502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:19.965 09:24:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:19.965 09:24:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:24:19.965 09:24:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:19.965 09:24:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:19.965 09:24:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:19.965 09:24:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:19.965 09:24:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:24:19.965 09:24:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:19.965 09:24:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:24:19.965 09:24:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.965 09:24:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:19.965 09:24:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.965 09:24:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:24:19.965 09:24:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.965 09:24:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:19.965 09:24:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.965 09:24:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:24:19.965 09:24:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.965 09:24:24 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:20.223 09:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.223 09:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:24:20.223 09:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.223 09:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:20.223 Malloc0 00:24:20.223 09:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.223 09:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:24:20.223 09:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.223 09:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:20.223 [2024-11-17 09:24:25.075529] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:20.223 09:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.223 09:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:24:20.223 09:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.223 09:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:20.223 09:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.224 09:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:24:20.224 09:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.224 09:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:20.224 09:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.224 09:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:20.224 09:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.224 09:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:20.224 [2024-11-17 09:24:25.099802] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:20.224 09:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.224 09:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:20.482 [2024-11-17 09:24:25.267293] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:21.857 Initializing NVMe Controllers 00:24:21.857 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:21.857 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:24:21.857 Initialization complete. Launching workers. 00:24:21.857 ======================================================== 00:24:21.857 Latency(us) 00:24:21.857 Device Information : IOPS MiB/s Average min max 00:24:21.857 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 114.00 14.25 36494.70 7883.39 71841.42 00:24:21.857 ======================================================== 00:24:21.857 Total : 114.00 14.25 36494.70 7883.39 71841.42 00:24:21.857 00:24:21.857 09:24:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:24:21.857 09:24:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:24:21.857 09:24:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.857 09:24:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:21.857 09:24:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.857 09:24:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=1798 00:24:21.857 09:24:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 1798 -eq 0 ]] 00:24:21.857 09:24:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:24:22.116 09:24:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:24:22.116 09:24:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:22.116 09:24:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:24:22.116 09:24:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:22.116 09:24:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:24:22.116 09:24:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:22.116 09:24:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:22.116 rmmod nvme_tcp 00:24:22.116 rmmod nvme_fabrics 00:24:22.116 rmmod nvme_keyring 00:24:22.116 09:24:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:22.116 09:24:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:24:22.116 09:24:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:24:22.116 09:24:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 3016124 ']' 00:24:22.116 09:24:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 3016124 00:24:22.116 09:24:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 3016124 ']' 00:24:22.116 09:24:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 3016124 00:24:22.116 09:24:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@959 -- # uname 00:24:22.116 09:24:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:22.116 09:24:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3016124 00:24:22.116 09:24:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:22.116 09:24:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:22.116 09:24:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3016124' 00:24:22.116 killing process with pid 3016124 00:24:22.116 09:24:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 3016124 00:24:22.116 09:24:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 3016124 00:24:23.052 09:24:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:23.052 09:24:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:23.052 09:24:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:23.052 09:24:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:24:23.052 09:24:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:24:23.052 09:24:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:23.052 09:24:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:24:23.052 09:24:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:23.052 09:24:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:23.052 09:24:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:23.052 09:24:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:23.052 09:24:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:25.589 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:25.589 00:24:25.589 real 0m8.893s 00:24:25.589 user 0m5.419s 00:24:25.589 sys 0m2.257s 00:24:25.589 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:25.589 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:25.589 ************************************ 00:24:25.589 END TEST nvmf_wait_for_buf 00:24:25.589 ************************************ 00:24:25.589 09:24:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:24:25.589 09:24:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:25.589 09:24:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:25.589 09:24:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:25.589 09:24:30 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:25.589 ************************************ 00:24:25.589 START TEST nvmf_fuzz 00:24:25.589 ************************************ 00:24:25.589 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:25.589 * Looking for test storage... 00:24:25.589 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:25.589 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:25.589 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:24:25.589 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:25.589 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:25.589 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:25.589 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:25.589 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:25.589 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:24:25.589 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:24:25.589 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:24:25.589 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:24:25.589 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:24:25.589 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:24:25.589 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:24:25.589 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:25.589 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:24:25.589 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:24:25.589 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:25.589 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:25.589 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:24:25.589 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:24:25.589 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:25.589 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:24:25.589 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:24:25.589 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:24:25.589 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:24:25.589 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:25.589 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:24:25.589 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:24:25.589 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:25.589 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:25.589 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:24:25.589 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:25.589 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:25.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:25.590 --rc genhtml_branch_coverage=1 00:24:25.590 --rc genhtml_function_coverage=1 00:24:25.590 --rc genhtml_legend=1 00:24:25.590 --rc geninfo_all_blocks=1 00:24:25.590 --rc geninfo_unexecuted_blocks=1 00:24:25.590 00:24:25.590 ' 00:24:25.590 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:25.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:25.590 --rc genhtml_branch_coverage=1 00:24:25.590 --rc genhtml_function_coverage=1 00:24:25.590 --rc genhtml_legend=1 00:24:25.590 --rc geninfo_all_blocks=1 00:24:25.590 --rc geninfo_unexecuted_blocks=1 00:24:25.590 00:24:25.590 ' 00:24:25.590 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:25.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:25.590 --rc genhtml_branch_coverage=1 00:24:25.590 --rc genhtml_function_coverage=1 00:24:25.590 --rc genhtml_legend=1 00:24:25.590 --rc geninfo_all_blocks=1 00:24:25.590 --rc geninfo_unexecuted_blocks=1 00:24:25.590 00:24:25.590 ' 00:24:25.590 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:25.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:25.590 --rc genhtml_branch_coverage=1 00:24:25.590 --rc genhtml_function_coverage=1 00:24:25.590 --rc genhtml_legend=1 00:24:25.590 --rc geninfo_all_blocks=1 00:24:25.590 --rc geninfo_unexecuted_blocks=1 00:24:25.590 00:24:25.590 ' 00:24:25.590 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:25.590 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:24:25.590 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:24:25.590 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:25.590 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:25.590 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:25.590 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:25.590 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:25.590 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:25.590 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:25.590 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:25.590 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:25.590 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:25.590 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:25.590 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:25.590 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:25.590 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:25.590 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:25.590 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:25.590 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:24:25.590 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:25.590 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:25.590 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:25.590 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.590 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.590 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.590 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:24:25.590 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.590 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:24:25.590 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:25.590 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:25.590 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:25.590 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:25.590 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:25.590 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:25.590 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:25.590 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:25.590 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:25.590 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:25.590 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:24:25.590 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:25.590 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # trap nvmftestfini 
SIGINT SIGTERM EXIT 00:24:25.590 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:25.590 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:25.590 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:25.590 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:25.590 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:25.590 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:25.590 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:25.590 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:25.590 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@309 -- # xtrace_disable 00:24:25.590 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:27.497 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:27.497 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # pci_devs=() 00:24:27.497 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:27.497 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:27.497 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:27.497 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:27.497 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:27.497 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # net_devs=() 00:24:27.497 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:27.497 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # e810=() 00:24:27.497 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # local -ga e810 00:24:27.497 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # x722=() 00:24:27.497 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # local -ga x722 00:24:27.497 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # mlx=() 00:24:27.497 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # local -ga mlx 00:24:27.497 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:27.497 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:27.497 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:27.497 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:27.497 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:27.497 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:27.497 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:27.497 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:27.497 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:27.497 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:27.497 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:27.497 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:27.497 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:27.497 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:27.497 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:27.497 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:27.497 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:27.497 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:27.497 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:27.497 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:27.497 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:27.497 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:27.497 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:27.497 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:27.497 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:27.497 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:27.497 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:27.497 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:27.497 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:27.497 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:27.497 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:27.497 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:27.497 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:27.497 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:27.497 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:27.497 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:27.497 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:27.497 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:27.497 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:27.497 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:27.497 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:27.497 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:27.497 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:27.497 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:27.497 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:27.497 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:27.497 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:27.497 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:27.497 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:27.497 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:27.497 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:27.497 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:27.498 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:27.498 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:27.498 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:27.498 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:27.498 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:27.498 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:27.498 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # is_hw=yes 00:24:27.498 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:27.498 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:27.498 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:27.498 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:27.498 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:27.498 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:27.498 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:27.498 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:27.498 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:27.498 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:27.498 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:27.498 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:24:27.498 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:27.498 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:27.498 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:27.498 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:27.498 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:27.498 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:27.498 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:27.498 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:27.498 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:27.498 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:27.498 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:27.498 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:27.498 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:27.498 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:27.498 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:27.498 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.260 ms 00:24:27.498 00:24:27.498 --- 10.0.0.2 ping statistics --- 00:24:27.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:27.498 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:24:27.498 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:27.498 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:27.498 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.093 ms 00:24:27.498 00:24:27.498 --- 10.0.0.1 ping statistics --- 00:24:27.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:27.498 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:24:27.498 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:27.498 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@450 -- # return 0 00:24:27.498 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:27.498 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:27.498 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:27.498 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:27.498 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:27.498 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:27.498 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:27.498 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=3018601 00:24:27.498 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:27.498 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:24:27.498 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 3018601 00:24:27.498 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # '[' -z 3018601 ']' 00:24:27.498 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:27.498 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:27.498 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:27.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
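Note on the trace above: nvmf_tcp_init builds the NVMe/TCP test bed by flushing both E810 ports, creating the cvl_0_0_ns_spdk namespace, moving the target-side port cvl_0_0 into it, assigning 10.0.0.2/24 there and 10.0.0.1/24 to the initiator-side cvl_0_1, opening TCP port 4420 with an iptables rule tagged SPDK_NVMF, and ping-checking both directions before nvmf_tgt is launched inside the namespace. A rough standalone sketch of that setup follows, reconstructed only from this trace (interface names and addresses are the ones shown in this run, not a general recipe):

# Sketch of the namespace setup traced above (nvmf_tcp_init), standalone form.
# Interface names and addresses are taken from this log; adjust for other hosts.
TGT_IF=cvl_0_0          # target-side port, moved into the namespace
INI_IF=cvl_0_1          # initiator-side port, stays in the default namespace
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
# Open the NVMe/TCP listener port; the comment tag is simplified here, the
# harness tags the rule so teardown can strip it with grep -v SPDK_NVMF.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment SPDK_NVMF
# Reachability check in both directions, as in the log.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1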
00:24:27.498 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:27.498 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:28.873 09:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:28.873 09:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@868 -- # return 0 00:24:28.873 09:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:28.873 09:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.873 09:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:28.873 09:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.873 09:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:24:28.873 09:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.873 09:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:28.873 Malloc0 00:24:28.873 09:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.873 09:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:28.873 09:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.873 09:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:28.873 09:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.873 09:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:28.873 09:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.873 09:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:28.873 09:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.873 09:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:28.873 09:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.873 09:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:28.873 09:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.873 09:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:24:28.873 09:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:25:00.997 Fuzzing completed. 
Shutting down the fuzz application 00:25:00.997 00:25:00.997 Dumping successful admin opcodes: 00:25:00.997 8, 9, 10, 24, 00:25:00.997 Dumping successful io opcodes: 00:25:00.997 0, 9, 00:25:00.997 NS: 0x2000008efec0 I/O qp, Total commands completed: 316803, total successful commands: 1866, random_seed: 1969001728 00:25:00.997 NS: 0x2000008efec0 admin qp, Total commands completed: 39920, total successful commands: 326, random_seed: 3707556736 00:25:00.997 09:25:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:25:01.932 Fuzzing completed. Shutting down the fuzz application 00:25:01.932 00:25:01.932 Dumping successful admin opcodes: 00:25:01.932 24, 00:25:01.932 Dumping successful io opcodes: 00:25:01.932 00:25:01.932 NS: 0x2000008efec0 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 1010713584 00:25:01.932 NS: 0x2000008efec0 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 1011009946 00:25:01.932 09:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:01.932 09:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.932 09:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:01.932 09:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.932 09:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:25:01.932 09:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:25:01.932 09:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:01.932 09:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:25:01.932 09:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:01.932 09:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:25:01.932 09:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:01.933 09:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:01.933 rmmod nvme_tcp 00:25:01.933 rmmod nvme_fabrics 00:25:01.933 rmmod nvme_keyring 00:25:01.933 09:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:01.933 09:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:25:01.933 09:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:25:01.933 09:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@517 -- # '[' -n 3018601 ']' 00:25:01.933 09:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@518 -- # killprocess 3018601 00:25:01.933 09:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # '[' -z 3018601 ']' 00:25:01.933 09:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # kill -0 3018601 00:25:01.933 09:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # uname 00:25:01.933 09:25:06 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:01.933 09:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3018601 00:25:01.933 09:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:01.933 09:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:01.933 09:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3018601' 00:25:01.933 killing process with pid 3018601 00:25:01.933 09:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@973 -- # kill 3018601 00:25:01.933 09:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@978 -- # wait 3018601 00:25:03.307 09:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:03.307 09:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:03.307 09:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:03.307 09:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # iptr 00:25:03.307 09:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-save 00:25:03.307 09:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:03.307 09:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-restore 00:25:03.307 09:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:03.307 09:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:03.307 09:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:03.307 09:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:03.307 09:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:05.209 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:05.209 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:25:05.209 00:25:05.209 real 0m40.053s 00:25:05.209 user 0m57.429s 00:25:05.209 sys 0m13.443s 00:25:05.209 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:05.209 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:05.209 ************************************ 00:25:05.209 END TEST nvmf_fuzz 00:25:05.209 ************************************ 00:25:05.209 09:25:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:05.209 09:25:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:05.209 09:25:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:05.209 09:25:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:05.469 
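Note on the fuzz test just completed: with the target listening on /var/tmp/spdk.sock, fabrics_fuzz.sh configures it over RPC (TCP transport, a 64 MB Malloc0 bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 with a listener on 10.0.0.2:4420), runs nvme_fuzz against that subsystem twice (a 30 s randomized pass and a replay of example.json), deletes the subsystem, then tears down via killprocess, rmmod of the nvme-tcp/fabrics/keyring modules, an iptables-save | grep -v SPDK_NVMF | iptables-restore, and removal of the namespace. A hand-driven sketch of the same configure-and-fuzz sequence follows; it assumes rpc_cmd in the harness resolves to scripts/rpc.py against the default socket and that $SPDK_DIR points at an SPDK checkout:

# Hand-driven equivalent of the fabrics_fuzz.sh flow traced above.
# Assumes the target is already running inside cvl_0_0_ns_spdk (earlier sketch)
# and that rpc.py talks to the default /var/tmp/spdk.sock socket.
RPC="$SPDK_DIR/scripts/rpc.py"

$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create -b Malloc0 64 512
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Two fuzz passes, mirroring the invocations in the log: a 30-second seeded
# randomized run, then a replay of the canned JSON command file.
FUZZ="$SPDK_DIR/test/app/fuzz/nvme_fuzz/nvme_fuzz"
TRID='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'
$FUZZ -m 0x2 -t 30 -S 123456 -F "$TRID" -N -a
$FUZZ -m 0x2 -F "$TRID" -j "$SPDK_DIR/test/app/fuzz/nvme_fuzz/example.json" -a

$RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1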
************************************ 00:25:05.469 START TEST nvmf_multiconnection 00:25:05.469 ************************************ 00:25:05.469 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:05.469 * Looking for test storage... 00:25:05.469 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:05.469 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:05.469 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1693 -- # lcov --version 00:25:05.469 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:05.469 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:05.469 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:05.469 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:05.469 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:05.469 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:25:05.469 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:25:05.469 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:25:05.469 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:25:05.469 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:25:05.469 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:25:05.469 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:25:05.469 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:05.469 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:25:05.469 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:25:05.469 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:05.469 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:05.469 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:25:05.469 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:25:05.469 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:05.469 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:25:05.469 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:25:05.469 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:25:05.469 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:25:05.469 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:05.469 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:25:05.469 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:25:05.469 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:05.469 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:05.469 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:25:05.469 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:05.469 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:05.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:05.469 --rc genhtml_branch_coverage=1 00:25:05.469 --rc genhtml_function_coverage=1 00:25:05.469 --rc genhtml_legend=1 00:25:05.469 --rc geninfo_all_blocks=1 00:25:05.469 --rc geninfo_unexecuted_blocks=1 00:25:05.469 00:25:05.469 ' 00:25:05.469 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:05.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:05.469 --rc genhtml_branch_coverage=1 00:25:05.469 --rc genhtml_function_coverage=1 00:25:05.469 --rc genhtml_legend=1 00:25:05.469 --rc geninfo_all_blocks=1 00:25:05.469 --rc geninfo_unexecuted_blocks=1 00:25:05.469 00:25:05.469 ' 00:25:05.469 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:05.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:05.469 --rc genhtml_branch_coverage=1 00:25:05.469 --rc genhtml_function_coverage=1 00:25:05.469 --rc genhtml_legend=1 00:25:05.469 --rc geninfo_all_blocks=1 00:25:05.469 --rc geninfo_unexecuted_blocks=1 00:25:05.469 00:25:05.469 ' 00:25:05.469 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:05.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:05.469 --rc genhtml_branch_coverage=1 00:25:05.469 --rc genhtml_function_coverage=1 00:25:05.469 --rc genhtml_legend=1 00:25:05.469 --rc geninfo_all_blocks=1 00:25:05.469 --rc geninfo_unexecuted_blocks=1 00:25:05.469 00:25:05.469 ' 00:25:05.469 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:05.469 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:25:05.469 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:05.469 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:05.469 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:05.469 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:05.469 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:05.469 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:05.469 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:05.469 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:05.469 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:05.469 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:05.469 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:05.469 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:05.469 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:05.469 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:05.469 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:05.469 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:05.469 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:05.469 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:25:05.469 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:05.469 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:05.469 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:05.469 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.469 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.469 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.469 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:25:05.469 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.469 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:25:05.470 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:05.470 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:05.470 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:05.470 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:05.470 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:05.470 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:05.470 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:05.470 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:05.470 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:05.470 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:05.470 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:05.470 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:05.470 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:25:05.470 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:25:05.470 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:05.470 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:05.470 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:05.470 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:05.470 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:05.470 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:05.470 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:05.470 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:05.470 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:05.470 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:05.470 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@309 -- # xtrace_disable 00:25:05.470 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:08.004 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:08.004 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # pci_devs=() 00:25:08.004 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:08.004 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:08.004 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:08.004 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:08.004 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:08.004 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # net_devs=() 00:25:08.004 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:08.004 09:25:12 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # e810=() 00:25:08.004 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # local -ga e810 00:25:08.004 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # x722=() 00:25:08.004 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # local -ga x722 00:25:08.004 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # mlx=() 00:25:08.004 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # local -ga mlx 00:25:08.004 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:08.004 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:08.004 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:08.004 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:08.004 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:08.004 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:08.004 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:08.004 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:08.004 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:08.004 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:08.004 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:08.004 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:08.004 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:08.004 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:08.004 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:08.004 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:08.004 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:08.004 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:08.004 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:08.004 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:08.004 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:08.004 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:08.004 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:25:08.004 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:08.004 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:08.004 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:08.004 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:08.004 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:08.004 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:08.004 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:08.004 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:08.004 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:08.004 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:08.004 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:08.004 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:08.004 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:08.004 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:08.004 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:08.005 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:08.005 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:08.005 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:08.005 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:08.005 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:08.005 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:08.005 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:08.005 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:08.005 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:08.005 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:08.005 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:08.005 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:08.005 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:08.005 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:08.005 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:08.005 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:08.005 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:08.005 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:08.005 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:08.005 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:08.005 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # is_hw=yes 00:25:08.005 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:08.005 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:08.005 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:08.005 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:08.005 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:08.005 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:08.005 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:08.005 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:08.005 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:08.005 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:08.005 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:08.005 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:08.005 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:08.005 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:08.005 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:08.005 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:08.005 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:08.005 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:08.005 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:08.005 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:08.005 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:08.005 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set 
cvl_0_0 up 00:25:08.005 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:08.005 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:08.005 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:08.005 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:08.005 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:08.005 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.145 ms 00:25:08.005 00:25:08.005 --- 10.0.0.2 ping statistics --- 00:25:08.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:08.005 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:25:08.005 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:08.005 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:08.005 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:25:08.005 00:25:08.005 --- 10.0.0.1 ping statistics --- 00:25:08.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:08.005 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:25:08.005 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:08.005 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@450 -- # return 0 00:25:08.005 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:08.005 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:08.005 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:08.005 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:08.005 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:08.005 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:08.005 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:08.005 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:25:08.005 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:08.005 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:08.005 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:08.005 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@509 -- # nvmfpid=3024599 00:25:08.005 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:08.005 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@510 -- # waitforlisten 3024599 00:25:08.005 09:25:12 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # '[' -z 3024599 ']' 00:25:08.005 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:08.005 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:08.005 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:08.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:08.005 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:08.005 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:08.005 [2024-11-17 09:25:12.731179] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:25:08.005 [2024-11-17 09:25:12.731326] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:08.005 [2024-11-17 09:25:12.873852] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:08.005 [2024-11-17 09:25:13.008126] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:08.005 [2024-11-17 09:25:13.008219] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:08.005 [2024-11-17 09:25:13.008246] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:08.005 [2024-11-17 09:25:13.008271] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:08.005 [2024-11-17 09:25:13.008293] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
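The namespace bring-up traced above condenses to the shell sequence below. This is a sketch only: the interface names cvl_0_0/cvl_0_1, the 10.0.0.0/24 addresses, and the nvmf_tgt binary path are the values observed in this run and will differ on other hosts; the harness also tags its iptables rule with an SPDK_NVMF comment, which is omitted here.
# Sketch of the NVMe/TCP test network setup performed by nvmf_tcp_init (values from this run)
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                                          # namespace that will hold the target-side NIC
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                             # move the target NIC into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator address on the host side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target address inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT          # allow NVMe/TCP traffic to port 4420
ping -c 1 10.0.0.2                                                    # host -> namespace reachability check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                      # namespace -> host reachability check
# The nvmf target is then launched inside the namespace, as in the trace:
ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &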
00:25:08.005 [2024-11-17 09:25:13.011178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:08.005 [2024-11-17 09:25:13.011227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:08.005 [2024-11-17 09:25:13.011286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:08.005 [2024-11-17 09:25:13.011293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:08.941 09:25:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:08.941 09:25:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@868 -- # return 0 00:25:08.941 09:25:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:08.941 09:25:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:08.941 09:25:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:08.941 09:25:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:08.941 09:25:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:08.941 09:25:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.941 09:25:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:08.941 [2024-11-17 09:25:13.752230] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:08.941 09:25:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.941 09:25:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:25:08.941 09:25:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:08.941 09:25:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:08.941 09:25:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.941 09:25:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:08.941 Malloc1 00:25:08.941 09:25:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.941 09:25:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:25:08.941 09:25:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.941 09:25:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:08.941 09:25:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.941 09:25:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:08.941 09:25:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.941 09:25:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
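The subsystem-setup loop that starts here issues the same four RPCs for each of the 11 subsystems (cnode1 through cnode11). Condensed, and assuming SPDK's scripts/rpc.py client in place of the harness's rpc_cmd wrapper (the RPC names and arguments are taken verbatim from the trace), one pass over the loop looks like:
# Transport is created once, before the loop, with the flags shown in the trace above.
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
for i in $(seq 1 11); do
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc$i                           # 64 MB RAM-backed bdev, 512-byte blocks
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i  # allow any host, serial number SPDK$i
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i      # expose the bdev as a namespace
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
done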
00:25:08.941 09:25:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.942 09:25:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:08.942 09:25:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.942 09:25:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:08.942 [2024-11-17 09:25:13.880132] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:08.942 09:25:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.942 09:25:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:08.942 09:25:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:25:08.942 09:25:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.942 09:25:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:09.201 Malloc2 00:25:09.201 09:25:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.201 09:25:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:25:09.201 09:25:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.201 09:25:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:09.201 09:25:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.201 09:25:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:25:09.201 09:25:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.201 09:25:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:09.201 09:25:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.201 09:25:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:09.201 09:25:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.201 09:25:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:09.201 09:25:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.201 09:25:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:09.201 09:25:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:25:09.201 09:25:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.201 09:25:13 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:09.201 Malloc3 00:25:09.201 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.201 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:25:09.201 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.201 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:09.201 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.201 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:25:09.201 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.201 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:09.201 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.201 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:25:09.201 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.201 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:09.201 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.201 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:09.201 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:25:09.201 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.201 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:09.201 Malloc4 00:25:09.201 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.201 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:25:09.201 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.201 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:09.201 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.201 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:25:09.201 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.201 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:09.201 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.201 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:25:09.201 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.201 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:09.201 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.201 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:09.201 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:25:09.201 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.201 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:09.460 Malloc5 00:25:09.460 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.460 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:25:09.460 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.460 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:09.460 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.460 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:25:09.460 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.460 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:09.460 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.461 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:25:09.461 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.461 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:09.461 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.461 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:09.461 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:25:09.461 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.461 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:09.461 Malloc6 00:25:09.461 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:25:09.461 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:25:09.461 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.461 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:09.461 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.461 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:25:09.461 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.461 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:09.461 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.461 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:25:09.461 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.461 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:09.461 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.461 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:09.461 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:25:09.461 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.461 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:09.461 Malloc7 00:25:09.461 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.461 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:25:09.461 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.461 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:09.461 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.461 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:25:09.461 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.461 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:09.461 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.461 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 
00:25:09.461 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.461 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:09.720 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.720 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:09.720 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:25:09.720 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.720 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:09.720 Malloc8 00:25:09.720 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.720 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:25:09.720 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.720 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:09.720 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.720 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:25:09.720 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.720 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:09.720 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.720 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:25:09.720 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.720 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:09.720 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.720 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:09.720 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:25:09.720 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.720 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:09.720 Malloc9 00:25:09.720 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.720 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:25:09.720 09:25:14 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.720 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:09.720 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.720 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:25:09.720 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.720 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:09.720 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.720 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:25:09.720 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.720 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:09.720 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.720 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:09.720 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:25:09.720 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.720 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:09.978 Malloc10 00:25:09.978 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.978 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:25:09.978 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.978 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:09.978 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.978 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:25:09.978 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.978 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:09.978 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.978 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:25:09.978 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.978 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:25:09.978 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.978 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:09.978 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:25:09.978 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.978 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:09.978 Malloc11 00:25:09.978 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.978 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:25:09.978 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.978 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:09.978 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.978 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:25:09.978 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.978 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:09.978 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.978 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:25:09.978 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.978 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:09.978 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.978 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:25:09.978 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:09.979 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:10.912 09:25:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:25:10.912 09:25:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:10.912 09:25:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:10.912 09:25:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:10.912 09:25:15 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:12.810 09:25:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:12.810 09:25:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:12.810 09:25:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK1 00:25:12.810 09:25:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:12.810 09:25:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:12.810 09:25:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:12.810 09:25:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:12.810 09:25:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:25:13.374 09:25:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:25:13.374 09:25:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:13.374 09:25:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:13.374 09:25:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:13.374 09:25:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:15.290 09:25:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:15.290 09:25:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:15.290 09:25:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK2 00:25:15.290 09:25:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:15.290 09:25:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:15.290 09:25:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:15.290 09:25:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:15.290 09:25:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:25:15.857 09:25:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:25:15.857 09:25:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:15.857 09:25:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 
nvme_devices=0 00:25:15.857 09:25:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:15.857 09:25:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:18.387 09:25:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:18.387 09:25:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:18.387 09:25:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK3 00:25:18.387 09:25:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:18.387 09:25:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:18.387 09:25:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:18.387 09:25:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:18.387 09:25:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:25:18.645 09:25:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:25:18.645 09:25:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:18.645 09:25:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:18.645 09:25:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:18.645 09:25:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:21.170 09:25:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:21.170 09:25:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:21.170 09:25:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK4 00:25:21.170 09:25:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:21.170 09:25:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:21.170 09:25:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:21.170 09:25:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:21.170 09:25:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:25:21.428 09:25:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:25:21.428 09:25:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # 
local i=0 00:25:21.428 09:25:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:21.428 09:25:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:21.428 09:25:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:23.326 09:25:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:23.326 09:25:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:23.327 09:25:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK5 00:25:23.327 09:25:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:23.327 09:25:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:23.327 09:25:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:23.327 09:25:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:23.327 09:25:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:25:24.261 09:25:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:25:24.261 09:25:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:24.261 09:25:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:24.261 09:25:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:24.261 09:25:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:26.159 09:25:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:26.159 09:25:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:26.159 09:25:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK6 00:25:26.159 09:25:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:26.159 09:25:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:26.159 09:25:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:26.159 09:25:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:26.159 09:25:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:25:27.093 09:25:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@30 -- # waitforserial SPDK7 00:25:27.093 09:25:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:27.093 09:25:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:27.093 09:25:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:27.093 09:25:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:28.991 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:28.991 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:28.991 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK7 00:25:28.991 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:28.991 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:28.991 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:28.991 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:28.991 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:25:29.977 09:25:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:25:29.977 09:25:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:29.977 09:25:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:29.977 09:25:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:29.977 09:25:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:31.900 09:25:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:31.900 09:25:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:31.900 09:25:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK8 00:25:31.900 09:25:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:31.900 09:25:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:31.900 09:25:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:31.901 09:25:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:31.901 09:25:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:25:32.834 09:25:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:25:32.834 09:25:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:32.834 09:25:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:32.834 09:25:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:32.834 09:25:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:34.733 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:34.733 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:34.733 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK9 00:25:34.733 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:34.733 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:34.733 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:34.733 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:34.733 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:25:35.667 09:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:25:35.667 09:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:35.667 09:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:35.667 09:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:35.667 09:25:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:37.566 09:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:37.566 09:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:37.566 09:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK10 00:25:37.566 09:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:37.566 09:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:37.566 09:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:37.566 09:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:37.566 09:25:42 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:25:38.500 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:25:38.500 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:38.500 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:38.500 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:38.500 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:41.027 09:25:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:41.027 09:25:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:41.027 09:25:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK11 00:25:41.027 09:25:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:41.027 09:25:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:41.027 09:25:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:41.027 09:25:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:25:41.027 [global] 00:25:41.027 thread=1 00:25:41.027 invalidate=1 00:25:41.027 rw=read 00:25:41.027 time_based=1 00:25:41.027 runtime=10 00:25:41.027 ioengine=libaio 00:25:41.027 direct=1 00:25:41.027 bs=262144 00:25:41.027 iodepth=64 00:25:41.027 norandommap=1 00:25:41.027 numjobs=1 00:25:41.027 00:25:41.027 [job0] 00:25:41.027 filename=/dev/nvme0n1 00:25:41.027 [job1] 00:25:41.027 filename=/dev/nvme10n1 00:25:41.027 [job2] 00:25:41.027 filename=/dev/nvme1n1 00:25:41.027 [job3] 00:25:41.027 filename=/dev/nvme2n1 00:25:41.027 [job4] 00:25:41.027 filename=/dev/nvme3n1 00:25:41.027 [job5] 00:25:41.027 filename=/dev/nvme4n1 00:25:41.027 [job6] 00:25:41.027 filename=/dev/nvme5n1 00:25:41.027 [job7] 00:25:41.027 filename=/dev/nvme6n1 00:25:41.027 [job8] 00:25:41.027 filename=/dev/nvme7n1 00:25:41.027 [job9] 00:25:41.027 filename=/dev/nvme8n1 00:25:41.027 [job10] 00:25:41.027 filename=/dev/nvme9n1 00:25:41.027 Could not set queue depth (nvme0n1) 00:25:41.027 Could not set queue depth (nvme10n1) 00:25:41.027 Could not set queue depth (nvme1n1) 00:25:41.027 Could not set queue depth (nvme2n1) 00:25:41.027 Could not set queue depth (nvme3n1) 00:25:41.027 Could not set queue depth (nvme4n1) 00:25:41.027 Could not set queue depth (nvme5n1) 00:25:41.027 Could not set queue depth (nvme6n1) 00:25:41.027 Could not set queue depth (nvme7n1) 00:25:41.027 Could not set queue depth (nvme8n1) 00:25:41.027 Could not set queue depth (nvme9n1) 00:25:41.027 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:41.027 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 
256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:41.027 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:41.027 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:41.027 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:41.027 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:41.027 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:41.027 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:41.027 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:41.027 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:41.027 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:41.027 fio-3.35 00:25:41.027 Starting 11 threads 00:25:53.228 00:25:53.228 job0: (groupid=0, jobs=1): err= 0: pid=3028976: Sun Nov 17 09:25:56 2024 00:25:53.228 read: IOPS=292, BW=73.2MiB/s (76.7MB/s)(743MiB/10151msec) 00:25:53.228 slat (usec): min=9, max=466940, avg=2148.40, stdev=13924.90 00:25:53.228 clat (msec): min=36, max=1082, avg=216.34, stdev=133.28 00:25:53.228 lat (msec): min=36, max=1232, avg=218.49, stdev=134.92 00:25:53.228 clat percentiles (msec): 00:25:53.228 | 1.00th=[ 44], 5.00th=[ 86], 10.00th=[ 109], 20.00th=[ 146], 00:25:53.228 | 30.00th=[ 155], 40.00th=[ 163], 50.00th=[ 188], 60.00th=[ 218], 00:25:53.228 | 70.00th=[ 257], 80.00th=[ 271], 90.00th=[ 309], 95.00th=[ 380], 00:25:53.228 | 99.00th=[ 969], 99.50th=[ 986], 99.90th=[ 995], 99.95th=[ 995], 00:25:53.228 | 99.99th=[ 1083] 00:25:53.228 bw ( KiB/s): min= 6656, max=112128, per=10.63%, avg=74439.15, stdev=25930.52, samples=20 00:25:53.228 iops : min= 26, max= 438, avg=290.75, stdev=101.31, samples=20 00:25:53.228 lat (msec) : 50=2.02%, 100=5.65%, 250=59.47%, 500=30.29%, 750=0.44% 00:25:53.228 lat (msec) : 1000=2.09%, 2000=0.03% 00:25:53.228 cpu : usr=0.10%, sys=0.80%, ctx=515, majf=0, minf=4097 00:25:53.228 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:25:53.228 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.228 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:53.228 issued rwts: total=2971,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:53.228 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:53.228 job1: (groupid=0, jobs=1): err= 0: pid=3028977: Sun Nov 17 09:25:56 2024 00:25:53.228 read: IOPS=119, BW=29.8MiB/s (31.3MB/s)(304MiB/10180msec) 00:25:53.228 slat (usec): min=10, max=373375, avg=7679.11, stdev=30674.50 00:25:53.228 clat (msec): min=35, max=866, avg=528.12, stdev=155.27 00:25:53.228 lat (msec): min=35, max=954, avg=535.80, stdev=158.84 00:25:53.228 clat percentiles (msec): 00:25:53.228 | 1.00th=[ 38], 5.00th=[ 188], 10.00th=[ 347], 20.00th=[ 430], 00:25:53.228 | 30.00th=[ 477], 40.00th=[ 514], 50.00th=[ 550], 60.00th=[ 592], 00:25:53.228 | 70.00th=[ 609], 80.00th=[ 651], 90.00th=[ 693], 95.00th=[ 726], 00:25:53.228 | 99.00th=[ 810], 99.50th=[ 852], 99.90th=[ 869], 99.95th=[ 869], 00:25:53.228 | 99.99th=[ 869] 00:25:53.228 bw ( KiB/s): min=17408, 
max=40960, per=4.21%, avg=29462.05, stdev=6909.05, samples=20 00:25:53.228 iops : min= 68, max= 160, avg=115.05, stdev=26.96, samples=20 00:25:53.228 lat (msec) : 50=2.47%, 250=2.96%, 500=29.63%, 750=60.82%, 1000=4.12% 00:25:53.228 cpu : usr=0.07%, sys=0.44%, ctx=144, majf=0, minf=4097 00:25:53.228 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.3%, 32=2.6%, >=64=94.8% 00:25:53.228 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.228 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:53.228 issued rwts: total=1215,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:53.228 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:53.228 job2: (groupid=0, jobs=1): err= 0: pid=3028987: Sun Nov 17 09:25:56 2024 00:25:53.228 read: IOPS=116, BW=29.0MiB/s (30.4MB/s)(295MiB/10178msec) 00:25:53.228 slat (usec): min=12, max=355630, avg=8461.53, stdev=29887.16 00:25:53.228 clat (msec): min=177, max=914, avg=542.62, stdev=109.78 00:25:53.228 lat (msec): min=177, max=914, avg=551.08, stdev=112.51 00:25:53.229 clat percentiles (msec): 00:25:53.229 | 1.00th=[ 347], 5.00th=[ 393], 10.00th=[ 422], 20.00th=[ 456], 00:25:53.229 | 30.00th=[ 477], 40.00th=[ 506], 50.00th=[ 531], 60.00th=[ 558], 00:25:53.229 | 70.00th=[ 584], 80.00th=[ 625], 90.00th=[ 684], 95.00th=[ 768], 00:25:53.229 | 99.00th=[ 818], 99.50th=[ 860], 99.90th=[ 860], 99.95th=[ 911], 00:25:53.229 | 99.99th=[ 911] 00:25:53.229 bw ( KiB/s): min=14336, max=38400, per=4.09%, avg=28617.55, stdev=5633.24, samples=20 00:25:53.229 iops : min= 56, max= 150, avg=111.75, stdev=21.98, samples=20 00:25:53.229 lat (msec) : 250=0.85%, 500=38.10%, 750=55.97%, 1000=5.08% 00:25:53.229 cpu : usr=0.07%, sys=0.47%, ctx=131, majf=0, minf=4097 00:25:53.229 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.4%, 32=2.7%, >=64=94.7% 00:25:53.229 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.229 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:53.229 issued rwts: total=1181,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:53.229 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:53.229 job3: (groupid=0, jobs=1): err= 0: pid=3028990: Sun Nov 17 09:25:56 2024 00:25:53.229 read: IOPS=128, BW=32.2MiB/s (33.8MB/s)(328MiB/10178msec) 00:25:53.229 slat (usec): min=8, max=358578, avg=6079.43, stdev=26521.39 00:25:53.229 clat (msec): min=108, max=838, avg=490.02, stdev=132.92 00:25:53.229 lat (msec): min=109, max=839, avg=496.10, stdev=135.69 00:25:53.229 clat percentiles (msec): 00:25:53.229 | 1.00th=[ 111], 5.00th=[ 243], 10.00th=[ 321], 20.00th=[ 393], 00:25:53.229 | 30.00th=[ 422], 40.00th=[ 468], 50.00th=[ 510], 60.00th=[ 531], 00:25:53.229 | 70.00th=[ 567], 80.00th=[ 600], 90.00th=[ 651], 95.00th=[ 684], 00:25:53.229 | 99.00th=[ 735], 99.50th=[ 751], 99.90th=[ 785], 99.95th=[ 835], 00:25:53.229 | 99.99th=[ 835] 00:25:53.229 bw ( KiB/s): min=19968, max=61440, per=4.57%, avg=31971.15, stdev=9884.05, samples=20 00:25:53.229 iops : min= 78, max= 240, avg=124.85, stdev=38.61, samples=20 00:25:53.229 lat (msec) : 250=5.95%, 500=39.63%, 750=53.58%, 1000=0.84% 00:25:53.229 cpu : usr=0.04%, sys=0.38%, ctx=152, majf=0, minf=4098 00:25:53.229 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.4%, >=64=95.2% 00:25:53.229 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.229 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:53.229 issued rwts: total=1312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:25:53.229 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:53.229 job4: (groupid=0, jobs=1): err= 0: pid=3028991: Sun Nov 17 09:25:56 2024 00:25:53.229 read: IOPS=99, BW=24.9MiB/s (26.1MB/s)(254MiB/10184msec) 00:25:53.229 slat (usec): min=14, max=284703, avg=9903.92, stdev=33162.99 00:25:53.229 clat (msec): min=124, max=1051, avg=631.73, stdev=167.44 00:25:53.229 lat (msec): min=219, max=1051, avg=641.64, stdev=169.62 00:25:53.229 clat percentiles (msec): 00:25:53.229 | 1.00th=[ 243], 5.00th=[ 284], 10.00th=[ 464], 20.00th=[ 506], 00:25:53.229 | 30.00th=[ 550], 40.00th=[ 584], 50.00th=[ 617], 60.00th=[ 634], 00:25:53.229 | 70.00th=[ 709], 80.00th=[ 802], 90.00th=[ 860], 95.00th=[ 911], 00:25:53.229 | 99.00th=[ 969], 99.50th=[ 995], 99.90th=[ 1053], 99.95th=[ 1053], 00:25:53.229 | 99.99th=[ 1053] 00:25:53.229 bw ( KiB/s): min=13312, max=33792, per=3.48%, avg=24343.60, stdev=5628.68, samples=20 00:25:53.229 iops : min= 52, max= 132, avg=95.05, stdev=22.02, samples=20 00:25:53.229 lat (msec) : 250=1.77%, 500=17.24%, 750=56.06%, 1000=24.43%, 2000=0.49% 00:25:53.229 cpu : usr=0.00%, sys=0.47%, ctx=130, majf=0, minf=4097 00:25:53.229 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.2%, >=64=93.8% 00:25:53.229 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.229 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:53.229 issued rwts: total=1015,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:53.229 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:53.229 job5: (groupid=0, jobs=1): err= 0: pid=3029001: Sun Nov 17 09:25:56 2024 00:25:53.229 read: IOPS=553, BW=138MiB/s (145MB/s)(1401MiB/10118msec) 00:25:53.229 slat (usec): min=12, max=156883, avg=1757.84, stdev=8373.15 00:25:53.229 clat (usec): min=1699, max=536968, avg=113753.17, stdev=106266.82 00:25:53.229 lat (msec): min=2, max=548, avg=115.51, stdev=107.87 00:25:53.229 clat percentiles (msec): 00:25:53.229 | 1.00th=[ 16], 5.00th=[ 39], 10.00th=[ 43], 20.00th=[ 48], 00:25:53.229 | 30.00th=[ 52], 40.00th=[ 55], 50.00th=[ 57], 60.00th=[ 63], 00:25:53.229 | 70.00th=[ 115], 80.00th=[ 188], 90.00th=[ 288], 95.00th=[ 351], 00:25:53.229 | 99.00th=[ 464], 99.50th=[ 498], 99.90th=[ 527], 99.95th=[ 527], 00:25:53.229 | 99.99th=[ 542] 00:25:53.229 bw ( KiB/s): min=28160, max=343040, per=20.25%, avg=141744.15, stdev=112752.77, samples=20 00:25:53.229 iops : min= 110, max= 1340, avg=553.65, stdev=440.44, samples=20 00:25:53.229 lat (msec) : 2=0.02%, 4=0.11%, 10=0.43%, 20=0.66%, 50=24.37% 00:25:53.229 lat (msec) : 100=41.56%, 250=19.05%, 500=13.35%, 750=0.46% 00:25:53.229 cpu : usr=0.27%, sys=1.76%, ctx=671, majf=0, minf=3721 00:25:53.229 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:25:53.229 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.229 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:53.229 issued rwts: total=5602,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:53.229 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:53.229 job6: (groupid=0, jobs=1): err= 0: pid=3029004: Sun Nov 17 09:25:56 2024 00:25:53.229 read: IOPS=167, BW=41.8MiB/s (43.9MB/s)(426MiB/10181msec) 00:25:53.229 slat (usec): min=8, max=437576, avg=5597.45, stdev=25173.51 00:25:53.229 clat (msec): min=38, max=1076, avg=376.53, stdev=251.07 00:25:53.229 lat (msec): min=38, max=1076, avg=382.13, stdev=254.88 00:25:53.229 clat percentiles (msec): 00:25:53.229 | 1.00th=[ 42], 5.00th=[ 78], 
10.00th=[ 107], 20.00th=[ 126], 00:25:53.229 | 30.00th=[ 144], 40.00th=[ 201], 50.00th=[ 317], 60.00th=[ 485], 00:25:53.229 | 70.00th=[ 567], 80.00th=[ 634], 90.00th=[ 709], 95.00th=[ 785], 00:25:53.229 | 99.00th=[ 969], 99.50th=[ 1003], 99.90th=[ 1083], 99.95th=[ 1083], 00:25:53.229 | 99.99th=[ 1083] 00:25:53.229 bw ( KiB/s): min=15360, max=139264, per=6.00%, avg=41980.75, stdev=34558.02, samples=20 00:25:53.229 iops : min= 60, max= 544, avg=163.95, stdev=135.00, samples=20 00:25:53.229 lat (msec) : 50=2.76%, 100=5.34%, 250=35.80%, 500=19.48%, 750=29.23% 00:25:53.229 lat (msec) : 1000=6.87%, 2000=0.53% 00:25:53.229 cpu : usr=0.10%, sys=0.52%, ctx=216, majf=0, minf=4097 00:25:53.229 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=0.9%, 32=1.9%, >=64=96.3% 00:25:53.229 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.229 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:53.229 issued rwts: total=1704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:53.229 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:53.229 job7: (groupid=0, jobs=1): err= 0: pid=3029005: Sun Nov 17 09:25:56 2024 00:25:53.229 read: IOPS=394, BW=98.7MiB/s (104MB/s)(1002MiB/10151msec) 00:25:53.229 slat (usec): min=13, max=170772, avg=2442.04, stdev=9329.99 00:25:53.229 clat (msec): min=10, max=478, avg=159.48, stdev=96.65 00:25:53.229 lat (msec): min=10, max=478, avg=161.92, stdev=98.12 00:25:53.229 clat percentiles (msec): 00:25:53.229 | 1.00th=[ 30], 5.00th=[ 41], 10.00th=[ 50], 20.00th=[ 56], 00:25:53.229 | 30.00th=[ 86], 40.00th=[ 124], 50.00th=[ 155], 60.00th=[ 167], 00:25:53.229 | 70.00th=[ 211], 80.00th=[ 262], 90.00th=[ 292], 95.00th=[ 326], 00:25:53.229 | 99.00th=[ 388], 99.50th=[ 435], 99.90th=[ 477], 99.95th=[ 477], 00:25:53.229 | 99.99th=[ 477] 00:25:53.230 bw ( KiB/s): min=50688, max=271872, per=14.42%, avg=100985.60, stdev=60307.19, samples=20 00:25:53.230 iops : min= 198, max= 1062, avg=394.45, stdev=235.59, samples=20 00:25:53.230 lat (msec) : 20=0.17%, 50=10.78%, 100=22.90%, 250=42.35%, 500=23.80% 00:25:53.230 cpu : usr=0.26%, sys=1.54%, ctx=596, majf=0, minf=4097 00:25:53.230 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:25:53.230 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.230 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:53.230 issued rwts: total=4009,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:53.230 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:53.230 job8: (groupid=0, jobs=1): err= 0: pid=3029006: Sun Nov 17 09:25:56 2024 00:25:53.230 read: IOPS=184, BW=46.2MiB/s (48.5MB/s)(471MiB/10180msec) 00:25:53.230 slat (usec): min=9, max=340316, avg=3119.71, stdev=18890.66 00:25:53.230 clat (msec): min=39, max=966, avg=342.71, stdev=189.38 00:25:53.230 lat (msec): min=39, max=966, avg=345.83, stdev=192.04 00:25:53.230 clat percentiles (msec): 00:25:53.230 | 1.00th=[ 70], 5.00th=[ 111], 10.00th=[ 150], 20.00th=[ 186], 00:25:53.230 | 30.00th=[ 207], 40.00th=[ 236], 50.00th=[ 275], 60.00th=[ 342], 00:25:53.230 | 70.00th=[ 409], 80.00th=[ 550], 90.00th=[ 642], 95.00th=[ 701], 00:25:53.230 | 99.00th=[ 844], 99.50th=[ 877], 99.90th=[ 969], 99.95th=[ 969], 00:25:53.230 | 99.99th=[ 969] 00:25:53.230 bw ( KiB/s): min=17408, max=91136, per=6.65%, avg=46560.05, stdev=20170.67, samples=20 00:25:53.230 iops : min= 68, max= 356, avg=181.85, stdev=78.77, samples=20 00:25:53.230 lat (msec) : 50=0.21%, 100=2.28%, 250=41.13%, 500=32.68%, 
750=21.94% 00:25:53.230 lat (msec) : 1000=1.75% 00:25:53.230 cpu : usr=0.08%, sys=0.54%, ctx=303, majf=0, minf=4097 00:25:53.230 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.9%, 32=1.7%, >=64=96.7% 00:25:53.230 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.230 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:53.230 issued rwts: total=1882,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:53.230 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:53.230 job9: (groupid=0, jobs=1): err= 0: pid=3029008: Sun Nov 17 09:25:56 2024 00:25:53.230 read: IOPS=398, BW=99.5MiB/s (104MB/s)(1007MiB/10114msec) 00:25:53.230 slat (usec): min=13, max=125308, avg=2455.23, stdev=9825.04 00:25:53.230 clat (msec): min=28, max=456, avg=158.16, stdev=84.83 00:25:53.230 lat (msec): min=28, max=458, avg=160.62, stdev=86.39 00:25:53.230 clat percentiles (msec): 00:25:53.230 | 1.00th=[ 41], 5.00th=[ 54], 10.00th=[ 58], 20.00th=[ 75], 00:25:53.230 | 30.00th=[ 91], 40.00th=[ 120], 50.00th=[ 157], 60.00th=[ 176], 00:25:53.230 | 70.00th=[ 205], 80.00th=[ 232], 90.00th=[ 279], 95.00th=[ 313], 00:25:53.230 | 99.00th=[ 372], 99.50th=[ 397], 99.90th=[ 397], 99.95th=[ 405], 00:25:53.230 | 99.99th=[ 456] 00:25:53.230 bw ( KiB/s): min=43520, max=227328, per=14.49%, avg=101465.40, stdev=53179.91, samples=20 00:25:53.230 iops : min= 170, max= 888, avg=396.30, stdev=207.71, samples=20 00:25:53.230 lat (msec) : 50=4.10%, 100=30.54%, 250=51.08%, 500=14.28% 00:25:53.230 cpu : usr=0.33%, sys=1.26%, ctx=348, majf=0, minf=4097 00:25:53.230 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:25:53.230 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.230 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:53.230 issued rwts: total=4027,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:53.230 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:53.230 job10: (groupid=0, jobs=1): err= 0: pid=3029009: Sun Nov 17 09:25:56 2024 00:25:53.230 read: IOPS=288, BW=72.2MiB/s (75.7MB/s)(733MiB/10150msec) 00:25:53.230 slat (usec): min=13, max=123755, avg=3143.48, stdev=11407.89 00:25:53.230 clat (msec): min=28, max=673, avg=218.21, stdev=98.92 00:25:53.230 lat (msec): min=28, max=673, avg=221.35, stdev=100.26 00:25:53.230 clat percentiles (msec): 00:25:53.230 | 1.00th=[ 35], 5.00th=[ 83], 10.00th=[ 101], 20.00th=[ 117], 00:25:53.230 | 30.00th=[ 150], 40.00th=[ 184], 50.00th=[ 213], 60.00th=[ 257], 00:25:53.230 | 70.00th=[ 271], 80.00th=[ 288], 90.00th=[ 334], 95.00th=[ 388], 00:25:53.230 | 99.00th=[ 514], 99.50th=[ 567], 99.90th=[ 634], 99.95th=[ 634], 00:25:53.230 | 99.99th=[ 676] 00:25:53.230 bw ( KiB/s): min=38912, max=151040, per=10.49%, avg=73440.20, stdev=32550.41, samples=20 00:25:53.230 iops : min= 152, max= 590, avg=286.85, stdev=127.16, samples=20 00:25:53.230 lat (msec) : 50=1.06%, 100=9.11%, 250=47.48%, 500=41.03%, 750=1.33% 00:25:53.230 cpu : usr=0.16%, sys=1.06%, ctx=342, majf=0, minf=4097 00:25:53.230 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:25:53.230 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.230 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:53.230 issued rwts: total=2932,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:53.230 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:53.230 00:25:53.230 Run status group 0 (all jobs): 00:25:53.230 READ: bw=684MiB/s 
(717MB/s), 24.9MiB/s-138MiB/s (26.1MB/s-145MB/s), io=6963MiB (7301MB), run=10114-10184msec 00:25:53.230 00:25:53.230 Disk stats (read/write): 00:25:53.230 nvme0n1: ios=5803/0, merge=0/0, ticks=1235145/0, in_queue=1235145, util=97.25% 00:25:53.230 nvme10n1: ios=2297/0, merge=0/0, ticks=1221138/0, in_queue=1221138, util=97.45% 00:25:53.230 nvme1n1: ios=2216/0, merge=0/0, ticks=1201854/0, in_queue=1201854, util=97.73% 00:25:53.230 nvme2n1: ios=2497/0, merge=0/0, ticks=1223524/0, in_queue=1223524, util=97.87% 00:25:53.230 nvme3n1: ios=1869/0, merge=0/0, ticks=1182031/0, in_queue=1182031, util=97.96% 00:25:53.230 nvme4n1: ios=11076/0, merge=0/0, ticks=1232034/0, in_queue=1232034, util=98.31% 00:25:53.230 nvme5n1: ios=3280/0, merge=0/0, ticks=1215315/0, in_queue=1215315, util=98.46% 00:25:53.230 nvme6n1: ios=7870/0, merge=0/0, ticks=1229074/0, in_queue=1229074, util=98.58% 00:25:53.230 nvme7n1: ios=3610/0, merge=0/0, ticks=1240052/0, in_queue=1240052, util=98.98% 00:25:53.230 nvme8n1: ios=7878/0, merge=0/0, ticks=1234071/0, in_queue=1234071, util=99.15% 00:25:53.230 nvme9n1: ios=5697/0, merge=0/0, ticks=1228012/0, in_queue=1228012, util=99.27% 00:25:53.230 09:25:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:25:53.230 [global] 00:25:53.230 thread=1 00:25:53.230 invalidate=1 00:25:53.230 rw=randwrite 00:25:53.230 time_based=1 00:25:53.230 runtime=10 00:25:53.230 ioengine=libaio 00:25:53.230 direct=1 00:25:53.230 bs=262144 00:25:53.230 iodepth=64 00:25:53.230 norandommap=1 00:25:53.230 numjobs=1 00:25:53.230 00:25:53.230 [job0] 00:25:53.230 filename=/dev/nvme0n1 00:25:53.230 [job1] 00:25:53.230 filename=/dev/nvme10n1 00:25:53.230 [job2] 00:25:53.230 filename=/dev/nvme1n1 00:25:53.230 [job3] 00:25:53.230 filename=/dev/nvme2n1 00:25:53.230 [job4] 00:25:53.230 filename=/dev/nvme3n1 00:25:53.230 [job5] 00:25:53.230 filename=/dev/nvme4n1 00:25:53.230 [job6] 00:25:53.230 filename=/dev/nvme5n1 00:25:53.230 [job7] 00:25:53.230 filename=/dev/nvme6n1 00:25:53.230 [job8] 00:25:53.230 filename=/dev/nvme7n1 00:25:53.230 [job9] 00:25:53.230 filename=/dev/nvme8n1 00:25:53.230 [job10] 00:25:53.230 filename=/dev/nvme9n1 00:25:53.231 Could not set queue depth (nvme0n1) 00:25:53.231 Could not set queue depth (nvme10n1) 00:25:53.231 Could not set queue depth (nvme1n1) 00:25:53.231 Could not set queue depth (nvme2n1) 00:25:53.231 Could not set queue depth (nvme3n1) 00:25:53.231 Could not set queue depth (nvme4n1) 00:25:53.231 Could not set queue depth (nvme5n1) 00:25:53.231 Could not set queue depth (nvme6n1) 00:25:53.231 Could not set queue depth (nvme7n1) 00:25:53.231 Could not set queue depth (nvme8n1) 00:25:53.231 Could not set queue depth (nvme9n1) 00:25:53.231 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:53.231 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:53.231 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:53.231 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:53.231 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:53.231 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 
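The fio-wrapper invocations recorded above expand the wrapper flags (-i 262144 block size, -d 64 queue depth, -t read/randwrite workload, -r 10 second runtime) into the [global] and [jobN] sections echoed in the log, one job per connected SPDK namespace. As a rough sketch of that expansion (the output path, variable names and lsblk-based device discovery below are illustrative assumptions, not the wrapper's actual implementation), an equivalent job file could be generated and run like this:

# Build a job file matching the [global]/[jobN] options echoed above.
workload=read             # the second pass above uses randwrite
bs=262144                 # -i 262144
iodepth=64                # -d 64
runtime=10                # -r 10
out=multiconnection.fio   # illustrative file name

cat <<EOF > "$out"
[global]
thread=1
invalidate=1
rw=$workload
time_based=1
runtime=$runtime
ioengine=libaio
direct=1
bs=$bs
iodepth=$iodepth
norandommap=1
numjobs=1
EOF

i=0
for dev in $(lsblk -ndo NAME | grep '^nvme'); do   # e.g. nvme0n1 .. nvme10n1
  printf '\n[job%d]\nfilename=/dev/%s\n' "$i" "$dev" >> "$out"
  i=$((i + 1))
done

fio "$out"

The repeated "Could not set queue depth" lines are fio warnings (it could not adjust the block-layer queue setting for the NVMe-oF block devices); as the per-job results above show, the jobs still run at the requested iodepth=64.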
256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:53.231 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:53.231 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:53.231 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:53.231 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:53.231 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:53.231 fio-3.35 00:25:53.231 Starting 11 threads 00:26:03.201 00:26:03.201 job0: (groupid=0, jobs=1): err= 0: pid=3029732: Sun Nov 17 09:26:07 2024 00:26:03.201 write: IOPS=306, BW=76.6MiB/s (80.3MB/s)(773MiB/10082msec); 0 zone resets 00:26:03.201 slat (usec): min=20, max=132754, avg=2095.18, stdev=6767.59 00:26:03.201 clat (usec): min=1721, max=895946, avg=206596.67, stdev=170918.06 00:26:03.201 lat (usec): min=1758, max=895985, avg=208691.85, stdev=172402.57 00:26:03.201 clat percentiles (msec): 00:26:03.202 | 1.00th=[ 6], 5.00th=[ 21], 10.00th=[ 34], 20.00th=[ 45], 00:26:03.202 | 30.00th=[ 68], 40.00th=[ 117], 50.00th=[ 159], 60.00th=[ 197], 00:26:03.202 | 70.00th=[ 305], 80.00th=[ 380], 90.00th=[ 422], 95.00th=[ 535], 00:26:03.202 | 99.00th=[ 701], 99.50th=[ 760], 99.90th=[ 885], 99.95th=[ 894], 00:26:03.202 | 99.99th=[ 894] 00:26:03.202 bw ( KiB/s): min=32256, max=244224, per=9.25%, avg=77496.05, stdev=53307.72, samples=20 00:26:03.202 iops : min= 126, max= 954, avg=302.70, stdev=208.24, samples=20 00:26:03.202 lat (msec) : 2=0.03%, 4=0.23%, 10=2.30%, 20=2.27%, 50=20.94% 00:26:03.202 lat (msec) : 100=7.28%, 250=32.14%, 500=28.83%, 750=5.44%, 1000=0.55% 00:26:03.202 cpu : usr=0.92%, sys=1.16%, ctx=1939, majf=0, minf=1 00:26:03.202 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:26:03.202 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:03.202 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:03.202 issued rwts: total=0,3090,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:03.202 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:03.202 job1: (groupid=0, jobs=1): err= 0: pid=3029744: Sun Nov 17 09:26:07 2024 00:26:03.202 write: IOPS=404, BW=101MiB/s (106MB/s)(1036MiB/10254msec); 0 zone resets 00:26:03.202 slat (usec): min=22, max=204209, avg=1413.55, stdev=5824.47 00:26:03.202 clat (usec): min=1490, max=865421, avg=156862.43, stdev=145197.02 00:26:03.202 lat (usec): min=1594, max=865506, avg=158275.98, stdev=146517.45 00:26:03.202 clat percentiles (msec): 00:26:03.202 | 1.00th=[ 4], 5.00th=[ 14], 10.00th=[ 26], 20.00th=[ 38], 00:26:03.202 | 30.00th=[ 60], 40.00th=[ 75], 50.00th=[ 110], 60.00th=[ 144], 00:26:03.202 | 70.00th=[ 180], 80.00th=[ 288], 90.00th=[ 401], 95.00th=[ 426], 00:26:03.202 | 99.00th=[ 575], 99.50th=[ 726], 99.90th=[ 835], 99.95th=[ 860], 00:26:03.202 | 99.99th=[ 869] 00:26:03.202 bw ( KiB/s): min=34304, max=289280, per=12.46%, avg=104439.40, stdev=69520.78, samples=20 00:26:03.202 iops : min= 134, max= 1130, avg=407.95, stdev=271.57, samples=20 00:26:03.202 lat (msec) : 2=0.34%, 4=0.87%, 10=2.17%, 20=4.54%, 50=18.61% 00:26:03.202 lat (msec) : 100=21.21%, 250=31.06%, 500=19.84%, 750=1.09%, 1000=0.29% 00:26:03.202 cpu : usr=0.97%, sys=1.71%, ctx=2781, majf=0, minf=1 00:26:03.202 IO depths : 1=0.1%, 
2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:26:03.202 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:03.202 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:03.202 issued rwts: total=0,4144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:03.202 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:03.202 job2: (groupid=0, jobs=1): err= 0: pid=3029745: Sun Nov 17 09:26:07 2024 00:26:03.202 write: IOPS=182, BW=45.5MiB/s (47.7MB/s)(462MiB/10151msec); 0 zone resets 00:26:03.202 slat (usec): min=22, max=185508, avg=5075.56, stdev=12272.66 00:26:03.202 clat (msec): min=15, max=965, avg=346.32, stdev=182.97 00:26:03.202 lat (msec): min=15, max=965, avg=351.40, stdev=185.43 00:26:03.202 clat percentiles (msec): 00:26:03.202 | 1.00th=[ 37], 5.00th=[ 101], 10.00th=[ 159], 20.00th=[ 211], 00:26:03.202 | 30.00th=[ 249], 40.00th=[ 271], 50.00th=[ 305], 60.00th=[ 372], 00:26:03.202 | 70.00th=[ 401], 80.00th=[ 439], 90.00th=[ 617], 95.00th=[ 726], 00:26:03.202 | 99.00th=[ 919], 99.50th=[ 936], 99.90th=[ 969], 99.95th=[ 969], 00:26:03.202 | 99.99th=[ 969] 00:26:03.202 bw ( KiB/s): min=16384, max=87552, per=5.45%, avg=45690.25, stdev=21053.23, samples=20 00:26:03.202 iops : min= 64, max= 342, avg=178.45, stdev=82.22, samples=20 00:26:03.202 lat (msec) : 20=0.05%, 50=1.57%, 100=3.35%, 250=26.95%, 500=52.49% 00:26:03.202 lat (msec) : 750=10.71%, 1000=4.87% 00:26:03.202 cpu : usr=0.57%, sys=0.63%, ctx=578, majf=0, minf=1 00:26:03.202 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.9%, 32=1.7%, >=64=96.6% 00:26:03.202 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:03.202 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:03.202 issued rwts: total=0,1848,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:03.202 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:03.202 job3: (groupid=0, jobs=1): err= 0: pid=3029746: Sun Nov 17 09:26:07 2024 00:26:03.202 write: IOPS=210, BW=52.7MiB/s (55.2MB/s)(535MiB/10151msec); 0 zone resets 00:26:03.202 slat (usec): min=24, max=122452, avg=2786.86, stdev=8533.79 00:26:03.202 clat (usec): min=1120, max=939109, avg=300765.40, stdev=169758.75 00:26:03.202 lat (usec): min=1183, max=948361, avg=303552.26, stdev=171088.05 00:26:03.202 clat percentiles (msec): 00:26:03.202 | 1.00th=[ 5], 5.00th=[ 31], 10.00th=[ 68], 20.00th=[ 163], 00:26:03.202 | 30.00th=[ 226], 40.00th=[ 255], 50.00th=[ 292], 60.00th=[ 355], 00:26:03.202 | 70.00th=[ 397], 80.00th=[ 414], 90.00th=[ 451], 95.00th=[ 600], 00:26:03.202 | 99.00th=[ 860], 99.50th=[ 885], 99.90th=[ 919], 99.95th=[ 927], 00:26:03.202 | 99.99th=[ 936] 00:26:03.202 bw ( KiB/s): min=22016, max=113152, per=6.34%, avg=53141.75, stdev=20533.11, samples=20 00:26:03.202 iops : min= 86, max= 442, avg=207.55, stdev=80.23, samples=20 00:26:03.202 lat (msec) : 2=0.33%, 4=0.47%, 10=1.36%, 20=1.40%, 50=3.93% 00:26:03.202 lat (msec) : 100=7.15%, 250=22.30%, 500=55.07%, 750=5.84%, 1000=2.15% 00:26:03.202 cpu : usr=0.58%, sys=0.73%, ctx=1250, majf=0, minf=1 00:26:03.202 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.5%, >=64=97.1% 00:26:03.202 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:03.202 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:03.202 issued rwts: total=0,2139,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:03.202 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:03.202 job4: (groupid=0, jobs=1): err= 0: pid=3029747: Sun Nov 
17 09:26:07 2024 00:26:03.202 write: IOPS=319, BW=79.8MiB/s (83.7MB/s)(819MiB/10254msec); 0 zone resets 00:26:03.202 slat (usec): min=16, max=69650, avg=2338.01, stdev=6076.93 00:26:03.202 clat (usec): min=1913, max=646634, avg=197990.27, stdev=118182.06 00:26:03.202 lat (usec): min=1950, max=646691, avg=200328.28, stdev=119536.06 00:26:03.202 clat percentiles (msec): 00:26:03.202 | 1.00th=[ 6], 5.00th=[ 29], 10.00th=[ 80], 20.00th=[ 116], 00:26:03.202 | 30.00th=[ 126], 40.00th=[ 131], 50.00th=[ 174], 60.00th=[ 197], 00:26:03.202 | 70.00th=[ 222], 80.00th=[ 300], 90.00th=[ 397], 95.00th=[ 418], 00:26:03.202 | 99.00th=[ 514], 99.50th=[ 567], 99.90th=[ 617], 99.95th=[ 634], 00:26:03.202 | 99.99th=[ 651] 00:26:03.202 bw ( KiB/s): min=36864, max=158208, per=9.81%, avg=82169.55, stdev=36558.78, samples=20 00:26:03.202 iops : min= 144, max= 618, avg=320.90, stdev=142.75, samples=20 00:26:03.202 lat (msec) : 2=0.09%, 4=0.18%, 10=2.11%, 20=1.68%, 50=3.51% 00:26:03.202 lat (msec) : 100=4.83%, 250=60.60%, 500=25.93%, 750=1.07% 00:26:03.202 cpu : usr=0.89%, sys=0.99%, ctx=1664, majf=0, minf=1 00:26:03.202 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:26:03.202 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:03.202 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:03.202 issued rwts: total=0,3274,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:03.202 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:03.202 job5: (groupid=0, jobs=1): err= 0: pid=3029748: Sun Nov 17 09:26:07 2024 00:26:03.202 write: IOPS=230, BW=57.6MiB/s (60.4MB/s)(585MiB/10151msec); 0 zone resets 00:26:03.202 slat (usec): min=26, max=258356, avg=2704.43, stdev=9619.91 00:26:03.202 clat (usec): min=1031, max=912931, avg=274449.06, stdev=175755.24 00:26:03.202 lat (usec): min=1094, max=926885, avg=277153.50, stdev=177072.23 00:26:03.202 clat percentiles (msec): 00:26:03.202 | 1.00th=[ 3], 5.00th=[ 8], 10.00th=[ 29], 20.00th=[ 128], 00:26:03.202 | 30.00th=[ 188], 40.00th=[ 243], 50.00th=[ 271], 60.00th=[ 305], 00:26:03.202 | 70.00th=[ 351], 80.00th=[ 388], 90.00th=[ 477], 95.00th=[ 617], 00:26:03.202 | 99.00th=[ 852], 99.50th=[ 877], 99.90th=[ 911], 99.95th=[ 911], 00:26:03.202 | 99.99th=[ 911] 00:26:03.202 bw ( KiB/s): min=23552, max=132608, per=6.96%, avg=58281.25, stdev=30513.73, samples=20 00:26:03.202 iops : min= 92, max= 518, avg=227.60, stdev=119.21, samples=20 00:26:03.202 lat (msec) : 2=0.34%, 4=2.09%, 10=3.46%, 20=3.29%, 50=4.27% 00:26:03.202 lat (msec) : 100=4.62%, 250=25.34%, 500=47.82%, 750=6.75%, 1000=2.01% 00:26:03.202 cpu : usr=0.79%, sys=0.71%, ctx=1252, majf=0, minf=1 00:26:03.202 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.4%, >=64=97.3% 00:26:03.202 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:03.202 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:03.202 issued rwts: total=0,2340,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:03.202 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:03.202 job6: (groupid=0, jobs=1): err= 0: pid=3029749: Sun Nov 17 09:26:07 2024 00:26:03.202 write: IOPS=333, BW=83.3MiB/s (87.4MB/s)(855MiB/10256msec); 0 zone resets 00:26:03.202 slat (usec): min=14, max=287990, avg=2242.77, stdev=9279.78 00:26:03.202 clat (usec): min=1098, max=1090.6k, avg=189328.79, stdev=162217.64 00:26:03.202 lat (usec): min=1143, max=1092.9k, avg=191571.57, stdev=163524.39 00:26:03.202 clat percentiles (msec): 00:26:03.202 | 1.00th=[ 9], 
5.00th=[ 26], 10.00th=[ 40], 20.00th=[ 59], 00:26:03.202 | 30.00th=[ 62], 40.00th=[ 77], 50.00th=[ 148], 60.00th=[ 194], 00:26:03.202 | 70.00th=[ 255], 80.00th=[ 326], 90.00th=[ 405], 95.00th=[ 447], 00:26:03.202 | 99.00th=[ 760], 99.50th=[ 953], 99.90th=[ 1070], 99.95th=[ 1083], 00:26:03.202 | 99.99th=[ 1099] 00:26:03.202 bw ( KiB/s): min=20480, max=269824, per=10.25%, avg=85888.00, stdev=65679.19, samples=20 00:26:03.202 iops : min= 80, max= 1054, avg=335.50, stdev=256.56, samples=20 00:26:03.202 lat (msec) : 2=0.38%, 4=0.47%, 10=0.38%, 20=1.26%, 50=10.01% 00:26:03.202 lat (msec) : 100=30.08%, 250=26.30%, 500=27.88%, 750=2.22%, 1000=0.64% 00:26:03.202 lat (msec) : 2000=0.38% 00:26:03.202 cpu : usr=0.87%, sys=1.21%, ctx=1463, majf=0, minf=1 00:26:03.202 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:26:03.202 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:03.202 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:03.202 issued rwts: total=0,3418,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:03.202 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:03.202 job7: (groupid=0, jobs=1): err= 0: pid=3029750: Sun Nov 17 09:26:07 2024 00:26:03.202 write: IOPS=334, BW=83.6MiB/s (87.7MB/s)(853MiB/10200msec); 0 zone resets 00:26:03.203 slat (usec): min=15, max=236250, avg=2177.43, stdev=8440.29 00:26:03.203 clat (usec): min=1041, max=654952, avg=189041.95, stdev=140106.90 00:26:03.203 lat (usec): min=1092, max=654985, avg=191219.38, stdev=141663.69 00:26:03.203 clat percentiles (msec): 00:26:03.203 | 1.00th=[ 4], 5.00th=[ 8], 10.00th=[ 13], 20.00th=[ 51], 00:26:03.203 | 30.00th=[ 112], 40.00th=[ 133], 50.00th=[ 186], 60.00th=[ 199], 00:26:03.203 | 70.00th=[ 230], 80.00th=[ 292], 90.00th=[ 405], 95.00th=[ 464], 00:26:03.203 | 99.00th=[ 575], 99.50th=[ 609], 99.90th=[ 634], 99.95th=[ 651], 00:26:03.203 | 99.99th=[ 659] 00:26:03.203 bw ( KiB/s): min=34816, max=216064, per=10.23%, avg=85715.50, stdev=44286.61, samples=20 00:26:03.203 iops : min= 136, max= 844, avg=334.80, stdev=173.01, samples=20 00:26:03.203 lat (msec) : 2=0.50%, 4=0.67%, 10=6.13%, 20=7.86%, 50=4.84% 00:26:03.203 lat (msec) : 100=6.39%, 250=46.58%, 500=23.89%, 750=3.14% 00:26:03.203 cpu : usr=0.88%, sys=1.01%, ctx=1912, majf=0, minf=1 00:26:03.203 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:26:03.203 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:03.203 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:03.203 issued rwts: total=0,3411,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:03.203 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:03.203 job8: (groupid=0, jobs=1): err= 0: pid=3029751: Sun Nov 17 09:26:07 2024 00:26:03.203 write: IOPS=256, BW=64.1MiB/s (67.2MB/s)(656MiB/10232msec); 0 zone resets 00:26:03.203 slat (usec): min=18, max=213037, avg=2383.99, stdev=9458.87 00:26:03.203 clat (usec): min=1218, max=904395, avg=247071.85, stdev=184599.65 00:26:03.203 lat (usec): min=1248, max=904440, avg=249455.84, stdev=186498.60 00:26:03.203 clat percentiles (msec): 00:26:03.203 | 1.00th=[ 3], 5.00th=[ 10], 10.00th=[ 33], 20.00th=[ 77], 00:26:03.203 | 30.00th=[ 138], 40.00th=[ 182], 50.00th=[ 226], 60.00th=[ 271], 00:26:03.203 | 70.00th=[ 305], 80.00th=[ 363], 90.00th=[ 510], 95.00th=[ 634], 00:26:03.203 | 99.00th=[ 827], 99.50th=[ 860], 99.90th=[ 902], 99.95th=[ 902], 00:26:03.203 | 99.99th=[ 902] 00:26:03.203 bw ( KiB/s): min=16384, max=121856, 
per=7.82%, avg=65540.30, stdev=28327.13, samples=20 00:26:03.203 iops : min= 64, max= 476, avg=256.00, stdev=110.67, samples=20 00:26:03.203 lat (msec) : 2=0.53%, 4=0.99%, 10=3.66%, 20=1.64%, 50=7.21% 00:26:03.203 lat (msec) : 100=9.87%, 250=32.06%, 500=33.82%, 750=8.08%, 1000=2.13% 00:26:03.203 cpu : usr=0.63%, sys=0.65%, ctx=1628, majf=0, minf=1 00:26:03.203 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:26:03.203 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:03.203 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:03.203 issued rwts: total=0,2623,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:03.203 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:03.203 job9: (groupid=0, jobs=1): err= 0: pid=3029752: Sun Nov 17 09:26:07 2024 00:26:03.203 write: IOPS=354, BW=88.6MiB/s (92.9MB/s)(891MiB/10056msec); 0 zone resets 00:26:03.203 slat (usec): min=15, max=115329, avg=2148.04, stdev=6999.21 00:26:03.203 clat (usec): min=1396, max=944725, avg=178326.75, stdev=164268.26 00:26:03.203 lat (msec): min=2, max=944, avg=180.47, stdev=166.21 00:26:03.203 clat percentiles (msec): 00:26:03.203 | 1.00th=[ 8], 5.00th=[ 48], 10.00th=[ 62], 20.00th=[ 72], 00:26:03.203 | 30.00th=[ 78], 40.00th=[ 86], 50.00th=[ 117], 60.00th=[ 148], 00:26:03.203 | 70.00th=[ 209], 80.00th=[ 268], 90.00th=[ 376], 95.00th=[ 550], 00:26:03.203 | 99.00th=[ 835], 99.50th=[ 877], 99.90th=[ 919], 99.95th=[ 944], 00:26:03.203 | 99.99th=[ 944] 00:26:03.203 bw ( KiB/s): min=14336, max=225792, per=10.70%, avg=89657.05, stdev=61578.89, samples=20 00:26:03.203 iops : min= 56, max= 882, avg=350.20, stdev=240.55, samples=20 00:26:03.203 lat (msec) : 2=0.08%, 4=0.20%, 10=1.07%, 20=0.62%, 50=4.10% 00:26:03.203 lat (msec) : 100=38.93%, 250=32.29%, 500=17.08%, 750=3.56%, 1000=2.08% 00:26:03.203 cpu : usr=0.82%, sys=0.94%, ctx=1721, majf=0, minf=1 00:26:03.203 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.2% 00:26:03.203 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:03.203 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:03.203 issued rwts: total=0,3565,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:03.203 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:03.203 job10: (groupid=0, jobs=1): err= 0: pid=3029753: Sun Nov 17 09:26:07 2024 00:26:03.203 write: IOPS=362, BW=90.7MiB/s (95.1MB/s)(929MiB/10246msec); 0 zone resets 00:26:03.203 slat (usec): min=18, max=124894, avg=1396.61, stdev=4760.92 00:26:03.203 clat (usec): min=1239, max=1030.6k, avg=174929.61, stdev=151100.59 00:26:03.203 lat (usec): min=1276, max=1030.6k, avg=176326.23, stdev=151699.78 00:26:03.203 clat percentiles (msec): 00:26:03.203 | 1.00th=[ 4], 5.00th=[ 11], 10.00th=[ 32], 20.00th=[ 63], 00:26:03.203 | 30.00th=[ 91], 40.00th=[ 118], 50.00th=[ 130], 60.00th=[ 161], 00:26:03.203 | 70.00th=[ 203], 80.00th=[ 271], 90.00th=[ 363], 95.00th=[ 430], 00:26:03.203 | 99.00th=[ 818], 99.50th=[ 961], 99.90th=[ 1028], 99.95th=[ 1028], 00:26:03.203 | 99.99th=[ 1028] 00:26:03.203 bw ( KiB/s): min=29696, max=169472, per=11.17%, avg=93557.30, stdev=36819.55, samples=20 00:26:03.203 iops : min= 116, max= 662, avg=365.45, stdev=143.81, samples=20 00:26:03.203 lat (msec) : 2=0.19%, 4=0.89%, 10=3.90%, 20=3.34%, 50=4.74% 00:26:03.203 lat (msec) : 100=18.75%, 250=45.57%, 500=19.91%, 750=1.02%, 1000=1.40% 00:26:03.203 lat (msec) : 2000=0.30% 00:26:03.203 cpu : usr=1.07%, sys=1.20%, ctx=2338, majf=0, minf=2 
00:26:03.203 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:26:03.203 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:03.203 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:03.203 issued rwts: total=0,3717,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:03.203 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:03.203 00:26:03.203 Run status group 0 (all jobs): 00:26:03.203 WRITE: bw=818MiB/s (858MB/s), 45.5MiB/s-101MiB/s (47.7MB/s-106MB/s), io=8392MiB (8800MB), run=10056-10256msec 00:26:03.203 00:26:03.203 Disk stats (read/write): 00:26:03.203 nvme0n1: ios=48/5973, merge=0/0, ticks=1338/1222268, in_queue=1223606, util=99.80% 00:26:03.203 nvme10n1: ios=45/8221, merge=0/0, ticks=85/1242075, in_queue=1242160, util=97.82% 00:26:03.203 nvme1n1: ios=45/3525, merge=0/0, ticks=6491/1184530, in_queue=1191021, util=99.91% 00:26:03.203 nvme2n1: ios=43/4108, merge=0/0, ticks=4774/1210168, in_queue=1214942, util=99.92% 00:26:03.203 nvme3n1: ios=0/6480, merge=0/0, ticks=0/1233486, in_queue=1233486, util=97.72% 00:26:03.203 nvme4n1: ios=42/4516, merge=0/0, ticks=3295/1192914, in_queue=1196209, util=99.92% 00:26:03.203 nvme5n1: ios=42/6775, merge=0/0, ticks=4136/1206370, in_queue=1210506, util=99.99% 00:26:03.203 nvme6n1: ios=45/6800, merge=0/0, ticks=3306/1221600, in_queue=1224906, util=99.95% 00:26:03.203 nvme7n1: ios=44/5204, merge=0/0, ticks=3397/1217674, in_queue=1221071, util=99.95% 00:26:03.203 nvme8n1: ios=0/6831, merge=0/0, ticks=0/1219351, in_queue=1219351, util=98.98% 00:26:03.203 nvme9n1: ios=0/7381, merge=0/0, ticks=0/1240746, in_queue=1240746, util=99.13% 00:26:03.203 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:26:03.203 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:26:03.203 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:03.203 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:03.203 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:03.203 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:26:03.203 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:03.203 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:03.203 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK1 00:26:03.203 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:03.203 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK1 00:26:03.203 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:03.203 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:03.203 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.203 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # 
set +x 00:26:03.203 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.203 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:03.203 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:26:03.203 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:26:03.203 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:26:03.203 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:03.203 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:03.203 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK2 00:26:03.203 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:03.203 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK2 00:26:03.203 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:03.203 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:03.203 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.203 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:03.203 09:26:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.203 09:26:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:03.203 09:26:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:26:03.462 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:26:03.462 09:26:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:26:03.462 09:26:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:03.462 09:26:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:03.462 09:26:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK3 00:26:03.462 09:26:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:03.462 09:26:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK3 00:26:03.462 09:26:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:03.462 09:26:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:26:03.462 09:26:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.462 09:26:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # 
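The waitforserial / waitforserial_disconnect traces woven through the log come down to the same polling idiom from common/autotest_common.sh: list block devices with lsblk -l -o NAME,SERIAL and loop (sleeping 2 seconds, up to 15 retries) until the SPDK-assigned serial (SPDK1..SPDK11) appears or disappears. The helpers below are a simplified reconstruction of that idiom, not a verbatim copy of the script; the exact-count check is relaxed to >= for brevity.

# Simplified reconstruction of the polling helpers traced above
# (assumption: not the exact bodies in common/autotest_common.sh).
waitforserial() {
  local serial=$1 want=${2:-1} i=0
  while (( i++ <= 15 )); do
    # Count namespaces whose SERIAL column matches, e.g. SPDK11.
    (( $(lsblk -l -o NAME,SERIAL | grep -c -w "$serial") >= want )) && return 0
    sleep 2
  done
  return 1
}

waitforserial_disconnect() {
  local serial=$1 i=0
  while (( i++ <= 15 )); do
    # Succeed once no block device advertises the serial any more.
    lsblk -l -o NAME,SERIAL | grep -q -w "$serial" || return 0
    sleep 2
  done
  return 1
}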
set +x 00:26:03.462 09:26:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.462 09:26:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:03.462 09:26:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:26:04.028 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:26:04.028 09:26:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:26:04.028 09:26:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:04.028 09:26:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:04.028 09:26:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK4 00:26:04.028 09:26:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:04.028 09:26:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK4 00:26:04.028 09:26:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:04.028 09:26:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:26:04.028 09:26:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.028 09:26:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:04.028 09:26:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.028 09:26:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:04.028 09:26:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:26:04.028 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:26:04.028 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:26:04.028 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:04.028 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK5 00:26:04.029 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:04.029 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:04.029 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK5 00:26:04.287 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:04.287 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:26:04.287 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.287 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # 
set +x 00:26:04.287 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.287 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:04.287 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:26:04.544 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:26:04.544 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:26:04.544 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:04.544 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:04.544 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK6 00:26:04.544 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:04.544 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK6 00:26:04.544 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:04.544 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:26:04.544 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.544 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:04.544 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.544 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:04.544 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:26:04.544 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:26:04.544 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:26:04.544 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:04.544 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:04.544 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK7 00:26:04.544 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:04.544 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK7 00:26:04.802 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:04.802 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:26:04.803 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.803 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # 
set +x 00:26:04.803 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.803 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:04.803 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:26:05.061 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:26:05.061 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:26:05.061 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:05.061 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:05.061 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK8 00:26:05.061 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:05.061 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK8 00:26:05.061 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:05.061 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:26:05.061 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.061 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:05.061 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.061 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:05.061 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:26:05.319 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:26:05.319 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:26:05.319 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:05.319 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK9 00:26:05.319 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:05.319 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:05.319 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK9 00:26:05.319 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:05.319 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:26:05.319 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.319 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # 
set +x 00:26:05.319 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.319 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:05.319 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:26:05.577 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:26:05.577 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:26:05.577 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:05.577 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:05.577 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK10 00:26:05.577 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:05.577 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK10 00:26:05.577 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:05.577 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:26:05.577 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.577 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:05.577 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.577 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:05.577 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:26:05.577 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:26:05.577 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:26:05.577 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:05.577 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:05.577 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK11 00:26:05.577 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:05.577 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK11 00:26:05.577 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:05.577 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:26:05.577 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.577 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:26:05.577 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.577 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:26:05.577 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:26:05.577 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:26:05.577 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:05.577 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:26:05.577 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:05.577 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:26:05.577 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:05.577 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:05.577 rmmod nvme_tcp 00:26:05.577 rmmod nvme_fabrics 00:26:05.836 rmmod nvme_keyring 00:26:05.836 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:05.836 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:26:05.836 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:26:05.836 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@517 -- # '[' -n 3024599 ']' 00:26:05.836 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@518 -- # killprocess 3024599 00:26:05.836 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # '[' -z 3024599 ']' 00:26:05.836 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # kill -0 3024599 00:26:05.836 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # uname 00:26:05.836 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:05.836 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3024599 00:26:05.836 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:05.836 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:05.836 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3024599' 00:26:05.836 killing process with pid 3024599 00:26:05.836 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@973 -- # kill 3024599 00:26:05.836 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@978 -- # wait 3024599 00:26:09.119 09:26:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:09.119 09:26:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:09.119 09:26:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@524 -- # 
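The block above repeats once per subsystem: multiconnection.sh walks $(seq 1 $NVMF_SUBSYS), disconnects the initiator-side controller for cnodeN, waits for the SPDKN serial to vanish, then deletes the target-side subsystem over RPC before nvmftestfini tears the target down. Condensed into a standalone sketch (NVMF_SUBSYS=11 matches the eleven cnodes seen in this run, rpc_cmd is assumed to wrap spdk/scripts/rpc.py, and waitforserial_disconnect is the helper sketched earlier):

# Per-subsystem teardown, condensed from the xtrace above.
NVMF_SUBSYS=11
for i in $(seq 1 "$NVMF_SUBSYS"); do
  nvme disconnect -n "nqn.2016-06.io.spdk:cnode$i"
  waitforserial_disconnect "SPDK$i"
  rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
done

rm -f ./local-job0-0-verify.state
trap - SIGINT SIGTERM EXIT
nvmftestfini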
nvmf_tcp_fini 00:26:09.119 09:26:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # iptr 00:26:09.119 09:26:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-save 00:26:09.119 09:26:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:09.119 09:26:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-restore 00:26:09.119 09:26:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:09.119 09:26:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:09.119 09:26:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:09.119 09:26:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:09.119 09:26:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:11.025 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:11.025 00:26:11.025 real 1m5.431s 00:26:11.025 user 3m50.234s 00:26:11.025 sys 0m15.807s 00:26:11.025 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:11.025 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:11.025 ************************************ 00:26:11.025 END TEST nvmf_multiconnection 00:26:11.025 ************************************ 00:26:11.025 09:26:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:11.025 09:26:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:11.025 09:26:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:11.025 09:26:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:11.025 ************************************ 00:26:11.025 START TEST nvmf_initiator_timeout 00:26:11.025 ************************************ 00:26:11.025 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:11.025 * Looking for test storage... 
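The nvmf_multiconnection teardown traced above (multiconnection.sh lines 37-40) walks every subsystem, disconnects the initiator, waits for the serial number to disappear from lsblk, and only then deletes the subsystem over RPC before nvmftestfini unloads the kernel modules. A minimal sketch of that per-subsystem loop, assuming the helper names and NQN prefix shown in the xtrace output (rpc_cmd and waitforserial_disconnect are harness helpers from common.sh):

    # Per-subsystem teardown pattern seen in the trace above.
    for i in $(seq 1 "$NVMF_SUBSYS"); do
        nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"
        waitforserial_disconnect "SPDK${i}"    # polls lsblk -o NAME,SERIAL until SPDK$i is gone
        rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
    done

Disconnecting before deleting keeps the target from tearing down a subsystem while the host still has block devices backed by it, which is why the serial-number poll sits between the two steps.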
00:26:11.025 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:11.025 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:11.025 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1693 -- # lcov --version 00:26:11.025 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:11.025 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:11.025 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:11.025 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:11.025 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:11.025 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:26:11.025 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:26:11.025 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:26:11.025 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:26:11.025 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:26:11.025 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:26:11.025 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:26:11.025 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:11.025 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 00:26:11.025 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:26:11.025 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:11.025 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:11.025 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:26:11.025 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:26:11.025 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:11.025 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:26:11.025 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:26:11.025 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:26:11.025 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:26:11.025 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:11.025 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:26:11.025 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:26:11.025 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:11.025 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:11.025 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:26:11.025 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:11.025 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:11.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:11.025 --rc genhtml_branch_coverage=1 00:26:11.025 --rc genhtml_function_coverage=1 00:26:11.025 --rc genhtml_legend=1 00:26:11.025 --rc geninfo_all_blocks=1 00:26:11.025 --rc geninfo_unexecuted_blocks=1 00:26:11.025 00:26:11.025 ' 00:26:11.025 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:11.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:11.025 --rc genhtml_branch_coverage=1 00:26:11.025 --rc genhtml_function_coverage=1 00:26:11.025 --rc genhtml_legend=1 00:26:11.025 --rc geninfo_all_blocks=1 00:26:11.025 --rc geninfo_unexecuted_blocks=1 00:26:11.025 00:26:11.025 ' 00:26:11.025 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:11.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:11.025 --rc genhtml_branch_coverage=1 00:26:11.025 --rc genhtml_function_coverage=1 00:26:11.025 --rc genhtml_legend=1 00:26:11.025 --rc geninfo_all_blocks=1 00:26:11.025 --rc geninfo_unexecuted_blocks=1 00:26:11.025 00:26:11.025 ' 00:26:11.025 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:11.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:11.025 --rc genhtml_branch_coverage=1 00:26:11.025 --rc genhtml_function_coverage=1 00:26:11.025 --rc genhtml_legend=1 00:26:11.025 --rc geninfo_all_blocks=1 00:26:11.025 --rc geninfo_unexecuted_blocks=1 00:26:11.025 00:26:11.025 ' 00:26:11.025 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:11.025 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:26:11.025 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:11.025 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:11.025 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:11.025 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:11.025 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:11.025 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:11.025 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:11.025 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:11.025 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:11.025 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:11.025 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:11.025 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:11.025 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:11.025 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:11.026 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:11.026 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:11.026 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:11.026 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:26:11.026 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:11.026 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:11.026 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:11.026 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:11.026 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:11.026 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:11.026 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:26:11.026 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:11.026 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:26:11.026 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:11.026 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:11.026 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:11.026 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:11.026 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:11.026 09:26:15 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:11.026 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:11.026 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:11.026 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:11.026 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:11.026 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:11.026 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:11.026 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:26:11.026 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:11.026 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:11.026 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:11.026 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:11.026 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:11.026 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:11.026 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:11.026 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:11.026 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:11.026 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:11.026 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@309 -- # xtrace_disable 00:26:11.026 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:12.928 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:12.928 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # pci_devs=() 00:26:12.928 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:12.928 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:12.928 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:12.928 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:12.928 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:12.928 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # net_devs=() 00:26:12.928 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:12.928 09:26:17 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # e810=() 00:26:12.928 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # local -ga e810 00:26:12.928 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # x722=() 00:26:12.928 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # local -ga x722 00:26:12.928 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # mlx=() 00:26:12.928 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # local -ga mlx 00:26:12.928 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:12.928 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:12.928 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:12.928 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:12.928 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:12.928 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:12.928 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:12.928 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:12.928 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:12.928 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:12.928 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:12.928 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:12.928 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:12.928 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:12.928 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:12.928 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:12.928 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:12.928 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:12.928 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:12.928 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:12.928 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:12.928 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:12.928 09:26:17 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:12.928 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:12.928 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:12.928 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:12.928 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:12.928 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:12.928 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:12.928 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:12.928 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:12.928 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:12.928 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:12.928 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:12.928 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:12.928 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:12.928 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:12.928 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:12.928 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:12.928 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:12.928 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:12.928 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:12.928 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:12.928 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:12.928 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:12.928 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:12.928 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:12.928 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:12.928 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:12.928 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:12.928 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:12.928 09:26:17 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:12.928 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:12.928 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:12.928 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:12.928 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:12.928 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:12.928 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:12.928 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # is_hw=yes 00:26:12.928 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:12.928 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:12.928 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:12.928 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:12.928 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:12.928 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:12.928 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:12.928 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:12.928 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:12.928 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:12.928 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:12.928 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:12.928 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:12.928 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:12.928 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:12.929 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:12.929 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:12.929 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:12.929 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:12.929 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:12.929 09:26:17 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:12.929 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:13.187 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:13.187 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:13.187 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:13.187 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:13.187 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:13.187 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:26:13.187 00:26:13.187 --- 10.0.0.2 ping statistics --- 00:26:13.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:13.187 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:26:13.187 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:13.187 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:13.187 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:26:13.187 00:26:13.187 --- 10.0.0.1 ping statistics --- 00:26:13.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:13.187 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:26:13.187 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:13.187 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # return 0 00:26:13.187 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:13.187 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:13.187 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:13.187 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:13.187 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:13.187 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:13.187 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:13.188 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:26:13.188 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:13.188 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:13.188 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:13.188 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@509 -- # nvmfpid=3033334 00:26:13.188 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:13.188 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@510 -- # waitforlisten 3033334 00:26:13.188 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # '[' -z 3033334 ']' 00:26:13.188 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:13.188 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:13.188 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:13.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:13.188 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:13.188 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:13.188 [2024-11-17 09:26:18.079806] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:26:13.188 [2024-11-17 09:26:18.079947] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:13.446 [2024-11-17 09:26:18.226168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:13.446 [2024-11-17 09:26:18.350648] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:13.446 [2024-11-17 09:26:18.350735] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:13.446 [2024-11-17 09:26:18.350760] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:13.446 [2024-11-17 09:26:18.350781] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:13.446 [2024-11-17 09:26:18.350797] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
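nvmf_tcp_init, traced just above, moves one port of the E810 pair into its own network namespace so the target (10.0.0.2) and the initiator (10.0.0.1) exchange NVMe/TCP traffic over a real link on the same host, and nvmfappstart then launches nvmf_tgt inside that namespace. A condensed sketch of the same sequence, using the interface and namespace names from this run (cvl_0_0, cvl_0_1, cvl_0_0_ns_spdk) and the repo-relative nvmf_tgt path:

    # Condensed from the nvmf_tcp_init / nvmfappstart trace above; names are from this run.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target-side port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator IP on the host side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # the harness also tags this rule with an SPDK_NVMF comment for later cleanup
    ping -c 1 10.0.0.2                                                 # sanity-check both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

After the target starts, waitforlisten polls its /var/tmp/spdk.sock RPC socket before the test issues any configuration RPCs, which is the "Waiting for process to start up and listen..." message in the log.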
00:26:13.446 [2024-11-17 09:26:18.353374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:13.446 [2024-11-17 09:26:18.353435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:13.446 [2024-11-17 09:26:18.353477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:13.446 [2024-11-17 09:26:18.353483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:14.428 09:26:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:14.428 09:26:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@868 -- # return 0 00:26:14.428 09:26:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:14.428 09:26:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:14.428 09:26:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:14.428 09:26:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:14.428 09:26:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:26:14.428 09:26:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:14.428 09:26:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.428 09:26:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:14.428 Malloc0 00:26:14.428 09:26:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.428 09:26:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:26:14.428 09:26:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.428 09:26:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:14.428 Delay0 00:26:14.428 09:26:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.428 09:26:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:14.428 09:26:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.428 09:26:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:14.428 [2024-11-17 09:26:19.215712] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:14.428 09:26:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.428 09:26:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:26:14.428 09:26:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.428 09:26:19 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:14.428 09:26:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.428 09:26:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:14.428 09:26:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.428 09:26:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:14.428 09:26:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.428 09:26:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:14.428 09:26:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.428 09:26:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:14.428 [2024-11-17 09:26:19.245422] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:14.428 09:26:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.428 09:26:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:26:15.017 09:26:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:26:15.017 09:26:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # local i=0 00:26:15.017 09:26:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:15.017 09:26:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:15.017 09:26:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1209 -- # sleep 2 00:26:16.916 09:26:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:16.916 09:26:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:16.916 09:26:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:26:16.916 09:26:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:16.916 09:26:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:16.916 09:26:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # return 0 00:26:16.916 09:26:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=3033776 00:26:16.916 09:26:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 
1 -t write -r 60 -v 00:26:16.916 09:26:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:26:16.916 [global] 00:26:16.916 thread=1 00:26:16.916 invalidate=1 00:26:16.916 rw=write 00:26:16.916 time_based=1 00:26:16.916 runtime=60 00:26:16.916 ioengine=libaio 00:26:16.916 direct=1 00:26:16.916 bs=4096 00:26:16.916 iodepth=1 00:26:16.916 norandommap=0 00:26:16.916 numjobs=1 00:26:16.916 00:26:16.916 verify_dump=1 00:26:16.916 verify_backlog=512 00:26:16.916 verify_state_save=0 00:26:16.916 do_verify=1 00:26:16.916 verify=crc32c-intel 00:26:16.916 [job0] 00:26:16.916 filename=/dev/nvme0n1 00:26:16.916 Could not set queue depth (nvme0n1) 00:26:17.174 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:17.174 fio-3.35 00:26:17.174 Starting 1 thread 00:26:20.454 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:26:20.454 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.454 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:20.454 true 00:26:20.454 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.454 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:26:20.454 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.454 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:20.454 true 00:26:20.454 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.454 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:26:20.454 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.454 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:20.454 true 00:26:20.454 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.454 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:26:20.454 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.454 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:20.454 true 00:26:20.454 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.454 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:26:22.983 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:26:22.983 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.983 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout 
-- common/autotest_common.sh@10 -- # set +x 00:26:22.983 true 00:26:22.983 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.983 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:26:22.983 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.983 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:22.983 true 00:26:22.983 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.983 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:26:22.983 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.983 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:22.983 true 00:26:22.983 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.983 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:26:22.983 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.983 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:22.983 true 00:26:22.983 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.983 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:26:22.983 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 3033776 00:27:19.194 00:27:19.194 job0: (groupid=0, jobs=1): err= 0: pid=3033845: Sun Nov 17 09:27:22 2024 00:27:19.194 read: IOPS=146, BW=585KiB/s (599kB/s)(34.3MiB/60013msec) 00:27:19.194 slat (usec): min=5, max=15275, avg=15.00, stdev=204.52 00:27:19.194 clat (usec): min=258, max=41269k, avg=6490.22, stdev=440479.10 00:27:19.194 lat (usec): min=264, max=41270k, avg=6505.22, stdev=440479.36 00:27:19.194 clat percentiles (usec): 00:27:19.194 | 1.00th=[ 281], 5.00th=[ 293], 10.00th=[ 302], 00:27:19.194 | 20.00th=[ 318], 30.00th=[ 330], 40.00th=[ 351], 00:27:19.194 | 50.00th=[ 375], 60.00th=[ 388], 70.00th=[ 400], 00:27:19.194 | 80.00th=[ 412], 90.00th=[ 449], 95.00th=[ 537], 00:27:19.194 | 99.00th=[ 41157], 99.50th=[ 41157], 99.90th=[ 42206], 00:27:19.194 | 99.95th=[ 42206], 99.99th=[17112761] 00:27:19.194 write: IOPS=153, BW=614KiB/s (629kB/s)(36.0MiB/60013msec); 0 zone resets 00:27:19.194 slat (usec): min=6, max=30322, avg=19.68, stdev=315.84 00:27:19.194 clat (usec): min=203, max=1365, avg=286.97, stdev=43.38 00:27:19.194 lat (usec): min=211, max=30723, avg=306.65, stdev=320.73 00:27:19.194 clat percentiles (usec): 00:27:19.194 | 1.00th=[ 217], 5.00th=[ 231], 10.00th=[ 241], 20.00th=[ 251], 00:27:19.194 | 30.00th=[ 260], 40.00th=[ 273], 50.00th=[ 281], 60.00th=[ 293], 00:27:19.194 | 70.00th=[ 306], 80.00th=[ 318], 90.00th=[ 343], 95.00th=[ 371], 00:27:19.194 | 99.00th=[ 408], 99.50th=[ 420], 99.90th=[ 445], 99.95th=[ 465], 00:27:19.194 | 99.99th=[ 1369] 
00:27:19.194 bw ( KiB/s): min= 4096, max= 7616, per=100.00%, avg=5266.29, stdev=1274.22, samples=14 00:27:19.194 iops : min= 1024, max= 1904, avg=1316.57, stdev=318.56, samples=14 00:27:19.194 lat (usec) : 250=10.21%, 500=86.50%, 750=1.56%, 1000=0.01% 00:27:19.194 lat (msec) : 2=0.02%, 4=0.01%, 50=1.70%, >=2000=0.01% 00:27:19.194 cpu : usr=0.29%, sys=0.56%, ctx=18002, majf=0, minf=1 00:27:19.194 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:19.194 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:19.194 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:19.194 issued rwts: total=8780,9216,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:19.194 latency : target=0, window=0, percentile=100.00%, depth=1 00:27:19.194 00:27:19.194 Run status group 0 (all jobs): 00:27:19.194 READ: bw=585KiB/s (599kB/s), 585KiB/s-585KiB/s (599kB/s-599kB/s), io=34.3MiB (36.0MB), run=60013-60013msec 00:27:19.194 WRITE: bw=614KiB/s (629kB/s), 614KiB/s-614KiB/s (629kB/s-629kB/s), io=36.0MiB (37.7MB), run=60013-60013msec 00:27:19.194 00:27:19.194 Disk stats (read/write): 00:27:19.194 nvme0n1: ios=8829/9216, merge=0/0, ticks=16889/2496, in_queue=19385, util=99.85% 00:27:19.194 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:19.194 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:19.194 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:27:19.194 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # local i=0 00:27:19.194 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:27:19.194 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:19.194 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:27:19.194 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:19.194 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1235 -- # return 0 00:27:19.194 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:27:19.194 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:27:19.194 nvmf hotplug test: fio successful as expected 00:27:19.194 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:19.194 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.194 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:19.194 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.194 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:27:19.194 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # 
trap - SIGINT SIGTERM EXIT 00:27:19.194 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:27:19.194 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:19.194 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:27:19.194 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:19.194 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:27:19.194 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:19.194 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:19.194 rmmod nvme_tcp 00:27:19.194 rmmod nvme_fabrics 00:27:19.194 rmmod nvme_keyring 00:27:19.194 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:19.194 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:27:19.195 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:27:19.195 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@517 -- # '[' -n 3033334 ']' 00:27:19.195 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@518 -- # killprocess 3033334 00:27:19.195 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # '[' -z 3033334 ']' 00:27:19.195 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # kill -0 3033334 00:27:19.195 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # uname 00:27:19.195 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:19.195 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3033334 00:27:19.195 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:19.195 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:19.195 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3033334' 00:27:19.195 killing process with pid 3033334 00:27:19.195 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@973 -- # kill 3033334 00:27:19.195 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@978 -- # wait 3033334 00:27:19.195 09:27:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:19.195 09:27:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:19.195 09:27:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:19.195 09:27:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # iptr 00:27:19.195 09:27:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-save 00:27:19.195 09:27:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:19.195 
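nvmftestfini, traced above, is the mirror image of that setup: unload the initiator-side NVMe kernel modules, kill the nvmf_tgt process, strip only the SPDK_NVMF-tagged iptables rules, and tear the namespace back down. A minimal sketch of that cleanup path, with killprocess and the namespace removal standing in for the harness helpers shown in the trace:

    # Cleanup pattern from the nvmftestfini trace; killprocess/remove_spdk_ns are harness helpers.
    set +e
    modprobe -v -r nvme-tcp            # the harness retries this up to 20 times
    modprobe -v -r nvme-fabrics
    set -e
    killprocess "$nvmfpid"             # nvmf_tgt PID saved at start-up (3033334 in this run)
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the rules the test added
    ip netns delete cvl_0_0_ns_spdk    # roughly what remove_spdk_ns does for the per-test namespace
    ip -4 addr flush cvl_0_1

Filtering iptables-save output by the SPDK_NVMF comment is what lets the test remove exactly its own ACCEPT rule without disturbing any pre-existing firewall state on the CI node.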
09:27:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:27:19.195 09:27:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:19.195 09:27:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:19.195 09:27:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:19.195 09:27:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:19.195 09:27:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:21.101 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:21.101 00:27:21.101 real 1m10.118s 00:27:21.101 user 4m14.979s 00:27:21.101 sys 0m7.998s 00:27:21.101 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:21.101 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:21.101 ************************************ 00:27:21.101 END TEST nvmf_initiator_timeout 00:27:21.101 ************************************ 00:27:21.101 09:27:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:27:21.101 09:27:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:27:21.101 09:27:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:27:21.101 09:27:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:27:21.101 09:27:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:23.004 09:27:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:23.004 09:27:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:27:23.004 09:27:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:23.004 09:27:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:23.004 09:27:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:23.004 09:27:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:23.004 09:27:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:23.004 09:27:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:27:23.004 09:27:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:23.004 09:27:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:27:23.004 09:27:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:27:23.004 09:27:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:27:23.004 09:27:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:27:23.004 09:27:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:27:23.004 09:27:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:27:23.004 09:27:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:23.004 09:27:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:23.004 09:27:27 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:23.004 09:27:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:23.004 09:27:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:23.004 09:27:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:23.004 09:27:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:23.004 09:27:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:23.004 09:27:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:23.004 09:27:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:23.004 09:27:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:23.004 09:27:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:23.004 09:27:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:23.004 09:27:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:23.004 09:27:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:23.004 09:27:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:23.004 09:27:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:23.004 09:27:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:23.004 09:27:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:23.004 09:27:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:23.004 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:23.004 09:27:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:23.004 09:27:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:23.004 09:27:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:23.004 09:27:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:23.004 09:27:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:23.004 09:27:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:23.004 09:27:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:23.004 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:23.004 09:27:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:23.004 09:27:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:23.004 09:27:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:23.004 09:27:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:23.004 09:27:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:23.004 09:27:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:23.004 09:27:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:23.004 09:27:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # 
[[ tcp == rdma ]] 00:27:23.004 09:27:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:23.004 09:27:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:23.004 09:27:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:23.004 09:27:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:23.004 09:27:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:23.004 09:27:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:23.004 09:27:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:23.004 09:27:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:23.004 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:23.004 09:27:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:23.004 09:27:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:23.004 09:27:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:23.004 09:27:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:23.004 09:27:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:23.004 09:27:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:23.004 09:27:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:23.004 09:27:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:23.004 09:27:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:23.004 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:23.004 09:27:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:23.004 09:27:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:23.004 09:27:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:23.004 09:27:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:27:23.004 09:27:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:27:23.004 09:27:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:23.004 09:27:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:23.004 09:27:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:23.004 ************************************ 00:27:23.004 START TEST nvmf_perf_adq 00:27:23.004 ************************************ 00:27:23.004 09:27:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:27:23.004 * Looking for test storage... 
00:27:23.004 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:23.004 09:27:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:23.004 09:27:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lcov --version 00:27:23.004 09:27:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:23.264 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:23.264 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:23.264 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:23.264 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:23.264 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:27:23.264 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:27:23.264 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:27:23.264 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:27:23.264 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:27:23.264 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:27:23.264 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:27:23.264 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:23.264 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:27:23.264 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:27:23.264 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:23.264 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:23.264 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:27:23.264 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:27:23.264 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:23.264 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:27:23.264 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:27:23.264 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:27:23.264 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:27:23.264 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:23.264 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:27:23.264 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:27:23.264 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:23.264 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:23.264 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:27:23.264 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:23.264 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:23.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:23.265 --rc genhtml_branch_coverage=1 00:27:23.265 --rc genhtml_function_coverage=1 00:27:23.265 --rc genhtml_legend=1 00:27:23.265 --rc geninfo_all_blocks=1 00:27:23.265 --rc geninfo_unexecuted_blocks=1 00:27:23.265 00:27:23.265 ' 00:27:23.265 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:23.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:23.265 --rc genhtml_branch_coverage=1 00:27:23.265 --rc genhtml_function_coverage=1 00:27:23.265 --rc genhtml_legend=1 00:27:23.265 --rc geninfo_all_blocks=1 00:27:23.265 --rc geninfo_unexecuted_blocks=1 00:27:23.265 00:27:23.265 ' 00:27:23.265 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:23.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:23.265 --rc genhtml_branch_coverage=1 00:27:23.265 --rc genhtml_function_coverage=1 00:27:23.265 --rc genhtml_legend=1 00:27:23.265 --rc geninfo_all_blocks=1 00:27:23.265 --rc geninfo_unexecuted_blocks=1 00:27:23.265 00:27:23.265 ' 00:27:23.265 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:23.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:23.265 --rc genhtml_branch_coverage=1 00:27:23.265 --rc genhtml_function_coverage=1 00:27:23.265 --rc genhtml_legend=1 00:27:23.265 --rc geninfo_all_blocks=1 00:27:23.265 --rc geninfo_unexecuted_blocks=1 00:27:23.265 00:27:23.265 ' 00:27:23.265 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:23.265 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 
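The block just above is autotest_common.sh deciding which lcov flags to use: it reads the installed lcov version and runs it through cmp_versions ('lt 1.15 2') to check whether it predates lcov 2. A simplified sketch of that field-by-field comparison (the helper name lt and the loop are condensed from scripts/common.sh, not an exact copy):

  # Simplified version comparison in the spirit of scripts/common.sh cmp_versions
  lt() {
    local -a ver1 ver2; local i
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    for ((i = 0; i < ${#ver1[@]} || i < ${#ver2[@]}; i++)); do
      (( ${ver1[i]:-0} > ${ver2[i]:-0} )) && return 1     # component greater: not less-than
      (( ${ver1[i]:-0} < ${ver2[i]:-0} )) && return 0
    done
    return 1                                              # equal versions are not less-than
  }
  lt 1.15 2 && echo 'lcov < 2: enable the --rc lcov_branch_coverage=1 style options'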
00:27:23.265 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:23.265 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:23.265 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:23.265 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:23.265 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:23.265 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:23.265 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:23.265 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:23.265 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:23.265 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:23.265 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:23.265 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:23.265 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:23.265 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:23.265 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:23.265 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:23.265 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:23.265 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:27:23.265 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:23.265 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:23.265 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:23.265 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:23.265 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:23.265 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:23.265 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:27:23.265 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:23.265 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:27:23.265 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:23.265 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:23.265 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:23.265 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:23.265 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:23.265 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:23.265 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:23.265 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:23.265 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:23.265 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:23.265 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:27:23.265 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:27:23.265 09:27:28 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:25.169 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:25.169 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:27:25.169 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:25.169 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:25.169 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:25.169 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:25.169 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:25.169 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:27:25.169 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:25.169 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:27:25.169 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:27:25.169 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:27:25.169 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:27:25.169 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:27:25.169 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:27:25.169 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:25.169 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:25.169 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:25.169 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:25.169 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:25.169 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:25.169 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:25.169 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:25.169 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:25.169 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:25.169 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:25.169 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:25.169 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:25.169 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:25.169 09:27:30 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:25.169 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:25.169 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:25.169 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:25.169 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:25.169 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:25.169 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:25.169 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:25.169 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:25.169 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:25.169 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:25.169 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:25.169 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:25.169 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:25.169 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:25.169 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:25.169 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:25.169 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:25.169 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:25.169 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:25.169 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:25.169 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:25.169 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:25.169 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:25.169 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:25.169 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:25.169 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:25.169 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:25.169 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:25.169 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:25.169 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:25.169 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:25.169 09:27:30 
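Per PCI function, the loop above resolves the bound net device from sysfs and keeps it only when the link is up, which is where the 'Found net devices under 0000:0a:00.x: cvl_0_x' lines come from. A rough standalone approximation, trimmed to the E810 device ID seen in this run (the real gather_supported_nvmf_pci_devs also covers the x722 and Mellanox ID tables):

  # Approximate the E810 part of gather_supported_nvmf_pci_devs
  for pci in $(lspci -Dnd 8086:159b | awk '{print $1}'); do
    for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
      [[ -e $netdir ]] || continue                        # no netdev bound to this function
      dev=${netdir##*/}                                   # e.g. cvl_0_0
      [[ $(cat /sys/class/net/"$dev"/operstate) == up ]] &&
        echo "Found net device under $pci: $dev"
    done
  done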
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:25.169 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:25.169 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:25.169 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:25.169 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:25.169 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:25.169 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:25.169 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:25.169 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:25.169 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:25.169 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:25.169 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:25.169 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:25.169 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:27:25.169 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:27:25.169 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:27:25.169 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:27:25.169 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:27:26.104 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:27:28.638 09:27:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:27:33.916 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:27:33.916 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:33.916 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:33.916 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:33.916 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:33.916 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:33.916 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:33.916 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:33.916 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:33.916 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:33.916 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:27:33.916 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:27:33.916 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:33.916 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:33.916 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:27:33.916 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:33.916 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:33.916 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:33.916 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:33.916 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:33.916 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:27:33.916 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:33.916 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:27:33.916 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:27:33.916 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:27:33.916 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:27:33.916 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:27:33.916 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:27:33.916 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:33.916 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:33.916 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:33.916 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:33.916 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:33.916 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:33.916 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:33.916 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:33.916 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:33.916 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:33.916 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:33.916 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:33.916 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:27:33.916 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:33.916 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:33.916 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:33.917 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:33.917 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:33.917 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:33.917 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:33.917 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:33.917 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:33.917 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:33.917 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:33.917 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:33.917 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:33.917 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:33.917 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:33.917 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:33.917 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:33.917 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:33.917 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:33.917 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:33.917 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:33.917 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:33.917 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:33.917 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:33.917 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:33.917 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:33.917 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:33.917 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:33.917 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:33.917 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:33.917 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:33.917 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:33.917 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:33.917 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:33.917 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:33.917 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:33.917 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:33.917 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:33.917 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:33.917 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:33.917 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:33.917 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:33.917 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:33.917 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:33.917 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:33.917 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:27:33.917 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:33.917 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:33.917 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:33.917 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:33.917 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:33.917 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:33.917 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:33.917 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:33.917 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:33.917 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:33.917 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:33.917 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:33.917 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:33.917 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:33.917 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:33.917 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:33.917 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:33.917 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:33.917 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:33.917 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:33.917 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:33.917 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:33.917 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:33.917 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:33.917 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:33.917 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:33.917 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:33.917 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.265 ms 00:27:33.917 00:27:33.917 --- 10.0.0.2 ping statistics --- 00:27:33.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:33.917 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:27:33.917 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:33.917 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:33.917 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.088 ms 00:27:33.917 00:27:33.917 --- 10.0.0.1 ping statistics --- 00:27:33.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:33.917 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:27:33.917 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:33.917 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:27:33.917 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:33.917 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:33.917 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:33.917 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:33.917 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:33.917 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:33.917 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:33.917 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:33.917 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:33.917 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:33.917 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:33.917 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=3046228 00:27:33.917 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:33.917 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 3046228 00:27:33.917 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 3046228 ']' 00:27:33.917 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:33.917 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:33.917 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:33.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:33.917 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:33.917 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:33.917 [2024-11-17 09:27:38.459103] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
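nvmfappstart has just done the launch traced here: nvmf_tgt is started inside the target namespace with a 4-core mask and --wait-for-rpc, so the socket options below can be applied before the framework initializes, and the harness then polls the JSON-RPC socket until the app answers. A minimal sketch, assuming the default /var/tmp/spdk.sock RPC path and using $SPDK for the workspace checkout:

  # Minimal sketch of nvmfappstart -m 0xF --wait-for-rpc
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
  nvmfpid=$!                                              # tracked for the later killprocess
  # waitforlisten equivalent: retry until the RPC socket accepts a request
  until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock -t 1 rpc_get_methods &>/dev/null; do
    sleep 0.5
  done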
00:27:33.917 [2024-11-17 09:27:38.459245] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:33.917 [2024-11-17 09:27:38.606309] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:33.917 [2024-11-17 09:27:38.746408] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:33.917 [2024-11-17 09:27:38.746497] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:33.917 [2024-11-17 09:27:38.746523] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:33.917 [2024-11-17 09:27:38.746547] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:33.918 [2024-11-17 09:27:38.746568] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:33.918 [2024-11-17 09:27:38.749576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:33.918 [2024-11-17 09:27:38.749650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:33.918 [2024-11-17 09:27:38.749743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:33.918 [2024-11-17 09:27:38.749753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:34.483 09:27:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:34.483 09:27:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:27:34.483 09:27:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:34.483 09:27:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:34.483 09:27:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:34.483 09:27:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:34.483 09:27:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:27:34.483 09:27:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:27:34.483 09:27:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:27:34.483 09:27:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.483 09:27:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:34.483 09:27:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.741 09:27:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:27:34.741 09:27:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:27:34.741 09:27:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.741 09:27:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:34.741 09:27:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.741 
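The RPC that just returned is the ADQ-relevant part of adq_configure_nvmf_target: before framework_start_init, the posix sock implementation gets its placement-id mode (0 in this first pass) and zero-copy server sends, and the TCP transport is then created with an explicit socket priority. Issued by hand the same calls look like this, where rpc.py stands for $SPDK/scripts/rpc.py and the remaining subsystem RPCs follow in the trace below:

  # Socket options must be set before framework_start_init (commands as traced)
  rpc.py sock_impl_set_options -i posix --enable-placement-id 0 --enable-zerocopy-send-server
  rpc.py framework_start_init
  rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0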
09:27:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:27:34.741 09:27:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.741 09:27:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:34.999 09:27:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.999 09:27:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:27:34.999 09:27:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.999 09:27:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:34.999 [2024-11-17 09:27:39.866002] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:34.999 09:27:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.999 09:27:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:34.999 09:27:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.999 09:27:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:34.999 Malloc1 00:27:34.999 09:27:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.999 09:27:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:34.999 09:27:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.999 09:27:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:34.999 09:27:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.999 09:27:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:34.999 09:27:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.000 09:27:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:35.000 09:27:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.000 09:27:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:35.000 09:27:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.000 09:27:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:35.000 [2024-11-17 09:27:39.981592] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:35.000 09:27:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.000 09:27:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=3046401 00:27:35.000 09:27:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:35.000 09:27:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:27:37.531 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:27:37.532 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.532 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:37.532 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.532 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:27:37.532 "tick_rate": 2700000000, 00:27:37.532 "poll_groups": [ 00:27:37.532 { 00:27:37.532 "name": "nvmf_tgt_poll_group_000", 00:27:37.532 "admin_qpairs": 1, 00:27:37.532 "io_qpairs": 1, 00:27:37.532 "current_admin_qpairs": 1, 00:27:37.532 "current_io_qpairs": 1, 00:27:37.532 "pending_bdev_io": 0, 00:27:37.532 "completed_nvme_io": 16482, 00:27:37.532 "transports": [ 00:27:37.532 { 00:27:37.532 "trtype": "TCP" 00:27:37.532 } 00:27:37.532 ] 00:27:37.532 }, 00:27:37.532 { 00:27:37.532 "name": "nvmf_tgt_poll_group_001", 00:27:37.532 "admin_qpairs": 0, 00:27:37.532 "io_qpairs": 1, 00:27:37.532 "current_admin_qpairs": 0, 00:27:37.532 "current_io_qpairs": 1, 00:27:37.532 "pending_bdev_io": 0, 00:27:37.532 "completed_nvme_io": 17046, 00:27:37.532 "transports": [ 00:27:37.532 { 00:27:37.532 "trtype": "TCP" 00:27:37.532 } 00:27:37.532 ] 00:27:37.532 }, 00:27:37.532 { 00:27:37.532 "name": "nvmf_tgt_poll_group_002", 00:27:37.532 "admin_qpairs": 0, 00:27:37.532 "io_qpairs": 1, 00:27:37.532 "current_admin_qpairs": 0, 00:27:37.532 "current_io_qpairs": 1, 00:27:37.532 "pending_bdev_io": 0, 00:27:37.532 "completed_nvme_io": 16340, 00:27:37.532 "transports": [ 00:27:37.532 { 00:27:37.532 "trtype": "TCP" 00:27:37.532 } 00:27:37.532 ] 00:27:37.532 }, 00:27:37.532 { 00:27:37.532 "name": "nvmf_tgt_poll_group_003", 00:27:37.532 "admin_qpairs": 0, 00:27:37.532 "io_qpairs": 1, 00:27:37.532 "current_admin_qpairs": 0, 00:27:37.532 "current_io_qpairs": 1, 00:27:37.532 "pending_bdev_io": 0, 00:27:37.532 "completed_nvme_io": 16956, 00:27:37.532 "transports": [ 00:27:37.532 { 00:27:37.532 "trtype": "TCP" 00:27:37.532 } 00:27:37.532 ] 00:27:37.532 } 00:27:37.532 ] 00:27:37.532 }' 00:27:37.532 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:27:37.532 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:27:37.532 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:27:37.532 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:27:37.532 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 3046401 00:27:45.711 Initializing NVMe Controllers 00:27:45.711 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:45.711 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:27:45.711 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:27:45.711 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:27:45.711 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:27:45.711 
Initialization complete. Launching workers. 00:27:45.711 ======================================================== 00:27:45.711 Latency(us) 00:27:45.711 Device Information : IOPS MiB/s Average min max 00:27:45.711 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 8832.70 34.50 7249.23 3236.46 11882.84 00:27:45.711 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 9140.50 35.71 7002.36 2978.23 11807.27 00:27:45.711 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 8751.50 34.19 7315.13 3138.31 11829.44 00:27:45.711 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 8989.20 35.11 7122.62 2897.19 12802.80 00:27:45.711 ======================================================== 00:27:45.711 Total : 35713.89 139.51 7170.33 2897.19 12802.80 00:27:45.711 00:27:45.711 09:27:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:27:45.711 09:27:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:45.711 09:27:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:27:45.711 09:27:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:45.711 09:27:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:27:45.711 09:27:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:45.711 09:27:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:45.711 rmmod nvme_tcp 00:27:45.711 rmmod nvme_fabrics 00:27:45.711 rmmod nvme_keyring 00:27:45.711 09:27:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:45.711 09:27:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:27:45.711 09:27:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:27:45.711 09:27:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 3046228 ']' 00:27:45.711 09:27:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 3046228 00:27:45.711 09:27:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 3046228 ']' 00:27:45.711 09:27:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 3046228 00:27:45.711 09:27:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:27:45.711 09:27:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:45.711 09:27:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3046228 00:27:45.711 09:27:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:45.711 09:27:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:45.711 09:27:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3046228' 00:27:45.711 killing process with pid 3046228 00:27:45.711 09:27:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 3046228 00:27:45.711 09:27:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 3046228 00:27:46.647 09:27:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 
-- # '[' '' == iso ']' 00:27:46.647 09:27:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:46.647 09:27:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:46.647 09:27:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:27:46.647 09:27:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:27:46.647 09:27:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:46.647 09:27:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:27:46.647 09:27:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:46.647 09:27:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:46.647 09:27:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:46.647 09:27:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:46.647 09:27:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:49.179 09:27:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:49.179 09:27:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:27:49.179 09:27:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:27:49.179 09:27:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:27:49.437 09:27:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:27:51.968 09:27:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:27:57.235 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:27:57.235 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:57.235 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:57.235 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:57.235 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:57.235 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:57.235 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:57.235 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:57.235 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:57.235 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:57.235 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:57.235 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:27:57.235 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:57.235 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci 
net_dev 00:27:57.235 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:27:57.235 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:57.235 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:57.235 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:57.235 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:57.235 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:57.235 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:27:57.235 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:57.235 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:27:57.235 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:27:57.235 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:27:57.235 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:27:57.235 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:27:57.235 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:27:57.235 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:57.235 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:57.235 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:57.235 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:57.235 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:57.235 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:57.235 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:57.235 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:57.235 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:57.235 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:57.235 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:57.235 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:57.235 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:57.235 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:57.235 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:57.235 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:57.235 09:28:01 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:57.235 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:57.235 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:57.235 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:57.235 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:57.235 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:57.235 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:57.235 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:57.235 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:57.235 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:57.235 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:57.235 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:57.235 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:57.235 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:57.235 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:57.235 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:57.235 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:57.235 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:57.235 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:57.235 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:57.235 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:57.235 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:57.235 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:57.235 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:57.235 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:57.235 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:57.235 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:57.235 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:57.235 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:57.235 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:57.235 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:57.235 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:57.235 09:28:01 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:57.235 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:57.235 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:57.235 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:57.235 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:57.235 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:57.235 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:57.235 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:57.235 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:57.235 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:57.235 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:27:57.235 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:57.235 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:57.236 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:57.236 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:57.236 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:57.236 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:57.236 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:57.236 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:57.236 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:57.236 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:57.236 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:57.236 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:57.236 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:57.236 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:57.236 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:57.236 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:57.236 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:57.236 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:57.236 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:57.236 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:57.236 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:57.236 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:57.236 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:57.236 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:57.236 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:57.236 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:57.236 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:57.236 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.146 ms 00:27:57.236 00:27:57.236 --- 10.0.0.2 ping statistics --- 00:27:57.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:57.236 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:27:57.236 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:57.236 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:57.236 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:27:57.236 00:27:57.236 --- 10.0.0.1 ping statistics --- 00:27:57.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:57.236 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:27:57.236 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:57.236 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:27:57.236 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:57.236 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:57.236 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:57.236 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:57.236 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:57.236 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:57.236 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:57.236 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:27:57.236 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:27:57.236 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:27:57.236 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:27:57.236 net.core.busy_poll = 1 00:27:57.236 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 
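A note on the setup the harness has just performed: the target port is moved into its own network namespace while the second ice-driven port stays in the root namespace as the initiator side, and the ADQ prep (hardware TC offload, busy polling) is applied to the target port. The following is a minimal sketch that consolidates those commands, assuming the same interface names (cvl_0_0 / cvl_0_1), namespace (cvl_0_0_ns_spdk), 10.0.0.x addressing and port 4420 as this run; the mqprio/flower steering of the 4420 traffic into a dedicated hardware traffic class appears a few lines further down in the log.

# Sketch only -- same values as this run; run as root with the ice driver already loaded.
ip netns add cvl_0_0_ns_spdk                          # target side gets its own namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # the harness additionally tags this rule with an SPDK_NVMF comment
ping -c 1 10.0.0.2                                    # reachability check in both directions
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
# ADQ prep on the target port, as shown above:
ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
sysctl -w net.core.busy_poll=1
sysctl -w net.core.busy_read=1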
00:27:57.236 net.core.busy_read = 1 00:27:57.236 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:27:57.236 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:27:57.236 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:27:57.236 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:27:57.236 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:27:57.236 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:57.236 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:57.236 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:57.236 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:57.236 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=3049151 00:27:57.236 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:57.236 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 3049151 00:27:57.236 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 3049151 ']' 00:27:57.236 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:57.236 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:57.236 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:57.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:57.236 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:57.236 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:57.236 [2024-11-17 09:28:02.058866] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:27:57.236 [2024-11-17 09:28:02.059013] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:57.236 [2024-11-17 09:28:02.203952] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:57.494 [2024-11-17 09:28:02.342517] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:27:57.494 [2024-11-17 09:28:02.342608] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:57.494 [2024-11-17 09:28:02.342634] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:57.494 [2024-11-17 09:28:02.342658] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:57.494 [2024-11-17 09:28:02.342678] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:57.494 [2024-11-17 09:28:02.345776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:57.494 [2024-11-17 09:28:02.345850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:57.494 [2024-11-17 09:28:02.345943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:57.494 [2024-11-17 09:28:02.345949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:58.427 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:58.427 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:27:58.427 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:58.427 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:58.427 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:58.427 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:58.427 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:27:58.427 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:27:58.427 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:27:58.427 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.427 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:58.427 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.427 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:27:58.427 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:27:58.427 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.427 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:58.427 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.427 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:27:58.427 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.427 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:58.685 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.685 09:28:03 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:27:58.685 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.685 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:58.685 [2024-11-17 09:28:03.499913] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:58.685 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.685 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:58.685 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.685 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:58.685 Malloc1 00:27:58.685 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.685 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:58.685 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.685 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:58.685 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.685 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:58.685 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.685 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:58.685 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.685 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:58.685 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.685 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:58.685 [2024-11-17 09:28:03.618320] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:58.685 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.685 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=3049426 00:27:58.685 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:27:58.685 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:01.217 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:28:01.217 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.217 09:28:05 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:01.217 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.217 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:28:01.217 "tick_rate": 2700000000, 00:28:01.217 "poll_groups": [ 00:28:01.217 { 00:28:01.217 "name": "nvmf_tgt_poll_group_000", 00:28:01.217 "admin_qpairs": 1, 00:28:01.217 "io_qpairs": 2, 00:28:01.217 "current_admin_qpairs": 1, 00:28:01.217 "current_io_qpairs": 2, 00:28:01.217 "pending_bdev_io": 0, 00:28:01.217 "completed_nvme_io": 18787, 00:28:01.217 "transports": [ 00:28:01.217 { 00:28:01.217 "trtype": "TCP" 00:28:01.217 } 00:28:01.217 ] 00:28:01.217 }, 00:28:01.217 { 00:28:01.217 "name": "nvmf_tgt_poll_group_001", 00:28:01.217 "admin_qpairs": 0, 00:28:01.217 "io_qpairs": 2, 00:28:01.217 "current_admin_qpairs": 0, 00:28:01.217 "current_io_qpairs": 2, 00:28:01.217 "pending_bdev_io": 0, 00:28:01.217 "completed_nvme_io": 18415, 00:28:01.217 "transports": [ 00:28:01.217 { 00:28:01.217 "trtype": "TCP" 00:28:01.217 } 00:28:01.217 ] 00:28:01.217 }, 00:28:01.217 { 00:28:01.217 "name": "nvmf_tgt_poll_group_002", 00:28:01.217 "admin_qpairs": 0, 00:28:01.217 "io_qpairs": 0, 00:28:01.217 "current_admin_qpairs": 0, 00:28:01.217 "current_io_qpairs": 0, 00:28:01.217 "pending_bdev_io": 0, 00:28:01.217 "completed_nvme_io": 0, 00:28:01.217 "transports": [ 00:28:01.217 { 00:28:01.217 "trtype": "TCP" 00:28:01.217 } 00:28:01.217 ] 00:28:01.217 }, 00:28:01.217 { 00:28:01.217 "name": "nvmf_tgt_poll_group_003", 00:28:01.217 "admin_qpairs": 0, 00:28:01.217 "io_qpairs": 0, 00:28:01.217 "current_admin_qpairs": 0, 00:28:01.217 "current_io_qpairs": 0, 00:28:01.217 "pending_bdev_io": 0, 00:28:01.217 "completed_nvme_io": 0, 00:28:01.217 "transports": [ 00:28:01.217 { 00:28:01.217 "trtype": "TCP" 00:28:01.217 } 00:28:01.217 ] 00:28:01.217 } 00:28:01.217 ] 00:28:01.217 }' 00:28:01.217 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:28:01.217 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:28:01.217 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:28:01.217 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:28:01.217 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 3049426 00:28:09.332 Initializing NVMe Controllers 00:28:09.332 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:09.332 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:28:09.332 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:28:09.332 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:28:09.332 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:28:09.332 Initialization complete. Launching workers. 
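The count=2 check above is how the test confirms that ADQ steering actually concentrated the I/O: with port-4420 traffic pinned to a two-queue traffic class, the four I/O connections land on two poll groups and the other two stay idle. A hedged sketch of the same query issued through SPDK's standalone rpc.py client follows; the jq filter is the one used above, while the default /var/tmp/spdk.sock socket is an assumption here (the harness goes through its own rpc_cmd wrapper inside the target namespace).

# Count poll groups that currently have no active I/O qpairs.
idle=$(scripts/rpc.py nvmf_get_stats \
        | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' \
        | wc -l)
echo "idle poll groups: $idle"        # this run reports 2, so the '[[ count -lt 2 ]]' guard does not trip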
00:28:09.332 ======================================================== 00:28:09.332 Latency(us) 00:28:09.332 Device Information : IOPS MiB/s Average min max 00:28:09.332 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 4805.80 18.77 13321.06 2830.11 58203.95 00:28:09.332 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 5460.70 21.33 11762.57 2269.31 57552.20 00:28:09.332 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5865.30 22.91 10915.43 2295.22 56350.47 00:28:09.332 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5145.00 20.10 12440.98 2142.65 57831.72 00:28:09.332 ======================================================== 00:28:09.333 Total : 21276.80 83.11 12045.11 2142.65 58203.95 00:28:09.333 00:28:09.333 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:28:09.333 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:09.333 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:28:09.333 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:09.333 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:28:09.333 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:09.333 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:09.333 rmmod nvme_tcp 00:28:09.333 rmmod nvme_fabrics 00:28:09.333 rmmod nvme_keyring 00:28:09.333 09:28:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:09.333 09:28:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:28:09.333 09:28:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:28:09.333 09:28:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 3049151 ']' 00:28:09.333 09:28:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 3049151 00:28:09.333 09:28:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 3049151 ']' 00:28:09.333 09:28:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 3049151 00:28:09.333 09:28:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:28:09.333 09:28:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:09.333 09:28:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3049151 00:28:09.333 09:28:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:09.333 09:28:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:09.333 09:28:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3049151' 00:28:09.333 killing process with pid 3049151 00:28:09.333 09:28:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 3049151 00:28:09.333 09:28:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 3049151 00:28:10.708 09:28:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:10.708 
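For reference, the target-side bring-up that produced the run above can be replayed by hand against a manually started nvmf_tgt. This is a sketch using SPDK's scripts/rpc.py with the same RPC names, sizes and listener as this run; it assumes the default /var/tmp/spdk.sock RPC socket, whereas the harness issues every RPC inside the cvl_0_0_ns_spdk namespace through its rpc_cmd helper.

# Target started as in the log: build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
scripts/rpc.py sock_impl_set_options -i posix --enable-placement-id 1 --enable-zerocopy-send-server
scripts/rpc.py framework_start_init
scripts/rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The load in this run came from spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'; the 0xF0 core mask places the perf workers on cores 4-7, disjoint from the target's -m 0xF reactors on cores 0-3.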
09:28:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:10.708 09:28:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:10.708 09:28:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:28:10.708 09:28:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:28:10.708 09:28:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:10.708 09:28:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:28:10.708 09:28:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:10.708 09:28:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:10.708 09:28:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:10.708 09:28:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:10.708 09:28:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:12.721 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:12.721 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:28:12.721 00:28:12.721 real 0m49.473s 00:28:12.721 user 2m55.355s 00:28:12.721 sys 0m9.474s 00:28:12.721 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:12.721 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:12.721 ************************************ 00:28:12.721 END TEST nvmf_perf_adq 00:28:12.721 ************************************ 00:28:12.721 09:28:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:28:12.721 09:28:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:12.721 09:28:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:12.721 09:28:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:12.721 ************************************ 00:28:12.721 START TEST nvmf_shutdown 00:28:12.721 ************************************ 00:28:12.721 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:28:12.721 * Looking for test storage... 
00:28:12.721 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:12.721 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:12.721 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:28:12.721 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:12.721 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:12.721 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:12.721 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:12.721 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:12.721 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:28:12.721 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:28:12.721 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:28:12.721 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:28:12.721 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:28:12.721 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:28:12.721 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:28:12.721 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:12.721 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:28:12.721 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:28:12.721 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:12.721 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:12.721 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:28:12.721 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:28:12.721 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:12.721 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:28:12.721 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:28:12.721 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:28:12.721 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:28:12.721 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:12.721 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:28:12.721 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:28:12.721 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:12.721 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:12.721 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:28:12.721 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:12.721 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:12.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:12.721 --rc genhtml_branch_coverage=1 00:28:12.721 --rc genhtml_function_coverage=1 00:28:12.721 --rc genhtml_legend=1 00:28:12.721 --rc geninfo_all_blocks=1 00:28:12.721 --rc geninfo_unexecuted_blocks=1 00:28:12.721 00:28:12.721 ' 00:28:12.721 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:12.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:12.721 --rc genhtml_branch_coverage=1 00:28:12.721 --rc genhtml_function_coverage=1 00:28:12.721 --rc genhtml_legend=1 00:28:12.721 --rc geninfo_all_blocks=1 00:28:12.721 --rc geninfo_unexecuted_blocks=1 00:28:12.721 00:28:12.721 ' 00:28:12.721 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:12.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:12.721 --rc genhtml_branch_coverage=1 00:28:12.721 --rc genhtml_function_coverage=1 00:28:12.721 --rc genhtml_legend=1 00:28:12.721 --rc geninfo_all_blocks=1 00:28:12.721 --rc geninfo_unexecuted_blocks=1 00:28:12.721 00:28:12.721 ' 00:28:12.721 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:12.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:12.721 --rc genhtml_branch_coverage=1 00:28:12.721 --rc genhtml_function_coverage=1 00:28:12.721 --rc genhtml_legend=1 00:28:12.721 --rc geninfo_all_blocks=1 00:28:12.721 --rc geninfo_unexecuted_blocks=1 00:28:12.721 00:28:12.721 ' 00:28:12.721 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:12.721 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
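The lt/cmp_versions trace above is scripts/common.sh deciding whether the installed lcov (1.15 in this run) is older than 2, which picks the lcov_branch_coverage/lcov_function_coverage flag spelling exported just below. As an illustration of the comparison being performed (not a copy of the library function, which handles more operators), a field-by-field less-than check looks like this:

# Illustration of the dotted-version compare driving the lcov flag selection above.
version_lt() {
    local -a a b
    IFS='.-:' read -ra a <<< "$1"        # split on the same separators common.sh uses
    IFS='.-:' read -ra b <<< "$2"
    local i max=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < max; i++ )); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # first differing field decides
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1                                        # equal versions are not less-than
}
version_lt 1.15 2 && echo "lcov predates 2.x"       # true for this run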
00:28:12.721 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:12.721 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:12.721 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:12.721 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:12.721 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:12.721 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:12.721 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:12.721 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:12.721 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:12.721 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:12.721 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:12.721 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:12.721 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:12.721 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:12.721 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:12.721 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:12.721 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:12.721 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:28:12.721 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:12.721 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:12.721 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:12.721 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:12.721 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:12.721 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:12.721 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:28:12.721 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:12.721 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:28:12.721 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:12.721 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:12.721 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:12.722 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:12.722 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:12.722 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:12.722 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:12.722 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:12.722 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:12.722 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:12.722 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:28:12.722 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:12.722 09:28:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:28:12.722 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:12.722 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:12.722 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:12.722 ************************************ 00:28:12.722 START TEST nvmf_shutdown_tc1 00:28:12.722 ************************************ 00:28:12.722 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:28:12.722 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:28:12.722 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:12.722 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:12.722 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:12.722 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:12.722 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:12.722 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:12.722 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:12.722 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:12.722 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:12.722 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:12.722 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:12.722 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:12.722 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:14.624 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:14.624 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:14.624 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:14.624 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:14.624 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:14.624 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:14.624 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:14.624 09:28:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:28:14.624 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:14.624 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:28:14.624 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:28:14.624 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:28:14.624 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:28:14.624 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:28:14.624 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:14.624 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:14.624 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:14.624 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:14.624 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:14.624 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:14.624 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:14.624 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:14.624 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:14.624 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:14.624 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:14.624 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:14.624 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:14.624 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:14.624 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:14.625 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:14.625 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:14.625 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:14.625 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:14.625 09:28:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:14.625 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:14.625 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:14.625 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:14.625 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:14.625 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:14.625 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:14.625 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:14.625 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:14.625 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:14.625 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:14.625 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:14.625 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:14.625 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:14.625 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:14.625 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:14.625 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:14.625 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:14.625 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:14.625 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:14.625 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:14.625 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:14.625 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:14.625 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:14.625 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:14.625 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:14.625 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:14.625 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:14.625 09:28:19 
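The trace above (gather_supported_nvmf_pci_devs) classifies candidate NICs purely by PCI vendor/device ID: Intel E810 (0x1592/0x159b), X722 (0x37d2) and several Mellanox ConnectX parts, and on this host it finds two E810 ports bound to the ice driver. A standalone sketch of the same discovery step using lspci instead of the test's pci_bus_cache (the device IDs are taken from this run; the sysfs paths are standard):
  # List Intel E810 functions and the driver/netdev each one is bound to
  for bdf in $(lspci -Dnn | awk '/8086:(1592|159b)/ {print $1}'); do
      drv=$(readlink "/sys/bus/pci/devices/$bdf/driver" 2>/dev/null); drv=${drv##*/}
      net=$(ls "/sys/bus/pci/devices/$bdf/net" 2>/dev/null)
      echo "$bdf driver=${drv:-none} netdev=${net:-unbound}"
  done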
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:14.625 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:14.625 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:14.625 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:14.625 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:14.625 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:14.625 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:14.625 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:14.625 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:14.625 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:14.625 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:14.625 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:14.625 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:28:14.625 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:14.625 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:14.625 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:14.625 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:14.625 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:14.625 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:14.625 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:14.625 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:14.625 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:14.625 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:14.625 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:14.625 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:14.625 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:14.625 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:28:14.625 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:14.625 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:14.625 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:14.625 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:14.884 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:14.884 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:14.884 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:14.884 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:14.884 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:14.884 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:14.884 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:14.884 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:14.884 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:14.884 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:28:14.884 00:28:14.884 --- 10.0.0.2 ping statistics --- 00:28:14.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:14.884 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:28:14.884 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:14.884 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:14.884 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:28:14.884 00:28:14.884 --- 10.0.0.1 ping statistics --- 00:28:14.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:14.884 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:28:14.884 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:14.884 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:28:14.884 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:14.884 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:14.884 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:14.884 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:14.884 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:14.884 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:14.884 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:14.884 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:14.884 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:14.884 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:14.884 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:14.884 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=3052726 00:28:14.884 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:14.884 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 3052726 00:28:14.884 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 3052726 ']' 00:28:14.884 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:14.884 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:14.884 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:14.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
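At this point nvmf_tcp_init has moved one E810 port (cvl_0_0) into a private network namespace for the target side, addressed both ends, opened the NVMe/TCP port in iptables, and verified reachability in both directions. Condensed, the bring-up logged above amounts to the following (namespace, interface names and addresses are the ones used in this run):
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side stays in the host namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # tagged with an SPDK_NVMF comment in the log
  ping -c 1 10.0.0.2                                     # host -> namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # namespace -> host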
00:28:14.884 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:14.884 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:14.884 [2024-11-17 09:28:19.828491] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:28:14.884 [2024-11-17 09:28:19.828646] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:15.142 [2024-11-17 09:28:19.979833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:15.142 [2024-11-17 09:28:20.131945] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:15.142 [2024-11-17 09:28:20.132044] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:15.142 [2024-11-17 09:28:20.132070] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:15.142 [2024-11-17 09:28:20.132093] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:15.142 [2024-11-17 09:28:20.132113] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:15.142 [2024-11-17 09:28:20.134945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:15.142 [2024-11-17 09:28:20.135044] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:15.142 [2024-11-17 09:28:20.135108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:15.142 [2024-11-17 09:28:20.135114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:16.075 09:28:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:16.076 09:28:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:28:16.076 09:28:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:16.076 09:28:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:16.076 09:28:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:16.076 09:28:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:16.076 09:28:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:16.076 09:28:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.076 09:28:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:16.076 [2024-11-17 09:28:20.818192] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:16.076 09:28:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.076 09:28:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:16.076 09:28:20 
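nvmfappstart then launches the target inside that namespace and shutdown.sh@21 creates the TCP transport over RPC. A minimal re-creation of those two steps, with the workspace path shortened and rpc.py standing in for the test's rpc_cmd wrapper (an assumption: rpc_cmd additionally retries and picks the right -s socket):
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
  nvmfpid=$!
  ./scripts/rpc.py framework_wait_init                       # block until app init completes
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192   # same options as NVMF_TRANSPORT_OPTS above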
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:16.076 09:28:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:16.076 09:28:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:16.076 09:28:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:16.076 09:28:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:16.076 09:28:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:16.076 09:28:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:16.076 09:28:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:16.076 09:28:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:16.076 09:28:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:16.076 09:28:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:16.076 09:28:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:16.076 09:28:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:16.076 09:28:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:16.076 09:28:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:16.076 09:28:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:16.076 09:28:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:16.076 09:28:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:16.076 09:28:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:16.076 09:28:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:16.076 09:28:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:16.076 09:28:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:16.076 09:28:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:16.076 09:28:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:16.076 09:28:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:16.076 09:28:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.076 09:28:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:16.076 Malloc1 
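The loop at shutdown.sh@28-29 appends one block of RPC calls per subsystem to rpcs.txt, and shutdown.sh@36 replays the whole file in a single rpc_cmd invocation, which is why the ten Malloc bdevs appear back to back below. The heredoc body itself is not echoed in the log; the following is a hypothetical sketch of the kind of block it builds for subsystem $i, written as an echo group instead of a heredoc (the RPC names are real SPDK RPCs and the NQN/listener values match the rendered config further down, but the bdev size and serial number are illustrative):
  i=1
  {
      echo "bdev_malloc_create -b Malloc$i 64 512"
      echo "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i"
      echo "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i"
      echo "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420"
  } >> rpcs.txt
  ./scripts/rpc.py < rpcs.txt    # rpc.py accepts one method call per line on stdin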
00:28:16.076 [2024-11-17 09:28:20.967036] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:16.076 Malloc2 00:28:16.333 Malloc3 00:28:16.333 Malloc4 00:28:16.591 Malloc5 00:28:16.591 Malloc6 00:28:16.591 Malloc7 00:28:16.849 Malloc8 00:28:16.849 Malloc9 00:28:16.849 Malloc10 00:28:17.107 09:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.107 09:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:17.107 09:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:17.107 09:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:17.107 09:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=3053031 00:28:17.107 09:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 3053031 /var/tmp/bdevperf.sock 00:28:17.107 09:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 3053031 ']' 00:28:17.107 09:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:17.107 09:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:28:17.107 09:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:17.107 09:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:17.107 09:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:28:17.107 09:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:17.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
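For the initiator side, shutdown.sh@78 starts a bdev_svc application on a second RPC socket and hands it a JSON config generated on the fly by gen_nvmf_target_json (the rendered result is printed a few lines below). The wiring is plain bash process substitution; with the workspace path shortened, the logged invocation is essentially:
  ./test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock \
      --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) &
  perfpid=$!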
00:28:17.107 09:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:28:17.107 09:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:17.107 09:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:17.107 09:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:17.107 09:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:17.107 { 00:28:17.107 "params": { 00:28:17.107 "name": "Nvme$subsystem", 00:28:17.107 "trtype": "$TEST_TRANSPORT", 00:28:17.107 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:17.107 "adrfam": "ipv4", 00:28:17.107 "trsvcid": "$NVMF_PORT", 00:28:17.107 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:17.107 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:17.107 "hdgst": ${hdgst:-false}, 00:28:17.107 "ddgst": ${ddgst:-false} 00:28:17.107 }, 00:28:17.107 "method": "bdev_nvme_attach_controller" 00:28:17.107 } 00:28:17.107 EOF 00:28:17.107 )") 00:28:17.107 09:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:17.107 09:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:17.107 09:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:17.107 { 00:28:17.107 "params": { 00:28:17.107 "name": "Nvme$subsystem", 00:28:17.107 "trtype": "$TEST_TRANSPORT", 00:28:17.107 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:17.107 "adrfam": "ipv4", 00:28:17.107 "trsvcid": "$NVMF_PORT", 00:28:17.107 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:17.107 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:17.107 "hdgst": ${hdgst:-false}, 00:28:17.108 "ddgst": ${ddgst:-false} 00:28:17.108 }, 00:28:17.108 "method": "bdev_nvme_attach_controller" 00:28:17.108 } 00:28:17.108 EOF 00:28:17.108 )") 00:28:17.108 09:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:17.108 09:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:17.108 09:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:17.108 { 00:28:17.108 "params": { 00:28:17.108 "name": "Nvme$subsystem", 00:28:17.108 "trtype": "$TEST_TRANSPORT", 00:28:17.108 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:17.108 "adrfam": "ipv4", 00:28:17.108 "trsvcid": "$NVMF_PORT", 00:28:17.108 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:17.108 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:17.108 "hdgst": ${hdgst:-false}, 00:28:17.108 "ddgst": ${ddgst:-false} 00:28:17.108 }, 00:28:17.108 "method": "bdev_nvme_attach_controller" 00:28:17.108 } 00:28:17.108 EOF 00:28:17.108 )") 00:28:17.108 09:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:17.108 09:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:17.108 09:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:17.108 { 00:28:17.108 "params": { 00:28:17.108 "name": "Nvme$subsystem", 00:28:17.108 
"trtype": "$TEST_TRANSPORT", 00:28:17.108 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:17.108 "adrfam": "ipv4", 00:28:17.108 "trsvcid": "$NVMF_PORT", 00:28:17.108 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:17.108 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:17.108 "hdgst": ${hdgst:-false}, 00:28:17.108 "ddgst": ${ddgst:-false} 00:28:17.108 }, 00:28:17.108 "method": "bdev_nvme_attach_controller" 00:28:17.108 } 00:28:17.108 EOF 00:28:17.108 )") 00:28:17.108 09:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:17.108 09:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:17.108 09:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:17.108 { 00:28:17.108 "params": { 00:28:17.108 "name": "Nvme$subsystem", 00:28:17.108 "trtype": "$TEST_TRANSPORT", 00:28:17.108 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:17.108 "adrfam": "ipv4", 00:28:17.108 "trsvcid": "$NVMF_PORT", 00:28:17.108 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:17.108 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:17.108 "hdgst": ${hdgst:-false}, 00:28:17.108 "ddgst": ${ddgst:-false} 00:28:17.108 }, 00:28:17.108 "method": "bdev_nvme_attach_controller" 00:28:17.108 } 00:28:17.108 EOF 00:28:17.108 )") 00:28:17.108 09:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:17.108 09:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:17.108 09:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:17.108 { 00:28:17.108 "params": { 00:28:17.108 "name": "Nvme$subsystem", 00:28:17.108 "trtype": "$TEST_TRANSPORT", 00:28:17.108 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:17.108 "adrfam": "ipv4", 00:28:17.108 "trsvcid": "$NVMF_PORT", 00:28:17.108 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:17.108 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:17.108 "hdgst": ${hdgst:-false}, 00:28:17.108 "ddgst": ${ddgst:-false} 00:28:17.108 }, 00:28:17.108 "method": "bdev_nvme_attach_controller" 00:28:17.108 } 00:28:17.108 EOF 00:28:17.108 )") 00:28:17.108 09:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:17.108 09:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:17.108 09:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:17.108 { 00:28:17.108 "params": { 00:28:17.108 "name": "Nvme$subsystem", 00:28:17.108 "trtype": "$TEST_TRANSPORT", 00:28:17.108 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:17.108 "adrfam": "ipv4", 00:28:17.108 "trsvcid": "$NVMF_PORT", 00:28:17.108 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:17.108 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:17.108 "hdgst": ${hdgst:-false}, 00:28:17.108 "ddgst": ${ddgst:-false} 00:28:17.108 }, 00:28:17.108 "method": "bdev_nvme_attach_controller" 00:28:17.108 } 00:28:17.108 EOF 00:28:17.108 )") 00:28:17.108 09:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:17.108 09:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:17.108 09:28:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:17.108 { 00:28:17.108 "params": { 00:28:17.108 "name": "Nvme$subsystem", 00:28:17.108 "trtype": "$TEST_TRANSPORT", 00:28:17.108 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:17.108 "adrfam": "ipv4", 00:28:17.108 "trsvcid": "$NVMF_PORT", 00:28:17.108 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:17.108 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:17.108 "hdgst": ${hdgst:-false}, 00:28:17.108 "ddgst": ${ddgst:-false} 00:28:17.108 }, 00:28:17.108 "method": "bdev_nvme_attach_controller" 00:28:17.108 } 00:28:17.108 EOF 00:28:17.108 )") 00:28:17.108 09:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:17.108 09:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:17.108 09:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:17.108 { 00:28:17.108 "params": { 00:28:17.108 "name": "Nvme$subsystem", 00:28:17.108 "trtype": "$TEST_TRANSPORT", 00:28:17.108 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:17.108 "adrfam": "ipv4", 00:28:17.108 "trsvcid": "$NVMF_PORT", 00:28:17.108 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:17.108 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:17.108 "hdgst": ${hdgst:-false}, 00:28:17.108 "ddgst": ${ddgst:-false} 00:28:17.108 }, 00:28:17.108 "method": "bdev_nvme_attach_controller" 00:28:17.108 } 00:28:17.108 EOF 00:28:17.108 )") 00:28:17.108 09:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:17.108 09:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:17.108 09:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:17.108 { 00:28:17.108 "params": { 00:28:17.108 "name": "Nvme$subsystem", 00:28:17.108 "trtype": "$TEST_TRANSPORT", 00:28:17.108 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:17.108 "adrfam": "ipv4", 00:28:17.108 "trsvcid": "$NVMF_PORT", 00:28:17.108 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:17.108 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:17.108 "hdgst": ${hdgst:-false}, 00:28:17.108 "ddgst": ${ddgst:-false} 00:28:17.108 }, 00:28:17.108 "method": "bdev_nvme_attach_controller" 00:28:17.108 } 00:28:17.108 EOF 00:28:17.108 )") 00:28:17.108 09:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:17.108 09:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
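Each stanza that gen_nvmf_target_json emits (see the fully rendered config just below) is the declarative form of one bdev_nvme_attach_controller call against the second app. For the first subsystem, a roughly equivalent manual RPC would be the following sketch (flag spellings are rpc.py's; the values come from the rendered config):
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b Nvme1 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1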
00:28:17.108 09:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:28:17.108 09:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:17.108 "params": { 00:28:17.108 "name": "Nvme1", 00:28:17.108 "trtype": "tcp", 00:28:17.108 "traddr": "10.0.0.2", 00:28:17.108 "adrfam": "ipv4", 00:28:17.108 "trsvcid": "4420", 00:28:17.108 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:17.108 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:17.108 "hdgst": false, 00:28:17.108 "ddgst": false 00:28:17.108 }, 00:28:17.108 "method": "bdev_nvme_attach_controller" 00:28:17.108 },{ 00:28:17.108 "params": { 00:28:17.108 "name": "Nvme2", 00:28:17.108 "trtype": "tcp", 00:28:17.108 "traddr": "10.0.0.2", 00:28:17.108 "adrfam": "ipv4", 00:28:17.108 "trsvcid": "4420", 00:28:17.108 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:17.108 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:17.108 "hdgst": false, 00:28:17.108 "ddgst": false 00:28:17.108 }, 00:28:17.108 "method": "bdev_nvme_attach_controller" 00:28:17.108 },{ 00:28:17.108 "params": { 00:28:17.108 "name": "Nvme3", 00:28:17.108 "trtype": "tcp", 00:28:17.108 "traddr": "10.0.0.2", 00:28:17.108 "adrfam": "ipv4", 00:28:17.108 "trsvcid": "4420", 00:28:17.108 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:17.108 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:17.108 "hdgst": false, 00:28:17.108 "ddgst": false 00:28:17.108 }, 00:28:17.108 "method": "bdev_nvme_attach_controller" 00:28:17.108 },{ 00:28:17.108 "params": { 00:28:17.108 "name": "Nvme4", 00:28:17.108 "trtype": "tcp", 00:28:17.108 "traddr": "10.0.0.2", 00:28:17.109 "adrfam": "ipv4", 00:28:17.109 "trsvcid": "4420", 00:28:17.109 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:17.109 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:17.109 "hdgst": false, 00:28:17.109 "ddgst": false 00:28:17.109 }, 00:28:17.109 "method": "bdev_nvme_attach_controller" 00:28:17.109 },{ 00:28:17.109 "params": { 00:28:17.109 "name": "Nvme5", 00:28:17.109 "trtype": "tcp", 00:28:17.109 "traddr": "10.0.0.2", 00:28:17.109 "adrfam": "ipv4", 00:28:17.109 "trsvcid": "4420", 00:28:17.109 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:17.109 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:17.109 "hdgst": false, 00:28:17.109 "ddgst": false 00:28:17.109 }, 00:28:17.109 "method": "bdev_nvme_attach_controller" 00:28:17.109 },{ 00:28:17.109 "params": { 00:28:17.109 "name": "Nvme6", 00:28:17.109 "trtype": "tcp", 00:28:17.109 "traddr": "10.0.0.2", 00:28:17.109 "adrfam": "ipv4", 00:28:17.109 "trsvcid": "4420", 00:28:17.109 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:17.109 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:17.109 "hdgst": false, 00:28:17.109 "ddgst": false 00:28:17.109 }, 00:28:17.109 "method": "bdev_nvme_attach_controller" 00:28:17.109 },{ 00:28:17.109 "params": { 00:28:17.109 "name": "Nvme7", 00:28:17.109 "trtype": "tcp", 00:28:17.109 "traddr": "10.0.0.2", 00:28:17.109 "adrfam": "ipv4", 00:28:17.109 "trsvcid": "4420", 00:28:17.109 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:17.109 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:17.109 "hdgst": false, 00:28:17.109 "ddgst": false 00:28:17.109 }, 00:28:17.109 "method": "bdev_nvme_attach_controller" 00:28:17.109 },{ 00:28:17.109 "params": { 00:28:17.109 "name": "Nvme8", 00:28:17.109 "trtype": "tcp", 00:28:17.109 "traddr": "10.0.0.2", 00:28:17.109 "adrfam": "ipv4", 00:28:17.109 "trsvcid": "4420", 00:28:17.109 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:17.109 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:28:17.109 "hdgst": false, 00:28:17.109 "ddgst": false 00:28:17.109 }, 00:28:17.109 "method": "bdev_nvme_attach_controller" 00:28:17.109 },{ 00:28:17.109 "params": { 00:28:17.109 "name": "Nvme9", 00:28:17.109 "trtype": "tcp", 00:28:17.109 "traddr": "10.0.0.2", 00:28:17.109 "adrfam": "ipv4", 00:28:17.109 "trsvcid": "4420", 00:28:17.109 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:17.109 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:17.109 "hdgst": false, 00:28:17.109 "ddgst": false 00:28:17.109 }, 00:28:17.109 "method": "bdev_nvme_attach_controller" 00:28:17.109 },{ 00:28:17.109 "params": { 00:28:17.109 "name": "Nvme10", 00:28:17.109 "trtype": "tcp", 00:28:17.109 "traddr": "10.0.0.2", 00:28:17.109 "adrfam": "ipv4", 00:28:17.109 "trsvcid": "4420", 00:28:17.109 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:17.109 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:17.109 "hdgst": false, 00:28:17.109 "ddgst": false 00:28:17.109 }, 00:28:17.109 "method": "bdev_nvme_attach_controller" 00:28:17.109 }' 00:28:17.109 [2024-11-17 09:28:21.984726] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:28:17.109 [2024-11-17 09:28:21.984879] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:28:17.367 [2024-11-17 09:28:22.132417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:17.367 [2024-11-17 09:28:22.261788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:19.266 09:28:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:19.266 09:28:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:28:19.266 09:28:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:19.266 09:28:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.266 09:28:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:19.266 09:28:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.266 09:28:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 3053031 00:28:19.266 09:28:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:28:19.266 09:28:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:28:20.200 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 3053031 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:28:20.200 09:28:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 3052726 00:28:20.200 09:28:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:28:20.200 09:28:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 
1 2 3 4 5 6 7 8 9 10 00:28:20.200 09:28:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:28:20.200 09:28:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:28:20.200 09:28:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:20.200 09:28:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:20.200 { 00:28:20.200 "params": { 00:28:20.200 "name": "Nvme$subsystem", 00:28:20.200 "trtype": "$TEST_TRANSPORT", 00:28:20.200 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:20.200 "adrfam": "ipv4", 00:28:20.200 "trsvcid": "$NVMF_PORT", 00:28:20.200 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:20.200 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:20.200 "hdgst": ${hdgst:-false}, 00:28:20.200 "ddgst": ${ddgst:-false} 00:28:20.200 }, 00:28:20.200 "method": "bdev_nvme_attach_controller" 00:28:20.200 } 00:28:20.200 EOF 00:28:20.200 )") 00:28:20.200 09:28:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:20.200 09:28:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:20.200 09:28:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:20.200 { 00:28:20.200 "params": { 00:28:20.200 "name": "Nvme$subsystem", 00:28:20.200 "trtype": "$TEST_TRANSPORT", 00:28:20.200 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:20.200 "adrfam": "ipv4", 00:28:20.200 "trsvcid": "$NVMF_PORT", 00:28:20.200 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:20.200 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:20.200 "hdgst": ${hdgst:-false}, 00:28:20.200 "ddgst": ${ddgst:-false} 00:28:20.200 }, 00:28:20.200 "method": "bdev_nvme_attach_controller" 00:28:20.200 } 00:28:20.200 EOF 00:28:20.200 )") 00:28:20.200 09:28:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:20.200 09:28:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:20.200 09:28:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:20.200 { 00:28:20.200 "params": { 00:28:20.200 "name": "Nvme$subsystem", 00:28:20.200 "trtype": "$TEST_TRANSPORT", 00:28:20.200 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:20.200 "adrfam": "ipv4", 00:28:20.200 "trsvcid": "$NVMF_PORT", 00:28:20.200 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:20.200 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:20.200 "hdgst": ${hdgst:-false}, 00:28:20.200 "ddgst": ${ddgst:-false} 00:28:20.200 }, 00:28:20.200 "method": "bdev_nvme_attach_controller" 00:28:20.200 } 00:28:20.200 EOF 00:28:20.200 )") 00:28:20.200 09:28:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:20.200 09:28:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:20.200 09:28:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:20.200 { 00:28:20.200 "params": { 00:28:20.200 "name": "Nvme$subsystem", 00:28:20.200 "trtype": "$TEST_TRANSPORT", 00:28:20.200 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:20.200 "adrfam": "ipv4", 00:28:20.200 
"trsvcid": "$NVMF_PORT", 00:28:20.200 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:20.200 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:20.200 "hdgst": ${hdgst:-false}, 00:28:20.200 "ddgst": ${ddgst:-false} 00:28:20.200 }, 00:28:20.200 "method": "bdev_nvme_attach_controller" 00:28:20.200 } 00:28:20.200 EOF 00:28:20.200 )") 00:28:20.200 09:28:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:20.200 09:28:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:20.200 09:28:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:20.200 { 00:28:20.200 "params": { 00:28:20.200 "name": "Nvme$subsystem", 00:28:20.200 "trtype": "$TEST_TRANSPORT", 00:28:20.200 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:20.200 "adrfam": "ipv4", 00:28:20.200 "trsvcid": "$NVMF_PORT", 00:28:20.201 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:20.201 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:20.201 "hdgst": ${hdgst:-false}, 00:28:20.201 "ddgst": ${ddgst:-false} 00:28:20.201 }, 00:28:20.201 "method": "bdev_nvme_attach_controller" 00:28:20.201 } 00:28:20.201 EOF 00:28:20.201 )") 00:28:20.201 09:28:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:20.201 09:28:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:20.201 09:28:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:20.201 { 00:28:20.201 "params": { 00:28:20.201 "name": "Nvme$subsystem", 00:28:20.201 "trtype": "$TEST_TRANSPORT", 00:28:20.201 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:20.201 "adrfam": "ipv4", 00:28:20.201 "trsvcid": "$NVMF_PORT", 00:28:20.201 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:20.201 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:20.201 "hdgst": ${hdgst:-false}, 00:28:20.201 "ddgst": ${ddgst:-false} 00:28:20.201 }, 00:28:20.201 "method": "bdev_nvme_attach_controller" 00:28:20.201 } 00:28:20.201 EOF 00:28:20.201 )") 00:28:20.201 09:28:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:20.201 09:28:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:20.201 09:28:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:20.201 { 00:28:20.201 "params": { 00:28:20.201 "name": "Nvme$subsystem", 00:28:20.201 "trtype": "$TEST_TRANSPORT", 00:28:20.201 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:20.201 "adrfam": "ipv4", 00:28:20.201 "trsvcid": "$NVMF_PORT", 00:28:20.201 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:20.201 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:20.201 "hdgst": ${hdgst:-false}, 00:28:20.201 "ddgst": ${ddgst:-false} 00:28:20.201 }, 00:28:20.201 "method": "bdev_nvme_attach_controller" 00:28:20.201 } 00:28:20.201 EOF 00:28:20.201 )") 00:28:20.201 09:28:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:20.201 09:28:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:20.201 09:28:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:20.201 { 00:28:20.201 
"params": { 00:28:20.201 "name": "Nvme$subsystem", 00:28:20.201 "trtype": "$TEST_TRANSPORT", 00:28:20.201 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:20.201 "adrfam": "ipv4", 00:28:20.201 "trsvcid": "$NVMF_PORT", 00:28:20.201 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:20.201 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:20.201 "hdgst": ${hdgst:-false}, 00:28:20.201 "ddgst": ${ddgst:-false} 00:28:20.201 }, 00:28:20.201 "method": "bdev_nvme_attach_controller" 00:28:20.201 } 00:28:20.201 EOF 00:28:20.201 )") 00:28:20.201 09:28:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:20.201 09:28:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:20.201 09:28:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:20.201 { 00:28:20.201 "params": { 00:28:20.201 "name": "Nvme$subsystem", 00:28:20.201 "trtype": "$TEST_TRANSPORT", 00:28:20.201 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:20.201 "adrfam": "ipv4", 00:28:20.201 "trsvcid": "$NVMF_PORT", 00:28:20.201 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:20.201 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:20.201 "hdgst": ${hdgst:-false}, 00:28:20.201 "ddgst": ${ddgst:-false} 00:28:20.201 }, 00:28:20.201 "method": "bdev_nvme_attach_controller" 00:28:20.201 } 00:28:20.201 EOF 00:28:20.201 )") 00:28:20.201 09:28:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:20.201 09:28:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:20.201 09:28:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:20.201 { 00:28:20.201 "params": { 00:28:20.201 "name": "Nvme$subsystem", 00:28:20.201 "trtype": "$TEST_TRANSPORT", 00:28:20.201 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:20.201 "adrfam": "ipv4", 00:28:20.201 "trsvcid": "$NVMF_PORT", 00:28:20.201 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:20.201 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:20.201 "hdgst": ${hdgst:-false}, 00:28:20.201 "ddgst": ${ddgst:-false} 00:28:20.201 }, 00:28:20.201 "method": "bdev_nvme_attach_controller" 00:28:20.201 } 00:28:20.201 EOF 00:28:20.201 )") 00:28:20.201 09:28:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:20.201 09:28:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:28:20.201 09:28:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:28:20.201 09:28:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:20.201 "params": { 00:28:20.201 "name": "Nvme1", 00:28:20.201 "trtype": "tcp", 00:28:20.201 "traddr": "10.0.0.2", 00:28:20.201 "adrfam": "ipv4", 00:28:20.201 "trsvcid": "4420", 00:28:20.201 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:20.201 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:20.201 "hdgst": false, 00:28:20.201 "ddgst": false 00:28:20.201 }, 00:28:20.201 "method": "bdev_nvme_attach_controller" 00:28:20.201 },{ 00:28:20.201 "params": { 00:28:20.201 "name": "Nvme2", 00:28:20.201 "trtype": "tcp", 00:28:20.201 "traddr": "10.0.0.2", 00:28:20.201 "adrfam": "ipv4", 00:28:20.201 "trsvcid": "4420", 00:28:20.201 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:20.201 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:20.201 "hdgst": false, 00:28:20.201 "ddgst": false 00:28:20.201 }, 00:28:20.201 "method": "bdev_nvme_attach_controller" 00:28:20.201 },{ 00:28:20.201 "params": { 00:28:20.201 "name": "Nvme3", 00:28:20.201 "trtype": "tcp", 00:28:20.201 "traddr": "10.0.0.2", 00:28:20.201 "adrfam": "ipv4", 00:28:20.201 "trsvcid": "4420", 00:28:20.201 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:20.201 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:20.201 "hdgst": false, 00:28:20.201 "ddgst": false 00:28:20.201 }, 00:28:20.201 "method": "bdev_nvme_attach_controller" 00:28:20.201 },{ 00:28:20.201 "params": { 00:28:20.201 "name": "Nvme4", 00:28:20.201 "trtype": "tcp", 00:28:20.201 "traddr": "10.0.0.2", 00:28:20.201 "adrfam": "ipv4", 00:28:20.201 "trsvcid": "4420", 00:28:20.201 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:20.201 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:20.201 "hdgst": false, 00:28:20.201 "ddgst": false 00:28:20.201 }, 00:28:20.201 "method": "bdev_nvme_attach_controller" 00:28:20.201 },{ 00:28:20.201 "params": { 00:28:20.201 "name": "Nvme5", 00:28:20.201 "trtype": "tcp", 00:28:20.201 "traddr": "10.0.0.2", 00:28:20.201 "adrfam": "ipv4", 00:28:20.201 "trsvcid": "4420", 00:28:20.201 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:20.201 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:20.201 "hdgst": false, 00:28:20.201 "ddgst": false 00:28:20.201 }, 00:28:20.201 "method": "bdev_nvme_attach_controller" 00:28:20.201 },{ 00:28:20.201 "params": { 00:28:20.201 "name": "Nvme6", 00:28:20.201 "trtype": "tcp", 00:28:20.201 "traddr": "10.0.0.2", 00:28:20.201 "adrfam": "ipv4", 00:28:20.201 "trsvcid": "4420", 00:28:20.201 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:20.201 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:20.201 "hdgst": false, 00:28:20.201 "ddgst": false 00:28:20.201 }, 00:28:20.201 "method": "bdev_nvme_attach_controller" 00:28:20.201 },{ 00:28:20.201 "params": { 00:28:20.201 "name": "Nvme7", 00:28:20.201 "trtype": "tcp", 00:28:20.201 "traddr": "10.0.0.2", 00:28:20.201 "adrfam": "ipv4", 00:28:20.201 "trsvcid": "4420", 00:28:20.201 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:20.201 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:20.201 "hdgst": false, 00:28:20.201 "ddgst": false 00:28:20.201 }, 00:28:20.201 "method": "bdev_nvme_attach_controller" 00:28:20.201 },{ 00:28:20.201 "params": { 00:28:20.201 "name": "Nvme8", 00:28:20.201 "trtype": "tcp", 00:28:20.201 "traddr": "10.0.0.2", 00:28:20.201 "adrfam": "ipv4", 00:28:20.201 "trsvcid": "4420", 00:28:20.201 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:20.201 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:28:20.201 "hdgst": false, 00:28:20.201 "ddgst": false 00:28:20.201 }, 00:28:20.201 "method": "bdev_nvme_attach_controller" 00:28:20.201 },{ 00:28:20.201 "params": { 00:28:20.201 "name": "Nvme9", 00:28:20.201 "trtype": "tcp", 00:28:20.201 "traddr": "10.0.0.2", 00:28:20.201 "adrfam": "ipv4", 00:28:20.201 "trsvcid": "4420", 00:28:20.201 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:20.201 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:20.201 "hdgst": false, 00:28:20.201 "ddgst": false 00:28:20.201 }, 00:28:20.201 "method": "bdev_nvme_attach_controller" 00:28:20.202 },{ 00:28:20.202 "params": { 00:28:20.202 "name": "Nvme10", 00:28:20.202 "trtype": "tcp", 00:28:20.202 "traddr": "10.0.0.2", 00:28:20.202 "adrfam": "ipv4", 00:28:20.202 "trsvcid": "4420", 00:28:20.202 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:20.202 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:20.202 "hdgst": false, 00:28:20.202 "ddgst": false 00:28:20.202 }, 00:28:20.202 "method": "bdev_nvme_attach_controller" 00:28:20.202 }' 00:28:20.202 [2024-11-17 09:28:25.051750] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:28:20.202 [2024-11-17 09:28:25.051909] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3053334 ] 00:28:20.202 [2024-11-17 09:28:25.198860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:20.460 [2024-11-17 09:28:25.329566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:22.357 Running I/O for 1 seconds... 00:28:23.550 1472.00 IOPS, 92.00 MiB/s 00:28:23.550 Latency(us) 00:28:23.550 [2024-11-17T08:28:28.563Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:23.550 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:23.550 Verification LBA range: start 0x0 length 0x400 00:28:23.550 Nvme1n1 : 1.20 213.04 13.32 0.00 0.00 294684.44 38253.61 271853.04 00:28:23.550 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:23.550 Verification LBA range: start 0x0 length 0x400 00:28:23.550 Nvme2n1 : 1.10 174.10 10.88 0.00 0.00 355303.22 39418.69 265639.25 00:28:23.550 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:23.550 Verification LBA range: start 0x0 length 0x400 00:28:23.550 Nvme3n1 : 1.21 211.83 13.24 0.00 0.00 287249.26 21456.97 323116.75 00:28:23.550 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:23.550 Verification LBA range: start 0x0 length 0x400 00:28:23.550 Nvme4n1 : 1.22 209.80 13.11 0.00 0.00 286984.53 20194.80 320009.86 00:28:23.550 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:23.550 Verification LBA range: start 0x0 length 0x400 00:28:23.550 Nvme5n1 : 1.16 164.91 10.31 0.00 0.00 357559.56 23884.23 323116.75 00:28:23.550 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:23.550 Verification LBA range: start 0x0 length 0x400 00:28:23.550 Nvme6n1 : 1.21 214.79 13.42 0.00 0.00 267341.70 13301.38 273406.48 00:28:23.550 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:23.550 Verification LBA range: start 0x0 length 0x400 00:28:23.550 Nvme7n1 : 1.23 207.52 12.97 0.00 0.00 275519.34 20680.25 282727.16 00:28:23.550 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:23.550 
Verification LBA range: start 0x0 length 0x400 00:28:23.550 Nvme8n1 : 1.15 171.14 10.70 0.00 0.00 322478.46 3495.25 299815.06 00:28:23.550 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:23.550 Verification LBA range: start 0x0 length 0x400 00:28:23.550 Nvme9n1 : 1.23 208.72 13.05 0.00 0.00 263874.56 24175.50 309135.74 00:28:23.550 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:23.550 Verification LBA range: start 0x0 length 0x400 00:28:23.550 Nvme10n1 : 1.24 206.45 12.90 0.00 0.00 262300.63 20486.07 335544.32 00:28:23.550 [2024-11-17T08:28:28.563Z] =================================================================================================================== 00:28:23.550 [2024-11-17T08:28:28.563Z] Total : 1982.30 123.89 0.00 0.00 293472.28 3495.25 335544.32 00:28:24.483 09:28:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:28:24.483 09:28:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:28:24.483 09:28:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:24.483 09:28:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:24.483 09:28:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:28:24.483 09:28:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:24.483 09:28:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:28:24.483 09:28:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:24.483 09:28:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:28:24.483 09:28:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:24.483 09:28:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:24.483 rmmod nvme_tcp 00:28:24.483 rmmod nvme_fabrics 00:28:24.483 rmmod nvme_keyring 00:28:24.483 09:28:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:24.483 09:28:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:28:24.483 09:28:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:28:24.483 09:28:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 3052726 ']' 00:28:24.483 09:28:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 3052726 00:28:24.483 09:28:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 3052726 ']' 00:28:24.483 09:28:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 3052726 00:28:24.483 09:28:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:28:24.483 09:28:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:24.484 09:28:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3052726 00:28:24.484 09:28:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:24.484 09:28:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:24.484 09:28:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3052726' 00:28:24.484 killing process with pid 3052726 00:28:24.484 09:28:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 3052726 00:28:24.484 09:28:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 3052726 00:28:27.763 09:28:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:27.763 09:28:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:27.763 09:28:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:27.763 09:28:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:28:27.763 09:28:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:28:27.763 09:28:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:27.763 09:28:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:28:27.763 09:28:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:27.763 09:28:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:27.763 09:28:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:27.763 09:28:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:27.763 09:28:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:29.664 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:29.664 00:28:29.664 real 0m16.637s 00:28:29.664 user 0m53.006s 00:28:29.664 sys 0m3.821s 00:28:29.664 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:29.664 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:29.664 ************************************ 00:28:29.664 END TEST nvmf_shutdown_tc1 00:28:29.664 ************************************ 00:28:29.664 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:28:29.664 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:29.664 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 
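The tc1 teardown traced above boils down to a repeatable pattern: unload the NVMe-over-TCP kernel modules, restore iptables while dropping only the rules tagged with an SPDK_NVMF comment, delete the target network namespace, flush the leftover initiator address, and kill the target process only after confirming the PID still exists and is not a bare sudo. A condensed sketch of that sequence follows; the helper names are illustrative stand-ins for the suite's own nvmftestfini/killprocess, and the namespace and interface arguments are the ones visible in this log.

# Sketch of the cleanup traced above (helper names are illustrative stand-ins).
cleanup_tcp_target() {
    local ns=$1 initiator_if=$2
    modprobe -v -r nvme-tcp || true                        # also pulls out nvme_fabrics / nvme_keyring
    modprobe -v -r nvme-fabrics || true
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # keep everything except SPDK's own rules
    ip netns del "$ns" 2>/dev/null || true                 # drop the target namespace
    ip -4 addr flush "$initiator_if"                       # clear the initiator-side interface
}

stop_pid() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 0                        # nothing left to kill
    [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1   # refuse to kill a bare sudo
    kill "$pid" && wait "$pid" 2>/dev/null || true
}

# e.g. cleanup_tcp_target cvl_0_0_ns_spdk cvl_0_1; stop_pid 3052726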
00:28:29.664 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:29.664 ************************************ 00:28:29.664 START TEST nvmf_shutdown_tc2 00:28:29.664 ************************************ 00:28:29.664 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:28:29.664 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:28:29.664 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:29.664 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:29.664 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:29.664 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:29.664 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:29.664 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:29.665 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:29.665 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:29.665 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:29.665 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:29.665 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:29.665 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:29.665 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:29.665 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:29.665 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:29.665 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:29.665 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:29.665 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:29.665 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:29.665 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:29.665 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:28:29.665 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:29.665 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:28:29.665 09:28:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:28:29.665 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:28:29.665 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:28:29.665 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:28:29.665 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:29.665 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:29.665 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:29.665 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:29.665 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:29.665 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:29.665 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:29.665 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:29.665 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:29.665 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:29.665 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:29.665 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:29.665 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:29.665 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:29.665 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:29.665 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:29.665 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:29.665 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:29.665 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:29.665 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:29.665 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:29.665 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:29.665 09:28:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:29.665 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:29.665 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:29.665 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:29.665 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:29.665 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:29.665 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:29.665 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:29.665 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:29.665 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:29.665 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:29.665 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:29.665 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:29.665 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:29.665 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:29.665 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:29.665 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:29.665 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:29.665 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:29.665 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:29.665 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:29.665 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:29.665 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:29.665 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:29.665 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:29.665 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:29.665 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:29.665 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:29.665 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:29.665 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:29.665 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:29.665 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:29.665 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:29.665 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:29.665 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:29.665 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:29.665 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:29.665 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:28:29.665 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:29.665 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:29.665 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:29.665 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:29.666 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:29.666 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:29.666 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:29.666 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:29.666 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:29.666 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:29.666 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:29.666 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:29.666 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:29.666 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:29.666 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:29.666 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:29.666 09:28:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:29.666 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:29.666 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:29.666 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:29.666 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:29.666 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:29.666 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:29.666 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:29.666 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:29.666 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:29.666 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:29.666 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.273 ms 00:28:29.666 00:28:29.666 --- 10.0.0.2 ping statistics --- 00:28:29.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:29.666 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:28:29.666 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:29.666 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:29.666 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:28:29.666 00:28:29.666 --- 10.0.0.1 ping statistics --- 00:28:29.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:29.666 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:28:29.666 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:29.666 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:28:29.666 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:29.666 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:29.666 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:29.666 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:29.666 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:29.666 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:29.666 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:29.666 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:29.666 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:29.666 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:29.666 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:29.666 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3054614 00:28:29.666 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3054614 00:28:29.666 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:29.666 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3054614 ']' 00:28:29.666 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:29.666 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:29.666 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:29.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
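The trace just above shows the target being started for tc2: nvmf_tgt is launched inside the cvl_0_0_ns_spdk namespace with core mask 0x1E (cores 1 through 4, matching the four "Reactor started" notices that follow) and all tracepoint groups enabled, and the test then blocks until the RPC socket answers. A minimal sketch of that launch is below; the polling loop stands in for the suite's waitforlisten helper and is an assumption, not its actual implementation.

# Launch nvmf_tgt inside the target namespace and wait for its RPC socket (sketch).
NS=cvl_0_0_ns_spdk
NVMF_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
RPC_SOCK=/var/tmp/spdk.sock

ip netns exec "$NS" "$NVMF_TGT" -i 0 -e 0xFFFF -m 0x1E &   # -m 0x1E => reactors on cores 1-4
nvmfpid=$!

# Stand-in for waitforlisten: poll until the UNIX-domain RPC socket shows up.
for _ in $(seq 1 100); do
    [ -S "$RPC_SOCK" ] && break
    sleep 0.1
done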
00:28:29.666 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:29.666 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:29.666 [2024-11-17 09:28:34.556510] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:28:29.666 [2024-11-17 09:28:34.556652] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:29.924 [2024-11-17 09:28:34.719758] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:29.924 [2024-11-17 09:28:34.859552] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:29.924 [2024-11-17 09:28:34.859642] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:29.924 [2024-11-17 09:28:34.859668] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:29.924 [2024-11-17 09:28:34.859693] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:29.924 [2024-11-17 09:28:34.859713] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:29.924 [2024-11-17 09:28:34.862857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:29.924 [2024-11-17 09:28:34.862972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:29.924 [2024-11-17 09:28:34.863016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:29.924 [2024-11-17 09:28:34.863023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:30.858 09:28:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:30.858 09:28:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:28:30.858 09:28:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:30.858 09:28:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:30.858 09:28:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:30.858 09:28:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:30.858 09:28:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:30.858 09:28:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.858 09:28:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:30.858 [2024-11-17 09:28:35.583483] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:30.858 09:28:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.858 09:28:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:30.858 09:28:35 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:30.858 09:28:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:30.859 09:28:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:30.859 09:28:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:30.859 09:28:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:30.859 09:28:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:30.859 09:28:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:30.859 09:28:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:30.859 09:28:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:30.859 09:28:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:30.859 09:28:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:30.859 09:28:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:30.859 09:28:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:30.859 09:28:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:30.859 09:28:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:30.859 09:28:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:30.859 09:28:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:30.859 09:28:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:30.859 09:28:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:30.859 09:28:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:30.859 09:28:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:30.859 09:28:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:30.859 09:28:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:30.859 09:28:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:30.859 09:28:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:30.859 09:28:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.859 09:28:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:30.859 Malloc1 
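create_subsystems, traced above, loops over subsystems 1 through 10 and appends one RPC batch per iteration to rpcs.txt, which is then replayed against the running target in a single rpc_cmd call; each MallocN line in the output is the bdev created for cnodeN. The batch itself is not expanded in this trace, so the fragment below is only a representative reconstruction: the RPC names are standard SPDK RPCs and the listen address matches this log, but the malloc sizing and serial numbers are assumptions.

# Representative per-subsystem batch written to rpcs.txt (sizes/serials illustrative).
for i in {1..10}; do
    cat <<EOF
bdev_malloc_create -b Malloc$i 128 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done > rpcs.txt
# then replayed in one shot against the running target, roughly: rpc_cmd < rpcs.txt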
00:28:30.859 [2024-11-17 09:28:35.733574] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:30.859 Malloc2 00:28:31.116 Malloc3 00:28:31.116 Malloc4 00:28:31.116 Malloc5 00:28:31.374 Malloc6 00:28:31.374 Malloc7 00:28:31.632 Malloc8 00:28:31.632 Malloc9 00:28:31.890 Malloc10 00:28:31.891 09:28:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.891 09:28:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:31.891 09:28:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:31.891 09:28:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:31.891 09:28:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=3054925 00:28:31.891 09:28:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 3054925 /var/tmp/bdevperf.sock 00:28:31.891 09:28:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3054925 ']' 00:28:31.891 09:28:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:31.891 09:28:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:31.891 09:28:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:31.891 09:28:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:31.891 09:28:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:28:31.891 09:28:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:31.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
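The perf client for tc2 is started the same way as in tc1: bdevperf receives its bdev configuration through process substitution rather than a file on disk, which is why the command line records --json /dev/fd/63. gen_nvmf_target_json (traced below, and assumed to be sourced from the suite's nvmf/common.sh) emits one bdev_nvme_attach_controller block per subsystem number passed to it. Reconstructed from this trace, the invocation looks roughly like the following, using only the flag values shown in the log.

BDEVPERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf

# -q 64: queue depth, -o 65536: 64 KiB I/O size, -w verify: verify workload, -t 10: run for 10 s
"$BDEVPERF" -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json {1..10}) \
    -q 64 -o 65536 -w verify -t 10

Keeping the config on an anonymous fd avoids leaving a stale file behind, which is also why stoptarget still removes test/nvmf/target/bdevperf.conf defensively.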
00:28:31.891 09:28:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:28:31.891 09:28:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:31.891 09:28:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:31.891 09:28:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:31.891 09:28:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:31.891 { 00:28:31.891 "params": { 00:28:31.891 "name": "Nvme$subsystem", 00:28:31.891 "trtype": "$TEST_TRANSPORT", 00:28:31.891 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:31.891 "adrfam": "ipv4", 00:28:31.891 "trsvcid": "$NVMF_PORT", 00:28:31.891 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:31.891 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:31.891 "hdgst": ${hdgst:-false}, 00:28:31.891 "ddgst": ${ddgst:-false} 00:28:31.891 }, 00:28:31.891 "method": "bdev_nvme_attach_controller" 00:28:31.891 } 00:28:31.891 EOF 00:28:31.891 )") 00:28:31.891 09:28:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:31.891 09:28:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:31.891 09:28:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:31.891 { 00:28:31.891 "params": { 00:28:31.891 "name": "Nvme$subsystem", 00:28:31.891 "trtype": "$TEST_TRANSPORT", 00:28:31.891 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:31.891 "adrfam": "ipv4", 00:28:31.891 "trsvcid": "$NVMF_PORT", 00:28:31.891 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:31.891 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:31.891 "hdgst": ${hdgst:-false}, 00:28:31.891 "ddgst": ${ddgst:-false} 00:28:31.891 }, 00:28:31.891 "method": "bdev_nvme_attach_controller" 00:28:31.891 } 00:28:31.891 EOF 00:28:31.891 )") 00:28:31.891 09:28:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:31.891 09:28:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:31.891 09:28:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:31.891 { 00:28:31.891 "params": { 00:28:31.891 "name": "Nvme$subsystem", 00:28:31.891 "trtype": "$TEST_TRANSPORT", 00:28:31.891 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:31.891 "adrfam": "ipv4", 00:28:31.891 "trsvcid": "$NVMF_PORT", 00:28:31.891 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:31.891 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:31.891 "hdgst": ${hdgst:-false}, 00:28:31.891 "ddgst": ${ddgst:-false} 00:28:31.891 }, 00:28:31.891 "method": "bdev_nvme_attach_controller" 00:28:31.891 } 00:28:31.891 EOF 00:28:31.891 )") 00:28:31.891 09:28:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:31.891 09:28:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:31.891 09:28:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:31.891 { 00:28:31.891 "params": { 00:28:31.891 "name": "Nvme$subsystem", 00:28:31.891 
"trtype": "$TEST_TRANSPORT", 00:28:31.891 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:31.891 "adrfam": "ipv4", 00:28:31.891 "trsvcid": "$NVMF_PORT", 00:28:31.891 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:31.891 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:31.891 "hdgst": ${hdgst:-false}, 00:28:31.891 "ddgst": ${ddgst:-false} 00:28:31.891 }, 00:28:31.891 "method": "bdev_nvme_attach_controller" 00:28:31.891 } 00:28:31.891 EOF 00:28:31.891 )") 00:28:31.891 09:28:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:31.891 09:28:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:31.891 09:28:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:31.891 { 00:28:31.891 "params": { 00:28:31.891 "name": "Nvme$subsystem", 00:28:31.891 "trtype": "$TEST_TRANSPORT", 00:28:31.891 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:31.891 "adrfam": "ipv4", 00:28:31.891 "trsvcid": "$NVMF_PORT", 00:28:31.891 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:31.891 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:31.891 "hdgst": ${hdgst:-false}, 00:28:31.891 "ddgst": ${ddgst:-false} 00:28:31.891 }, 00:28:31.891 "method": "bdev_nvme_attach_controller" 00:28:31.891 } 00:28:31.891 EOF 00:28:31.891 )") 00:28:31.891 09:28:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:31.891 09:28:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:31.891 09:28:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:31.891 { 00:28:31.891 "params": { 00:28:31.891 "name": "Nvme$subsystem", 00:28:31.891 "trtype": "$TEST_TRANSPORT", 00:28:31.891 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:31.891 "adrfam": "ipv4", 00:28:31.891 "trsvcid": "$NVMF_PORT", 00:28:31.891 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:31.891 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:31.891 "hdgst": ${hdgst:-false}, 00:28:31.891 "ddgst": ${ddgst:-false} 00:28:31.891 }, 00:28:31.891 "method": "bdev_nvme_attach_controller" 00:28:31.891 } 00:28:31.891 EOF 00:28:31.891 )") 00:28:31.891 09:28:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:31.891 09:28:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:31.891 09:28:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:31.891 { 00:28:31.891 "params": { 00:28:31.891 "name": "Nvme$subsystem", 00:28:31.891 "trtype": "$TEST_TRANSPORT", 00:28:31.891 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:31.891 "adrfam": "ipv4", 00:28:31.891 "trsvcid": "$NVMF_PORT", 00:28:31.891 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:31.891 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:31.891 "hdgst": ${hdgst:-false}, 00:28:31.891 "ddgst": ${ddgst:-false} 00:28:31.891 }, 00:28:31.891 "method": "bdev_nvme_attach_controller" 00:28:31.891 } 00:28:31.891 EOF 00:28:31.891 )") 00:28:31.891 09:28:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:31.891 09:28:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:31.891 09:28:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:31.891 { 00:28:31.891 "params": { 00:28:31.891 "name": "Nvme$subsystem", 00:28:31.891 "trtype": "$TEST_TRANSPORT", 00:28:31.891 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:31.891 "adrfam": "ipv4", 00:28:31.891 "trsvcid": "$NVMF_PORT", 00:28:31.891 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:31.891 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:31.891 "hdgst": ${hdgst:-false}, 00:28:31.891 "ddgst": ${ddgst:-false} 00:28:31.891 }, 00:28:31.891 "method": "bdev_nvme_attach_controller" 00:28:31.891 } 00:28:31.891 EOF 00:28:31.891 )") 00:28:31.891 09:28:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:31.891 09:28:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:31.891 09:28:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:31.891 { 00:28:31.891 "params": { 00:28:31.891 "name": "Nvme$subsystem", 00:28:31.891 "trtype": "$TEST_TRANSPORT", 00:28:31.891 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:31.891 "adrfam": "ipv4", 00:28:31.891 "trsvcid": "$NVMF_PORT", 00:28:31.891 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:31.891 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:31.892 "hdgst": ${hdgst:-false}, 00:28:31.892 "ddgst": ${ddgst:-false} 00:28:31.892 }, 00:28:31.892 "method": "bdev_nvme_attach_controller" 00:28:31.892 } 00:28:31.892 EOF 00:28:31.892 )") 00:28:31.892 09:28:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:31.892 09:28:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:31.892 09:28:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:31.892 { 00:28:31.892 "params": { 00:28:31.892 "name": "Nvme$subsystem", 00:28:31.892 "trtype": "$TEST_TRANSPORT", 00:28:31.892 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:31.892 "adrfam": "ipv4", 00:28:31.892 "trsvcid": "$NVMF_PORT", 00:28:31.892 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:31.892 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:31.892 "hdgst": ${hdgst:-false}, 00:28:31.892 "ddgst": ${ddgst:-false} 00:28:31.892 }, 00:28:31.892 "method": "bdev_nvme_attach_controller" 00:28:31.892 } 00:28:31.892 EOF 00:28:31.892 )") 00:28:31.892 09:28:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:31.892 09:28:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 
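The jq . at the end of the trace above, together with the IFS=, and printf that follow, is the assembly step: the per-subsystem fragments accumulated in the config array are joined with commas, wrapped into a bdev-subsystem JSON document, and validated by jq before bdevperf reads it; the joined result is what appears rendered in the next lines of the trace. A rough sketch of that join is below; the outer wrapper keys are an assumption, since only the joined fragments themselves are visible in this log.

# Join the collected fragments and validate the result (sketch; wrapper keys assumed).
(
    IFS=','    # makes "${config[*]}" comma-join the array elements
    printf '{"subsystems":[{"subsystem":"bdev","config":[%s]}]}\n' "${config[*]}" | jq .
)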
00:28:31.892 09:28:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:28:31.892 09:28:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:31.892 "params": { 00:28:31.892 "name": "Nvme1", 00:28:31.892 "trtype": "tcp", 00:28:31.892 "traddr": "10.0.0.2", 00:28:31.892 "adrfam": "ipv4", 00:28:31.892 "trsvcid": "4420", 00:28:31.892 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:31.892 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:31.892 "hdgst": false, 00:28:31.892 "ddgst": false 00:28:31.892 }, 00:28:31.892 "method": "bdev_nvme_attach_controller" 00:28:31.892 },{ 00:28:31.892 "params": { 00:28:31.892 "name": "Nvme2", 00:28:31.892 "trtype": "tcp", 00:28:31.892 "traddr": "10.0.0.2", 00:28:31.892 "adrfam": "ipv4", 00:28:31.892 "trsvcid": "4420", 00:28:31.892 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:31.892 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:31.892 "hdgst": false, 00:28:31.892 "ddgst": false 00:28:31.892 }, 00:28:31.892 "method": "bdev_nvme_attach_controller" 00:28:31.892 },{ 00:28:31.892 "params": { 00:28:31.892 "name": "Nvme3", 00:28:31.892 "trtype": "tcp", 00:28:31.892 "traddr": "10.0.0.2", 00:28:31.892 "adrfam": "ipv4", 00:28:31.892 "trsvcid": "4420", 00:28:31.892 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:31.892 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:31.892 "hdgst": false, 00:28:31.892 "ddgst": false 00:28:31.892 }, 00:28:31.892 "method": "bdev_nvme_attach_controller" 00:28:31.892 },{ 00:28:31.892 "params": { 00:28:31.892 "name": "Nvme4", 00:28:31.892 "trtype": "tcp", 00:28:31.892 "traddr": "10.0.0.2", 00:28:31.892 "adrfam": "ipv4", 00:28:31.892 "trsvcid": "4420", 00:28:31.892 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:31.892 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:31.892 "hdgst": false, 00:28:31.892 "ddgst": false 00:28:31.892 }, 00:28:31.892 "method": "bdev_nvme_attach_controller" 00:28:31.892 },{ 00:28:31.892 "params": { 00:28:31.892 "name": "Nvme5", 00:28:31.892 "trtype": "tcp", 00:28:31.892 "traddr": "10.0.0.2", 00:28:31.892 "adrfam": "ipv4", 00:28:31.892 "trsvcid": "4420", 00:28:31.892 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:31.892 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:31.892 "hdgst": false, 00:28:31.892 "ddgst": false 00:28:31.892 }, 00:28:31.892 "method": "bdev_nvme_attach_controller" 00:28:31.892 },{ 00:28:31.892 "params": { 00:28:31.892 "name": "Nvme6", 00:28:31.892 "trtype": "tcp", 00:28:31.892 "traddr": "10.0.0.2", 00:28:31.892 "adrfam": "ipv4", 00:28:31.892 "trsvcid": "4420", 00:28:31.892 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:31.892 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:31.892 "hdgst": false, 00:28:31.892 "ddgst": false 00:28:31.892 }, 00:28:31.892 "method": "bdev_nvme_attach_controller" 00:28:31.892 },{ 00:28:31.892 "params": { 00:28:31.892 "name": "Nvme7", 00:28:31.892 "trtype": "tcp", 00:28:31.892 "traddr": "10.0.0.2", 00:28:31.892 "adrfam": "ipv4", 00:28:31.892 "trsvcid": "4420", 00:28:31.892 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:31.892 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:31.892 "hdgst": false, 00:28:31.892 "ddgst": false 00:28:31.892 }, 00:28:31.892 "method": "bdev_nvme_attach_controller" 00:28:31.892 },{ 00:28:31.892 "params": { 00:28:31.892 "name": "Nvme8", 00:28:31.892 "trtype": "tcp", 00:28:31.892 "traddr": "10.0.0.2", 00:28:31.892 "adrfam": "ipv4", 00:28:31.892 "trsvcid": "4420", 00:28:31.892 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:31.892 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:28:31.892 "hdgst": false, 00:28:31.892 "ddgst": false 00:28:31.892 }, 00:28:31.892 "method": "bdev_nvme_attach_controller" 00:28:31.892 },{ 00:28:31.892 "params": { 00:28:31.892 "name": "Nvme9", 00:28:31.892 "trtype": "tcp", 00:28:31.892 "traddr": "10.0.0.2", 00:28:31.892 "adrfam": "ipv4", 00:28:31.892 "trsvcid": "4420", 00:28:31.892 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:31.892 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:31.892 "hdgst": false, 00:28:31.892 "ddgst": false 00:28:31.892 }, 00:28:31.892 "method": "bdev_nvme_attach_controller" 00:28:31.892 },{ 00:28:31.892 "params": { 00:28:31.892 "name": "Nvme10", 00:28:31.892 "trtype": "tcp", 00:28:31.892 "traddr": "10.0.0.2", 00:28:31.892 "adrfam": "ipv4", 00:28:31.892 "trsvcid": "4420", 00:28:31.892 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:31.892 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:31.892 "hdgst": false, 00:28:31.892 "ddgst": false 00:28:31.892 }, 00:28:31.892 "method": "bdev_nvme_attach_controller" 00:28:31.892 }' 00:28:31.892 [2024-11-17 09:28:36.783246] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:28:31.892 [2024-11-17 09:28:36.783397] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3054925 ] 00:28:32.150 [2024-11-17 09:28:36.922507] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:32.150 [2024-11-17 09:28:37.051468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:34.049 Running I/O for 10 seconds... 00:28:34.619 09:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:34.619 09:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:28:34.619 09:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:34.619 09:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.619 09:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:34.619 09:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.619 09:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:34.619 09:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:34.619 09:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:28:34.619 09:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:28:34.619 09:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:28:34.619 09:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:28:34.619 09:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:34.619 09:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:34.619 09:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.619 09:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:34.619 09:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:34.877 09:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.877 09:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:28:34.877 09:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:28:34.877 09:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:28:35.135 09:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:28:35.135 09:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:35.135 09:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:35.136 09:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:35.136 09:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.136 09:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:35.136 09:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.136 09:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=135 00:28:35.136 09:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 135 -ge 100 ']' 00:28:35.136 09:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:28:35.136 09:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:28:35.136 09:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:28:35.136 09:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 3054925 00:28:35.136 09:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 3054925 ']' 00:28:35.136 09:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 3054925 00:28:35.136 09:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:28:35.136 09:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:35.136 09:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3054925 00:28:35.136 09:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:35.136 09:28:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:35.136 09:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3054925' 00:28:35.136 killing process with pid 3054925 00:28:35.136 09:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 3054925 00:28:35.136 09:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 3054925 00:28:35.136 1678.00 IOPS, 104.88 MiB/s [2024-11-17T08:28:40.149Z] Received shutdown signal, test time was about 1.056795 seconds 00:28:35.136 00:28:35.136 Latency(us) 00:28:35.136 [2024-11-17T08:28:40.149Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:35.136 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:35.136 Verification LBA range: start 0x0 length 0x400 00:28:35.136 Nvme1n1 : 1.00 195.41 12.21 0.00 0.00 320533.95 2730.67 299815.06 00:28:35.136 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:35.136 Verification LBA range: start 0x0 length 0x400 00:28:35.136 Nvme2n1 : 0.98 195.61 12.23 0.00 0.00 316327.76 21845.33 302921.96 00:28:35.136 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:35.136 Verification LBA range: start 0x0 length 0x400 00:28:35.136 Nvme3n1 : 1.05 243.30 15.21 0.00 0.00 249152.09 21359.88 301368.51 00:28:35.136 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:35.136 Verification LBA range: start 0x0 length 0x400 00:28:35.136 Nvme4n1 : 0.98 201.46 12.59 0.00 0.00 291562.44 3859.34 273406.48 00:28:35.136 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:35.136 Verification LBA range: start 0x0 length 0x400 00:28:35.136 Nvme5n1 : 1.04 184.22 11.51 0.00 0.00 316802.34 23592.96 323116.75 00:28:35.136 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:35.136 Verification LBA range: start 0x0 length 0x400 00:28:35.136 Nvme6n1 : 1.05 183.17 11.45 0.00 0.00 312323.54 26020.22 351078.78 00:28:35.136 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:35.136 Verification LBA range: start 0x0 length 0x400 00:28:35.136 Nvme7n1 : 1.02 192.41 12.03 0.00 0.00 272241.15 15922.82 301368.51 00:28:35.136 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:35.136 Verification LBA range: start 0x0 length 0x400 00:28:35.136 Nvme8n1 : 1.03 186.93 11.68 0.00 0.00 292156.81 19612.25 281173.71 00:28:35.136 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:35.136 Verification LBA range: start 0x0 length 0x400 00:28:35.136 Nvme9n1 : 1.00 191.94 12.00 0.00 0.00 276847.12 24855.13 298261.62 00:28:35.136 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:35.136 Verification LBA range: start 0x0 length 0x400 00:28:35.136 Nvme10n1 : 1.03 185.55 11.60 0.00 0.00 281683.94 24369.68 321563.31 00:28:35.136 [2024-11-17T08:28:40.149Z] =================================================================================================================== 00:28:35.136 [2024-11-17T08:28:40.149Z] Total : 1960.00 122.50 0.00 0.00 291559.59 2730.67 351078.78 00:28:36.069 09:28:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:28:37.440 09:28:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 3054614 00:28:37.440 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:28:37.440 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:28:37.440 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:37.440 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:37.440 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:28:37.440 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:37.440 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:28:37.440 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:37.440 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:28:37.440 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:37.440 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:37.440 rmmod nvme_tcp 00:28:37.440 rmmod nvme_fabrics 00:28:37.440 rmmod nvme_keyring 00:28:37.440 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:37.440 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:28:37.440 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:28:37.440 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 3054614 ']' 00:28:37.440 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 3054614 00:28:37.440 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 3054614 ']' 00:28:37.440 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 3054614 00:28:37.440 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:28:37.440 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:37.440 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3054614 00:28:37.440 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:37.440 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:37.440 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3054614' 00:28:37.440 killing process with pid 3054614 00:28:37.440 09:28:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 3054614 00:28:37.440 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 3054614 00:28:39.969 09:28:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:39.969 09:28:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:39.969 09:28:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:39.969 09:28:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:28:39.969 09:28:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:28:39.969 09:28:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:39.969 09:28:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:28:39.969 09:28:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:39.969 09:28:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:39.969 09:28:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:39.969 09:28:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:39.969 09:28:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:42.502 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:42.502 00:28:42.502 real 0m12.630s 00:28:42.502 user 0m42.943s 00:28:42.502 sys 0m2.007s 00:28:42.502 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:42.502 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:42.502 ************************************ 00:28:42.502 END TEST nvmf_shutdown_tc2 00:28:42.502 ************************************ 00:28:42.502 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:28:42.502 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:42.502 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:42.502 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:42.502 ************************************ 00:28:42.502 START TEST nvmf_shutdown_tc3 00:28:42.502 ************************************ 00:28:42.502 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:28:42.502 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:28:42.502 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:42.502 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:42.502 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:42.502 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:42.502 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:42.502 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:42.502 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:42.502 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:42.502 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:42.502 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:42.502 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:42.502 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:42.502 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:42.502 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:42.502 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:42.502 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:42.502 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:42.502 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:42.502 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:42.502 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:42.502 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:28:42.502 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:42.502 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:28:42.502 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:28:42.502 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:28:42.502 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:28:42.502 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:28:42.502 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:42.502 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:42.502 09:28:46 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:42.502 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:42.502 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:42.502 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:42.502 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:42.502 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:42.502 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:42.502 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:42.502 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:42.502 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:42.502 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:42.502 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:42.503 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:42.503 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:42.503 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:42.503 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:42.503 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:42.503 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:42.503 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:42.503 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:42.503 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:42.503 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:42.503 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:42.503 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:42.503 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:42.503 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:28:42.503 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:42.503 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:42.503 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:42.503 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:42.503 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:42.503 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:42.503 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:42.503 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:42.503 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:42.503 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:42.503 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:42.503 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:42.503 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:42.503 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:42.503 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:42.503 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:42.503 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:42.503 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:42.503 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:42.503 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:42.503 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:42.503 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:42.503 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:42.503 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:42.503 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:42.503 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:42.503 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:42.503 09:28:46 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:42.503 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:42.503 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:42.503 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:42.503 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:28:42.503 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:42.503 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:42.503 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:42.503 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:42.503 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:42.503 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:42.503 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:42.503 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:42.503 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:42.503 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:42.503 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:42.503 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:42.503 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:42.503 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:42.503 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:42.503 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:42.503 09:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:42.503 09:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:42.503 09:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:42.503 09:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:42.503 09:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:42.503 09:28:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:42.503 09:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:42.503 09:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:42.503 09:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:42.503 09:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:42.503 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:42.503 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.225 ms 00:28:42.503 00:28:42.503 --- 10.0.0.2 ping statistics --- 00:28:42.503 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:42.503 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:28:42.503 09:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:42.503 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:42.503 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:28:42.503 00:28:42.503 --- 10.0.0.1 ping statistics --- 00:28:42.503 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:42.503 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:28:42.503 09:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:42.503 09:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:28:42.503 09:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:42.503 09:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:42.503 09:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:42.503 09:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:42.503 09:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:42.503 09:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:42.503 09:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:42.503 09:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:42.503 09:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:42.503 09:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:42.503 09:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:42.503 09:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=3056242 00:28:42.503 09:28:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:42.503 09:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 3056242 00:28:42.503 09:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 3056242 ']' 00:28:42.503 09:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:42.504 09:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:42.504 09:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:42.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:42.504 09:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:42.504 09:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:42.504 [2024-11-17 09:28:47.249531] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:28:42.504 [2024-11-17 09:28:47.249669] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:42.504 [2024-11-17 09:28:47.398734] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:42.762 [2024-11-17 09:28:47.526929] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:42.762 [2024-11-17 09:28:47.526998] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:42.762 [2024-11-17 09:28:47.527021] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:42.762 [2024-11-17 09:28:47.527041] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:42.762 [2024-11-17 09:28:47.527059] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
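(A minimal sketch of what the nvmfappstart/waitforlisten steps above amount to: the binary, namespace and flags are copied from this run, but the polling loop is illustrative rather than the harness's exact implementation.)
  # start the NVMe-oF target inside the target network namespace
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
  nvmfpid=$!
  # block until the app has finished init and answers on its default RPC socket
  until ./scripts/rpc.py -s /var/tmp/spdk.sock framework_wait_init 2>/dev/null; do
    sleep 1
  done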
00:28:42.762 [2024-11-17 09:28:47.529624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:42.762 [2024-11-17 09:28:47.529676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:42.762 [2024-11-17 09:28:47.529716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:42.762 [2024-11-17 09:28:47.529724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:43.328 09:28:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:43.328 09:28:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:28:43.328 09:28:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:43.328 09:28:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:43.328 09:28:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:43.328 09:28:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:43.328 09:28:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:43.328 09:28:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.328 09:28:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:43.328 [2024-11-17 09:28:48.266972] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:43.328 09:28:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.328 09:28:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:43.328 09:28:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:43.328 09:28:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:43.328 09:28:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:43.328 09:28:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:43.328 09:28:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:43.328 09:28:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:43.328 09:28:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:43.328 09:28:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:43.328 09:28:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:43.328 09:28:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:43.328 09:28:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:28:43.328 09:28:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:43.328 09:28:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:43.328 09:28:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:43.328 09:28:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:43.328 09:28:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:43.328 09:28:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:43.328 09:28:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:43.329 09:28:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:43.329 09:28:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:43.329 09:28:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:43.329 09:28:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:43.329 09:28:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:43.329 09:28:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:43.329 09:28:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:43.329 09:28:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.329 09:28:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:43.587 Malloc1 00:28:43.587 [2024-11-17 09:28:48.413233] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:43.587 Malloc2 00:28:43.587 Malloc3 00:28:43.844 Malloc4 00:28:43.844 Malloc5 00:28:44.102 Malloc6 00:28:44.102 Malloc7 00:28:44.367 Malloc8 00:28:44.367 Malloc9 00:28:44.367 Malloc10 00:28:44.367 09:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.367 09:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:44.367 09:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:44.367 09:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:44.368 09:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=3056557 00:28:44.368 09:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 3056557 /var/tmp/bdevperf.sock 00:28:44.368 09:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 3056557 ']' 00:28:44.368 09:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r 
/var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:44.368 09:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:44.368 09:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:44.368 09:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:44.368 09:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:28:44.368 09:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:44.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:44.368 09:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:28:44.368 09:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:44.368 09:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:44.368 09:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:44.368 09:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:44.368 { 00:28:44.368 "params": { 00:28:44.368 "name": "Nvme$subsystem", 00:28:44.368 "trtype": "$TEST_TRANSPORT", 00:28:44.368 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:44.368 "adrfam": "ipv4", 00:28:44.368 "trsvcid": "$NVMF_PORT", 00:28:44.368 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:44.368 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:44.368 "hdgst": ${hdgst:-false}, 00:28:44.369 "ddgst": ${ddgst:-false} 00:28:44.369 }, 00:28:44.369 "method": "bdev_nvme_attach_controller" 00:28:44.369 } 00:28:44.369 EOF 00:28:44.369 )") 00:28:44.369 09:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:44.369 09:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:44.369 09:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:44.369 { 00:28:44.369 "params": { 00:28:44.369 "name": "Nvme$subsystem", 00:28:44.369 "trtype": "$TEST_TRANSPORT", 00:28:44.369 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:44.369 "adrfam": "ipv4", 00:28:44.369 "trsvcid": "$NVMF_PORT", 00:28:44.369 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:44.369 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:44.369 "hdgst": ${hdgst:-false}, 00:28:44.369 "ddgst": ${ddgst:-false} 00:28:44.369 }, 00:28:44.369 "method": "bdev_nvme_attach_controller" 00:28:44.369 } 00:28:44.369 EOF 00:28:44.369 )") 00:28:44.369 09:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:44.369 09:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:44.369 09:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:44.369 { 00:28:44.369 "params": { 00:28:44.369 "name": 
"Nvme$subsystem", 00:28:44.369 "trtype": "$TEST_TRANSPORT", 00:28:44.369 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:44.369 "adrfam": "ipv4", 00:28:44.369 "trsvcid": "$NVMF_PORT", 00:28:44.369 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:44.369 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:44.369 "hdgst": ${hdgst:-false}, 00:28:44.369 "ddgst": ${ddgst:-false} 00:28:44.369 }, 00:28:44.369 "method": "bdev_nvme_attach_controller" 00:28:44.369 } 00:28:44.369 EOF 00:28:44.369 )") 00:28:44.369 09:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:44.628 09:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:44.628 09:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:44.628 { 00:28:44.628 "params": { 00:28:44.628 "name": "Nvme$subsystem", 00:28:44.628 "trtype": "$TEST_TRANSPORT", 00:28:44.628 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:44.628 "adrfam": "ipv4", 00:28:44.628 "trsvcid": "$NVMF_PORT", 00:28:44.628 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:44.628 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:44.628 "hdgst": ${hdgst:-false}, 00:28:44.628 "ddgst": ${ddgst:-false} 00:28:44.628 }, 00:28:44.628 "method": "bdev_nvme_attach_controller" 00:28:44.628 } 00:28:44.628 EOF 00:28:44.628 )") 00:28:44.628 09:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:44.628 09:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:44.628 09:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:44.628 { 00:28:44.628 "params": { 00:28:44.628 "name": "Nvme$subsystem", 00:28:44.628 "trtype": "$TEST_TRANSPORT", 00:28:44.628 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:44.628 "adrfam": "ipv4", 00:28:44.628 "trsvcid": "$NVMF_PORT", 00:28:44.628 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:44.628 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:44.628 "hdgst": ${hdgst:-false}, 00:28:44.628 "ddgst": ${ddgst:-false} 00:28:44.628 }, 00:28:44.628 "method": "bdev_nvme_attach_controller" 00:28:44.628 } 00:28:44.628 EOF 00:28:44.628 )") 00:28:44.628 09:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:44.628 09:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:44.628 09:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:44.628 { 00:28:44.628 "params": { 00:28:44.628 "name": "Nvme$subsystem", 00:28:44.628 "trtype": "$TEST_TRANSPORT", 00:28:44.628 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:44.628 "adrfam": "ipv4", 00:28:44.628 "trsvcid": "$NVMF_PORT", 00:28:44.628 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:44.628 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:44.628 "hdgst": ${hdgst:-false}, 00:28:44.628 "ddgst": ${ddgst:-false} 00:28:44.628 }, 00:28:44.628 "method": "bdev_nvme_attach_controller" 00:28:44.628 } 00:28:44.628 EOF 00:28:44.628 )") 00:28:44.628 09:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:44.629 09:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 
00:28:44.629 09:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:44.629 { 00:28:44.629 "params": { 00:28:44.629 "name": "Nvme$subsystem", 00:28:44.629 "trtype": "$TEST_TRANSPORT", 00:28:44.629 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:44.629 "adrfam": "ipv4", 00:28:44.629 "trsvcid": "$NVMF_PORT", 00:28:44.629 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:44.629 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:44.629 "hdgst": ${hdgst:-false}, 00:28:44.629 "ddgst": ${ddgst:-false} 00:28:44.629 }, 00:28:44.629 "method": "bdev_nvme_attach_controller" 00:28:44.629 } 00:28:44.629 EOF 00:28:44.629 )") 00:28:44.629 09:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:44.629 09:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:44.629 09:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:44.629 { 00:28:44.629 "params": { 00:28:44.629 "name": "Nvme$subsystem", 00:28:44.629 "trtype": "$TEST_TRANSPORT", 00:28:44.629 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:44.629 "adrfam": "ipv4", 00:28:44.629 "trsvcid": "$NVMF_PORT", 00:28:44.629 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:44.629 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:44.629 "hdgst": ${hdgst:-false}, 00:28:44.629 "ddgst": ${ddgst:-false} 00:28:44.629 }, 00:28:44.629 "method": "bdev_nvme_attach_controller" 00:28:44.629 } 00:28:44.629 EOF 00:28:44.629 )") 00:28:44.629 09:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:44.629 09:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:44.629 09:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:44.629 { 00:28:44.629 "params": { 00:28:44.629 "name": "Nvme$subsystem", 00:28:44.629 "trtype": "$TEST_TRANSPORT", 00:28:44.629 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:44.629 "adrfam": "ipv4", 00:28:44.629 "trsvcid": "$NVMF_PORT", 00:28:44.629 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:44.629 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:44.629 "hdgst": ${hdgst:-false}, 00:28:44.629 "ddgst": ${ddgst:-false} 00:28:44.629 }, 00:28:44.629 "method": "bdev_nvme_attach_controller" 00:28:44.629 } 00:28:44.629 EOF 00:28:44.629 )") 00:28:44.629 09:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:44.629 09:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:44.629 09:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:44.629 { 00:28:44.629 "params": { 00:28:44.629 "name": "Nvme$subsystem", 00:28:44.629 "trtype": "$TEST_TRANSPORT", 00:28:44.629 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:44.629 "adrfam": "ipv4", 00:28:44.629 "trsvcid": "$NVMF_PORT", 00:28:44.629 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:44.629 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:44.629 "hdgst": ${hdgst:-false}, 00:28:44.629 "ddgst": ${ddgst:-false} 00:28:44.629 }, 00:28:44.629 "method": "bdev_nvme_attach_controller" 00:28:44.629 } 00:28:44.629 EOF 00:28:44.629 )") 00:28:44.629 09:28:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:44.629 09:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:28:44.629 09:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:28:44.629 09:28:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:44.629 "params": { 00:28:44.629 "name": "Nvme1", 00:28:44.629 "trtype": "tcp", 00:28:44.629 "traddr": "10.0.0.2", 00:28:44.629 "adrfam": "ipv4", 00:28:44.629 "trsvcid": "4420", 00:28:44.629 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:44.629 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:44.629 "hdgst": false, 00:28:44.629 "ddgst": false 00:28:44.629 }, 00:28:44.629 "method": "bdev_nvme_attach_controller" 00:28:44.629 },{ 00:28:44.629 "params": { 00:28:44.629 "name": "Nvme2", 00:28:44.629 "trtype": "tcp", 00:28:44.629 "traddr": "10.0.0.2", 00:28:44.629 "adrfam": "ipv4", 00:28:44.629 "trsvcid": "4420", 00:28:44.629 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:44.629 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:44.629 "hdgst": false, 00:28:44.629 "ddgst": false 00:28:44.629 }, 00:28:44.629 "method": "bdev_nvme_attach_controller" 00:28:44.629 },{ 00:28:44.629 "params": { 00:28:44.629 "name": "Nvme3", 00:28:44.629 "trtype": "tcp", 00:28:44.629 "traddr": "10.0.0.2", 00:28:44.629 "adrfam": "ipv4", 00:28:44.629 "trsvcid": "4420", 00:28:44.629 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:44.629 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:44.629 "hdgst": false, 00:28:44.629 "ddgst": false 00:28:44.629 }, 00:28:44.629 "method": "bdev_nvme_attach_controller" 00:28:44.629 },{ 00:28:44.629 "params": { 00:28:44.629 "name": "Nvme4", 00:28:44.629 "trtype": "tcp", 00:28:44.629 "traddr": "10.0.0.2", 00:28:44.629 "adrfam": "ipv4", 00:28:44.629 "trsvcid": "4420", 00:28:44.629 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:44.629 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:44.629 "hdgst": false, 00:28:44.629 "ddgst": false 00:28:44.629 }, 00:28:44.629 "method": "bdev_nvme_attach_controller" 00:28:44.629 },{ 00:28:44.629 "params": { 00:28:44.629 "name": "Nvme5", 00:28:44.629 "trtype": "tcp", 00:28:44.629 "traddr": "10.0.0.2", 00:28:44.629 "adrfam": "ipv4", 00:28:44.629 "trsvcid": "4420", 00:28:44.629 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:44.629 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:44.629 "hdgst": false, 00:28:44.629 "ddgst": false 00:28:44.629 }, 00:28:44.629 "method": "bdev_nvme_attach_controller" 00:28:44.629 },{ 00:28:44.629 "params": { 00:28:44.629 "name": "Nvme6", 00:28:44.629 "trtype": "tcp", 00:28:44.629 "traddr": "10.0.0.2", 00:28:44.629 "adrfam": "ipv4", 00:28:44.629 "trsvcid": "4420", 00:28:44.629 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:44.629 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:44.629 "hdgst": false, 00:28:44.629 "ddgst": false 00:28:44.629 }, 00:28:44.629 "method": "bdev_nvme_attach_controller" 00:28:44.629 },{ 00:28:44.629 "params": { 00:28:44.629 "name": "Nvme7", 00:28:44.629 "trtype": "tcp", 00:28:44.629 "traddr": "10.0.0.2", 00:28:44.629 "adrfam": "ipv4", 00:28:44.629 "trsvcid": "4420", 00:28:44.629 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:44.629 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:44.629 "hdgst": false, 00:28:44.629 "ddgst": false 00:28:44.629 }, 00:28:44.629 "method": "bdev_nvme_attach_controller" 00:28:44.629 },{ 00:28:44.629 "params": { 00:28:44.629 "name": "Nvme8", 00:28:44.629 "trtype": "tcp", 
00:28:44.629 "traddr": "10.0.0.2", 00:28:44.629 "adrfam": "ipv4", 00:28:44.629 "trsvcid": "4420", 00:28:44.629 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:44.629 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:44.629 "hdgst": false, 00:28:44.629 "ddgst": false 00:28:44.629 }, 00:28:44.629 "method": "bdev_nvme_attach_controller" 00:28:44.629 },{ 00:28:44.629 "params": { 00:28:44.629 "name": "Nvme9", 00:28:44.629 "trtype": "tcp", 00:28:44.629 "traddr": "10.0.0.2", 00:28:44.629 "adrfam": "ipv4", 00:28:44.629 "trsvcid": "4420", 00:28:44.629 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:44.629 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:44.629 "hdgst": false, 00:28:44.629 "ddgst": false 00:28:44.629 }, 00:28:44.629 "method": "bdev_nvme_attach_controller" 00:28:44.629 },{ 00:28:44.629 "params": { 00:28:44.629 "name": "Nvme10", 00:28:44.629 "trtype": "tcp", 00:28:44.629 "traddr": "10.0.0.2", 00:28:44.629 "adrfam": "ipv4", 00:28:44.629 "trsvcid": "4420", 00:28:44.629 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:44.629 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:44.629 "hdgst": false, 00:28:44.629 "ddgst": false 00:28:44.629 }, 00:28:44.629 "method": "bdev_nvme_attach_controller" 00:28:44.629 }' 00:28:44.629 [2024-11-17 09:28:49.452116] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:28:44.629 [2024-11-17 09:28:49.452270] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3056557 ] 00:28:44.629 [2024-11-17 09:28:49.594846] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:44.887 [2024-11-17 09:28:49.722847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:46.786 Running I/O for 10 seconds... 
00:28:47.352 09:28:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:47.352 09:28:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:28:47.352 09:28:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:47.352 09:28:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.352 09:28:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:47.352 09:28:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.352 09:28:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:47.352 09:28:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:47.352 09:28:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:47.352 09:28:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:28:47.352 09:28:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:28:47.352 09:28:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:28:47.352 09:28:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:28:47.352 09:28:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:47.352 09:28:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:47.352 09:28:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:47.352 09:28:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.352 09:28:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:47.352 09:28:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.352 09:28:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:28:47.352 09:28:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:28:47.352 09:28:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:28:47.611 09:28:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:28:47.611 09:28:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:47.611 09:28:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:47.611 09:28:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops'
00:28:47.611 09:28:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:47.611 09:28:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:28:47.612 09:28:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:47.612 09:28:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131
00:28:47.612 09:28:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']'
00:28:47.612 09:28:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0
00:28:47.612 09:28:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break
00:28:47.612 09:28:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0
00:28:47.612 09:28:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 3056242
00:28:47.612 09:28:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 3056242 ']'
00:28:47.612 09:28:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 3056242
00:28:47.612 09:28:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname
00:28:47.612 09:28:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:47.612 09:28:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3056242
00:28:47.612 09:28:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:28:47.612 09:28:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:28:47.612 09:28:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3056242'
00:28:47.612 killing process with pid 3056242
00:28:47.612 09:28:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 3056242
00:28:47.612 09:28:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 3056242
00:28:47.612 [2024-11-17 09:28:52.606957] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set
[... the tcp.c:1773 recv-state error above repeats for tqpair=0x618000007480 through 2024-11-17 09:28:52.608292 ...]
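The flood of tcp.c:1773 messages above is the TCP transport complaining that a qpair's receive state is being set to the value it already holds while the just-killed target's connections are torn down. Below is a minimal, self-contained sketch of that kind of idempotence guard; the struct, enum values, and function names are illustrative stand-ins, not the actual SPDK tcp.c code.

/* Illustrative only: a simplified guard of the kind that produces the
 * "recv state ... is same with the state to be set" message above.
 * Types, names, and enum values are hypothetical, not SPDK's. */
#include <stdio.h>

enum recv_state { RECV_STATE_AWAIT_PDU_READY = 0, RECV_STATE_QUIESCING = 6 };

struct qpair {
    void *addr;                  /* printed as tqpair=0x... in the log */
    enum recv_state recv_state;
};

static void set_recv_state(struct qpair *q, enum recv_state next)
{
    if (q->recv_state == next) {
        /* Asked to enter the state we are already in: log and bail out early. */
        fprintf(stderr,
                "*ERROR*: The recv state of tqpair=%p is same with the state(%d) to be set\n",
                q->addr, (int)next);
        return;
    }
    q->recv_state = next;
}

int main(void)
{
    struct qpair q = { .addr = (void *)0x618000007480UL, .recv_state = RECV_STATE_QUIESCING };

    /* Every disconnect-path caller that retries the transition hits the guard,
     * which is why the same message repeats for the same tqpair address. */
    set_recv_state(&q, RECV_STATE_QUIESCING);
    set_recv_state(&q, RECV_STATE_QUIESCING);
    return 0;
}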
00:28:47.612 [2024-11-17 09:28:52.611002] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set
[... the same tcp.c:1773 recv-state error repeats for tqpair=0x618000009880 through 2024-11-17 09:28:52.612290 ...]
00:28:47.613 [2024-11-17 09:28:52.620339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:47.613 [2024-11-17 09:28:52.620413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:47.613 [2024-11-17 09:28:52.620470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:47.613 [2024-11-17 09:28:52.620497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
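Killing the target deletes its submission queues, so every WRITE still in flight on qid:1 is completed back to the initiator with the generic NVMe status "Command Aborted due to SQ Deletion"; the "(00/08)" in each completion notice is the (status code type / status code) pair. The short self-contained sketch below shows how that status would be recognized; the struct and constant names are local stand-ins chosen to match the NVMe spec values, not SPDK API types.

/* Illustrative only: decode the "(00/08)" status shown in the completions above.
 * The values follow the NVMe completion status field; the struct is a local
 * stand-in, not an SPDK type. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define SCT_GENERIC            0x0  /* status code type 0: generic command status */
#define SC_ABORTED_SQ_DELETION 0x8  /* generic status 0x08: aborted due to SQ deletion */

struct cpl_status {
    uint8_t sct; /* status code type -> "00" in "(00/08)" */
    uint8_t sc;  /* status code      -> "08" in "(00/08)" */
};

static bool aborted_by_sq_deletion(const struct cpl_status *st)
{
    return st->sct == SCT_GENERIC && st->sc == SC_ABORTED_SQ_DELETION;
}

int main(void)
{
    struct cpl_status st = { .sct = 0x0, .sc = 0x8 };

    if (aborted_by_sq_deletion(&st)) {
        /* Matches the "ABORTED - SQ DELETION (00/08)" notices in the log. */
        printf("I/O aborted because its submission queue was deleted\n");
    }
    return 0;
}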
[... the WRITE command / ABORTED - SQ DELETION (00/08) notice pairs continue in the same pattern for sqid:1 cid:2 through cid:63 (lba:24832 through lba:32640, len:128 each) ...]
[... interleaved with those notices, the tcp.c:1773 recv-state error repeats for tqpair=0x618000008080 from 2024-11-17 09:28:52.620956 through 09:28:52.622062 ...]
00:28:47.891 [2024-11-17 09:28:52.623799] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1
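The -6 in the CQ transport error above is a negated errno: 6 is ENXIO, "No such device or address", which is what the completion-polling path reports once the killed target's end of the TCP connection is gone. A tiny illustrative check of that errno-to-message pairing, assuming the usual negative-errno return convention; this is not SPDK code.

/* Illustrative only: the "-6 (No such device or address)" in the log is a
 * negated errno. This prints the same pairing using the C library. */
#include <errno.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    int rc = -ENXIO; /* what a failed completion poll typically returns: -6 */

    printf("CQ transport error %d (%s)\n", rc, strerror(-rc));
    /* Expected output on Linux: CQ transport error -6 (No such device or address) */
    return 0;
}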
[... the tcp.c:1773 recv-state error repeats for tqpair=0x618000008480 from 2024-11-17 09:28:52.623716 through 09:28:52.624972 ...]
[... the tcp.c:1773 recv-state error repeats for tqpair=0x618000008880 from 2024-11-17 09:28:52.627521 through 09:28:52.628590 ...]
00:28:47.893 [2024-11-17 09:28:52.628609] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:28:47.893 [2024-11-17 09:28:52.628628] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:28:47.893 [2024-11-17 09:28:52.628647] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:28:47.893 [2024-11-17 09:28:52.628665] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:28:47.893 [2024-11-17 09:28:52.628705] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:28:47.893 [2024-11-17 09:28:52.628723] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:28:47.893 [2024-11-17 09:28:52.628743] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:28:47.893 [2024-11-17 09:28:52.628761] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:28:47.893 [2024-11-17 09:28:52.628779] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:28:47.893 [2024-11-17 09:28:52.628797] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:28:47.893 [2024-11-17 09:28:52.628816] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:28:47.893 [2024-11-17 09:28:52.628838] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:28:47.893 [2024-11-17 09:28:52.628857] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:28:47.893 [2024-11-17 09:28:52.628876] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:28:47.893 [2024-11-17 09:28:52.628895] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:28:47.893 [2024-11-17 09:28:52.632285] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:28:47.893 [2024-11-17 09:28:52.632327] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:28:47.893 [2024-11-17 09:28:52.632350] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:28:47.893 [2024-11-17 09:28:52.632394] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:28:47.893 [2024-11-17 09:28:52.632417] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:28:47.893 [2024-11-17 09:28:52.632437] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 
00:28:47.893 [2024-11-17 09:28:52.632457] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:28:47.893 [2024-11-17 09:28:52.632476] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:28:47.893 [2024-11-17 09:28:52.632495] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:28:47.893 [2024-11-17 09:28:52.632514] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:28:47.893 [2024-11-17 09:28:52.632533] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:28:47.893 [2024-11-17 09:28:52.632552] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:28:47.893 [2024-11-17 09:28:52.632571] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:28:47.893 [2024-11-17 09:28:52.632590] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:28:47.893 [2024-11-17 09:28:52.632609] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:28:47.893 [2024-11-17 09:28:52.632628] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:28:47.893 [2024-11-17 09:28:52.632647] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:28:47.893 [2024-11-17 09:28:52.632666] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:28:47.893 [2024-11-17 09:28:52.632712] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:28:47.893 [2024-11-17 09:28:52.632731] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:28:47.893 [2024-11-17 09:28:52.632750] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:28:47.893 [2024-11-17 09:28:52.632768] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:28:47.893 [2024-11-17 09:28:52.632801] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:28:47.893 [2024-11-17 09:28:52.632822] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:28:47.893 [2024-11-17 09:28:52.632841] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:28:47.893 [2024-11-17 09:28:52.632859] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:28:47.893 [2024-11-17 09:28:52.632877] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 
00:28:47.893 [2024-11-17 09:28:52.632896] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:28:47.893 [2024-11-17 09:28:52.632915] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:28:47.893 [2024-11-17 09:28:52.632933] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:28:47.893 [2024-11-17 09:28:52.632951] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:28:47.893 [2024-11-17 09:28:52.632970] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:28:47.893 [2024-11-17 09:28:52.632988] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:28:47.893 [2024-11-17 09:28:52.633007] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:28:47.893 [2024-11-17 09:28:52.633026] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:28:47.893 [2024-11-17 09:28:52.633045] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:28:47.893 [2024-11-17 09:28:52.633063] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:28:47.893 [2024-11-17 09:28:52.633096] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:28:47.893 [2024-11-17 09:28:52.633198] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:28:47.893 [2024-11-17 09:28:52.633227] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:28:47.893 [2024-11-17 09:28:52.633247] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:28:47.893 [2024-11-17 09:28:52.633266] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:28:47.893 [2024-11-17 09:28:52.633286] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:28:47.893 [2024-11-17 09:28:52.633305] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:28:47.893 [2024-11-17 09:28:52.633324] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:28:47.893 [2024-11-17 09:28:52.633343] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:28:47.893 [2024-11-17 09:28:52.633362] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:28:47.893 [2024-11-17 09:28:52.633390] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 
00:28:47.893 [2024-11-17 09:28:52.633426] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:28:47.893 [2024-11-17 09:28:52.633446] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:28:47.893 [2024-11-17 09:28:52.633466] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:28:47.893 [2024-11-17 09:28:52.633485] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:28:47.893 [2024-11-17 09:28:52.633503] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:28:47.893 [2024-11-17 09:28:52.633523] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:28:47.893 [2024-11-17 09:28:52.633542] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:28:47.893 [2024-11-17 09:28:52.633560] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:28:47.893 [2024-11-17 09:28:52.633579] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:28:47.893 [2024-11-17 09:28:52.633598] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:28:47.893 [2024-11-17 09:28:52.633616] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:28:47.894 [2024-11-17 09:28:52.633635] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:28:47.894 [2024-11-17 09:28:52.633654] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:28:47.894 [2024-11-17 09:28:52.633693] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:28:47.894 [2024-11-17 09:28:52.633713] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:28:47.894 [2024-11-17 09:28:52.636160] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:28:47.894 [2024-11-17 09:28:52.636195] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:28:47.894 [2024-11-17 09:28:52.636215] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:28:47.894 [2024-11-17 09:28:52.636234] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:28:47.894 [2024-11-17 09:28:52.636253] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:28:47.894 [2024-11-17 09:28:52.636272] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 
00:28:47.894 [2024-11-17 09:28:52.636290] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:28:47.894 [2024-11-17 09:28:52.636309] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:28:47.894 [2024-11-17 09:28:52.636328] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:28:47.894 [2024-11-17 09:28:52.636346] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:28:47.894 [2024-11-17 09:28:52.636364] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:28:47.894 [2024-11-17 09:28:52.636410] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:28:47.894 [2024-11-17 09:28:52.636436] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:28:47.894 [2024-11-17 09:28:52.636456] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:28:47.894 [2024-11-17 09:28:52.636475] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:28:47.894 [2024-11-17 09:28:52.636495] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:28:47.894 [2024-11-17 09:28:52.636514] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:28:47.894 [2024-11-17 09:28:52.636533] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:28:47.894 [2024-11-17 09:28:52.636553] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:28:47.894 [2024-11-17 09:28:52.636571] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:28:47.894 [2024-11-17 09:28:52.636591] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:28:47.894 [2024-11-17 09:28:52.636613] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:28:47.894 [2024-11-17 09:28:52.636632] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:28:47.894 [2024-11-17 09:28:52.636651] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:28:47.894 1440.00 IOPS, 90.00 MiB/s [2024-11-17T08:28:52.907Z] [2024-11-17 09:28:52.636678] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:28:47.894 [2024-11-17 09:28:52.636697] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:28:47.894 [2024-11-17 09:28:52.636717] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x618000009080 is same with the state(6) to be set
00:28:47.894 [2024-11-17 09:28:52.636736] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set
00:28:47.894 [2024-11-17 09:28:52.636756] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set
00:28:47.894 [2024-11-17 09:28:52.636776] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set
00:28:47.894 [2024-11-17 09:28:52.636766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:47.894 [2024-11-17 09:28:52.636799] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set
00:28:47.894 [2024-11-17 09:28:52.636807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:47.894 [2024-11-17 09:28:52.636819] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set
00:28:47.894 [2024-11-17 09:28:52.636835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:28:47.894 [2024-11-17 09:28:52.636839] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set
00:28:47.894 [2024-11-17 09:28:52.636859] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set
00:28:47.894 [2024-11-17 09:28:52.636861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:47.894 [2024-11-17 09:28:52.636889] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set
00:28:47.894 [2024-11-17 09:28:52.636894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:28:47.894 [2024-11-17 09:28:52.636908] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set
00:28:47.894 [2024-11-17 09:28:52.636917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:47.894 [2024-11-17 09:28:52.636927] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set
00:28:47.894 [2024-11-17 09:28:52.636940] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:28:47.894 [2024-11-17 09:28:52.636947] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set
00:28:47.894 [2024-11-17 09:28:52.636962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:47.894 [2024-11-17 09:28:52.636965] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set
00:28:47.894 [2024-11-17 09:28:52.636984] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set
00:28:47.894 [2024-11-17 09:28:52.636984] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f6b00 is same with the state(6) to be set
00:28:47.894 [2024-11-17 09:28:52.637006] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set
00:28:47.894 [2024-11-17 09:28:52.637025] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set
00:28:47.894 [2024-11-17 09:28:52.637043] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set
00:28:47.894 [2024-11-17 09:28:52.637062] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set
00:28:47.894 [2024-11-17 09:28:52.637080] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set
00:28:47.894 [2024-11-17 09:28:52.637089] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:47.894 [2024-11-17 09:28:52.637099] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set
00:28:47.895 [2024-11-17 09:28:52.637118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:47.895 [2024-11-17 09:28:52.637119] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set
00:28:47.895 [2024-11-17 09:28:52.637141] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set
00:28:47.895 [2024-11-17 09:28:52.637144] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:28:47.895 [2024-11-17 09:28:52.637159] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set
00:28:47.895 [2024-11-17 09:28:52.637166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:47.895 [2024-11-17 09:28:52.637178] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set
00:28:47.895 [2024-11-17 09:28:52.637189] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:28:47.895 [2024-11-17 09:28:52.637197] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set
00:28:47.895 [2024-11-17 09:28:52.637215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:47.895 [2024-11-17 09:28:52.637216] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set
00:28:47.895 [2024-11-17 09:28:52.637238] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set
00:28:47.895 [2024-11-17 09:28:52.637241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:28:47.895 [2024-11-17 09:28:52.637257] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set
00:28:47.895 [2024-11-17 09:28:52.637262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:47.895 [2024-11-17 09:28:52.637276] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set
00:28:47.895 [2024-11-17 09:28:52.637283] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7f00 is same with the state(6) to be set
00:28:47.895 [2024-11-17 09:28:52.637295] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set
00:28:47.895 [2024-11-17 09:28:52.637314] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set
00:28:47.895 [2024-11-17 09:28:52.637333] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set
00:28:47.895 [2024-11-17 09:28:52.637352] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set
00:28:47.895 [2024-11-17 09:28:52.637361] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:47.895 [2024-11-17 09:28:52.637379] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set
00:28:47.895 [2024-11-17 09:28:52.637397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:47.895 [2024-11-17 09:28:52.637400] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set
00:28:47.895 [2024-11-17 09:28:52.637421] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set
00:28:47.895 [2024-11-17 09:28:52.637424] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:28:47.895 [2024-11-17 09:28:52.637440] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set
00:28:47.895 [2024-11-17 09:28:52.637446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:47.895 [2024-11-17 09:28:52.637468] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:28:47.895 [2024-11-17 09:28:52.637490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:47.895 [2024-11-17 09:28:52.637513] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:28:47.895 [2024-11-17 09:28:52.637535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.895 [2024-11-17 09:28:52.637560] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f4d00 is same with the state(6) to be set 00:28:47.895 [2024-11-17 09:28:52.637631] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:47.895 [2024-11-17 09:28:52.637673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.895 [2024-11-17 09:28:52.637698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:47.895 [2024-11-17 09:28:52.637720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.895 [2024-11-17 09:28:52.637742] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:47.895 [2024-11-17 09:28:52.637764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.895 [2024-11-17 09:28:52.637788] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:47.895 [2024-11-17 09:28:52.637810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.895 [2024-11-17 09:28:52.637831] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f5700 is same with the state(6) to be set 00:28:47.895 [2024-11-17 09:28:52.637901] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:47.895 [2024-11-17 09:28:52.637929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.895 [2024-11-17 09:28:52.637953] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:47.895 [2024-11-17 09:28:52.637975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.895 [2024-11-17 09:28:52.637997] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:47.895 [2024-11-17 09:28:52.638019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.895 [2024-11-17 09:28:52.638042] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:47.895 [2024-11-17 09:28:52.638064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.895 [2024-11-17 09:28:52.638084] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f6100 is same with the state(6) to be set 00:28:47.895 [2024-11-17 09:28:52.638156] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:47.895 [2024-11-17 09:28:52.638185] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.895 [2024-11-17 09:28:52.638209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:47.895 [2024-11-17 09:28:52.638230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.895 [2024-11-17 09:28:52.638253] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:47.895 [2024-11-17 09:28:52.638274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.895 [2024-11-17 09:28:52.638297] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:47.895 [2024-11-17 09:28:52.638324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.895 [2024-11-17 09:28:52.638346] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2f00 is same with the state(6) to be set 00:28:47.895 [2024-11-17 09:28:52.638420] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:47.895 [2024-11-17 09:28:52.638449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.895 [2024-11-17 09:28:52.638474] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:47.895 [2024-11-17 09:28:52.638495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.895 [2024-11-17 09:28:52.638518] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:47.895 [2024-11-17 09:28:52.638550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.895 [2024-11-17 09:28:52.638574] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:47.895 [2024-11-17 09:28:52.638596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.895 [2024-11-17 09:28:52.638616] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f3900 is same with the state(6) to be set 00:28:47.895 [2024-11-17 09:28:52.638676] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:47.895 [2024-11-17 09:28:52.638703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.895 [2024-11-17 09:28:52.638727] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:47.895 [2024-11-17 09:28:52.638748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:47.895 [2024-11-17 09:28:52.638770] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:28:47.895 [2024-11-17 09:28:52.638791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:47.895 [2024-11-17 09:28:52.638814] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:28:47.895 [2024-11-17 09:28:52.638835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:47.895 [2024-11-17 09:28:52.638855] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f4300 is same with the state(6) to be set
00:28:47.895 [2024-11-17 09:28:52.638899] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set
00:28:47.896 [2024-11-17 09:28:52.638925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:47.896 [2024-11-17 09:28:52.638937] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set
00:28:47.896 [2024-11-17 09:28:52.638953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:47.896 [2024-11-17 09:28:52.638959] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set
00:28:47.896 [2024-11-17 09:28:52.638978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:28:47.896 [2024-11-17 09:28:52.638985] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set
00:28:47.896 [2024-11-17 09:28:52.638999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:47.896 [2024-11-17 09:28:52.639006] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set
00:28:47.896 [2024-11-17 09:28:52.639022] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:28:47.896 [2024-11-17 09:28:52.639027] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set
00:28:47.896 [2024-11-17 09:28:52.639043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:47.896 [2024-11-17 09:28:52.639047] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set
00:28:47.896 [2024-11-17 09:28:52.639065] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:28:47.896 [2024-11-17 09:28:52.639066] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set
00:28:47.896 [2024-11-17 09:28:52.639088] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set
00:28:47.896 [2024-11-17 09:28:52.639089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:47.896 [2024-11-17 09:28:52.639109] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set
00:28:47.896 [2024-11-17 09:28:52.639111] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:28:47.896 [2024-11-17 09:28:52.639131] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set
00:28:47.896 [2024-11-17 09:28:52.639150] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set
00:28:47.896 [2024-11-17 09:28:52.639171] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set
00:28:47.896 [2024-11-17 09:28:52.639190] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set
00:28:47.896 [2024-11-17 09:28:52.639210] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set
00:28:47.896 [2024-11-17 09:28:52.639229] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set
00:28:47.896 [2024-11-17 09:28:52.639248] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set
00:28:47.896 [2024-11-17 09:28:52.639267] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set
00:28:47.896 [2024-11-17 09:28:52.639287] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set
00:28:47.896 [2024-11-17 09:28:52.639306] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set
00:28:47.896 [2024-11-17 09:28:52.639326] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set
00:28:47.896 [2024-11-17 09:28:52.639345] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set
00:28:47.896 [2024-11-17 09:28:52.639380] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set
00:28:47.896 [2024-11-17 09:28:52.639366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:47.896 [2024-11-17 09:28:52.639408] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set
00:28:47.896 [2024-11-17 09:28:52.639412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:47.896 [2024-11-17 09:28:52.639428] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set
00:28:47.896 [2024-11-17 09:28:52.639447] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set
00:28:47.896 [2024-11-17 09:28:52.639457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:47.896 [2024-11-17 09:28:52.639466] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set
00:28:47.896 [2024-11-17 09:28:52.639484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:47.896 [2024-11-17 09:28:52.639486] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set
00:28:47.896 [2024-11-17 09:28:52.639507] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set
00:28:47.896 [2024-11-17 09:28:52.639513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:47.896 [2024-11-17 09:28:52.639526] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set
00:28:47.896 [2024-11-17 09:28:52.639537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:47.896 [2024-11-17 09:28:52.639546] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set
00:28:47.896 [2024-11-17 09:28:52.639563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:47.896 [2024-11-17 09:28:52.639565] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set
00:28:47.896 [2024-11-17 09:28:52.639588] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set
00:28:47.896 [2024-11-17 09:28:52.639588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:47.896 [2024-11-17 09:28:52.639610] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set
00:28:47.896 [2024-11-17 09:28:52.639617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:47.896 [2024-11-17 09:28:52.639630] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set
00:28:47.896 [2024-11-17 09:28:52.639640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:47.896 [2024-11-17 09:28:52.639665] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set
00:28:47.896 [2024-11-17 09:28:52.639685] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set
00:28:47.896 [2024-11-17 09:28:52.639684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:47.896 [2024-11-17 09:28:52.639710] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set
00:28:47.896 [2024-11-17 09:28:52.639712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:47.896 [2024-11-17 09:28:52.639731] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set
00:28:47.896 [2024-11-17 09:28:52.639738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:47.896 [2024-11-17 09:28:52.639751] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set
00:28:47.896 [2024-11-17 09:28:52.639760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:47.896 [2024-11-17 09:28:52.639770] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set
00:28:47.896 [2024-11-17 09:28:52.639786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:47.896 [2024-11-17 09:28:52.639789] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set
00:28:47.896 [2024-11-17 09:28:52.639809] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set
00:28:47.896 [2024-11-17 09:28:52.639810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:47.896 [2024-11-17 09:28:52.639830] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set
00:28:47.896 [2024-11-17 09:28:52.639837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:47.896 [2024-11-17 09:28:52.639849] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set
00:28:47.896 [2024-11-17 09:28:52.639859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:47.896 [2024-11-17 09:28:52.639869] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set
00:28:47.896 [2024-11-17 09:28:52.639885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:47.896 [2024-11-17 09:28:52.639888] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set
00:28:47.896 [2024-11-17 09:28:52.639908] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set
00:28:47.896 [2024-11-17 09:28:52.639909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:47.896 [2024-11-17 09:28:52.639936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:47.896 [2024-11-17 09:28:52.639939] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set
00:28:47.896 [2024-11-17 09:28:52.639958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:47.896 [2024-11-17 09:28:52.639960] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set
00:28:47.896 [2024-11-17 09:28:52.639980] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set
00:28:47.897 [2024-11-17 09:28:52.639985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:47.897 [2024-11-17 09:28:52.639999] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set
00:28:47.897 [2024-11-17 09:28:52.640013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:47.897 [2024-11-17 09:28:52.640018] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set
00:28:47.897 [2024-11-17 09:28:52.640036] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set
00:28:47.897 [2024-11-17 09:28:52.640039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:47.897 [2024-11-17 09:28:52.640055] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set
00:28:47.897 [2024-11-17 09:28:52.640062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:47.897 [2024-11-17 09:28:52.640075] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set
00:28:47.897 [2024-11-17 09:28:52.640087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:47.897 [2024-11-17 09:28:52.640093] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set
00:28:47.897 [2024-11-17 09:28:52.640112] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set
00:28:47.897 [2024-11-17 09:28:52.640111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:47.897 [2024-11-17 09:28:52.640132] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set
00:28:47.897 [2024-11-17 09:28:52.640140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:47.897 [2024-11-17 09:28:52.640151] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set
00:28:47.897 [2024-11-17 09:28:52.640162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:47.897 [2024-11-17 09:28:52.640169] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set
00:28:47.897 [2024-11-17 09:28:52.640188] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set
00:28:47.897 [2024-11-17 09:28:52.640187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:47.897 [2024-11-17 09:28:52.640209] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set
00:28:47.897 [2024-11-17 09:28:52.640212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:47.897 [2024-11-17 09:28:52.640238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:47.897 [2024-11-17 09:28:52.640260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:47.897 [2024-11-17 09:28:52.640302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:47.897 [2024-11-17 09:28:52.640326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:47.897 [2024-11-17 09:28:52.640357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:47.897 [2024-11-17 09:28:52.640390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:47.897 [2024-11-17 09:28:52.640417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:47.897 [2024-11-17 09:28:52.640440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:47.897 [2024-11-17 09:28:52.640466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:47.897 [2024-11-17 09:28:52.640489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:47.897 [2024-11-17 09:28:52.640515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:47.897 [2024-11-17 09:28:52.640538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:47.897 [2024-11-17 09:28:52.640564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:47.897 [2024-11-17 09:28:52.640586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:47.897 [2024-11-17
09:28:52.640612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.897 [2024-11-17 09:28:52.640635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.897 [2024-11-17 09:28:52.640661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.897 [2024-11-17 09:28:52.640684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.897 [2024-11-17 09:28:52.640710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.897 [2024-11-17 09:28:52.640732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.897 [2024-11-17 09:28:52.640759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.897 [2024-11-17 09:28:52.640782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.897 [2024-11-17 09:28:52.640807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.897 [2024-11-17 09:28:52.640830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.897 [2024-11-17 09:28:52.640856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.897 [2024-11-17 09:28:52.640879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.897 [2024-11-17 09:28:52.640926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.897 [2024-11-17 09:28:52.640959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.897 [2024-11-17 09:28:52.640987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.897 [2024-11-17 09:28:52.641014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.897 [2024-11-17 09:28:52.641041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.897 [2024-11-17 09:28:52.641063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.897 [2024-11-17 09:28:52.641088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.897 [2024-11-17 09:28:52.641110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.897 [2024-11-17 
09:28:52.641135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.897 [2024-11-17 09:28:52.641157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.897 [2024-11-17 09:28:52.641182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.897 [2024-11-17 09:28:52.641204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.897 [2024-11-17 09:28:52.641229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.897 [2024-11-17 09:28:52.641251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.897 [2024-11-17 09:28:52.641275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.897 [2024-11-17 09:28:52.641297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.897 [2024-11-17 09:28:52.641322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.897 [2024-11-17 09:28:52.641357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.897 [2024-11-17 09:28:52.641408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.897 [2024-11-17 09:28:52.641433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.897 [2024-11-17 09:28:52.641460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.897 [2024-11-17 09:28:52.641489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.897 [2024-11-17 09:28:52.641522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.897 [2024-11-17 09:28:52.641546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.897 [2024-11-17 09:28:52.641571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.897 [2024-11-17 09:28:52.641593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.897 [2024-11-17 09:28:52.641618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.897 [2024-11-17 09:28:52.641640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.898 [2024-11-17 
09:28:52.641670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.898 [2024-11-17 09:28:52.641708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.898 [2024-11-17 09:28:52.641734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.898 [2024-11-17 09:28:52.641755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.898 [2024-11-17 09:28:52.641780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.898 [2024-11-17 09:28:52.641801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.898 [2024-11-17 09:28:52.641826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.898 [2024-11-17 09:28:52.641848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.898 [2024-11-17 09:28:52.641872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.898 [2024-11-17 09:28:52.641894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.898 [2024-11-17 09:28:52.641918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.898 [2024-11-17 09:28:52.641939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.898 [2024-11-17 09:28:52.641963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.898 [2024-11-17 09:28:52.641985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.898 [2024-11-17 09:28:52.642008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.898 [2024-11-17 09:28:52.642030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.898 [2024-11-17 09:28:52.642054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.898 [2024-11-17 09:28:52.642075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.898 [2024-11-17 09:28:52.642100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.898 [2024-11-17 09:28:52.642121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.898 [2024-11-17 
09:28:52.642145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.898 [2024-11-17 09:28:52.642166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.898 [2024-11-17 09:28:52.642190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.898 [2024-11-17 09:28:52.642212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.898 [2024-11-17 09:28:52.642236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.898 [2024-11-17 09:28:52.642266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.898 [2024-11-17 09:28:52.642295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.898 [2024-11-17 09:28:52.642318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.898 [2024-11-17 09:28:52.642342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.898 [2024-11-17 09:28:52.642364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.898 [2024-11-17 09:28:52.642414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.898 [2024-11-17 09:28:52.642437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.898 [2024-11-17 09:28:52.642461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.898 [2024-11-17 09:28:52.642484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.898 [2024-11-17 09:28:52.642510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.898 [2024-11-17 09:28:52.642532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.898 [2024-11-17 09:28:52.642557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.898 [2024-11-17 09:28:52.642580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.898 [2024-11-17 09:28:52.642606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.898 [2024-11-17 09:28:52.642628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.898 [2024-11-17 
09:28:52.642653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.898 [2024-11-17 09:28:52.642690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.898 [2024-11-17 09:28:52.644207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.898 [2024-11-17 09:28:52.644240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.898 [2024-11-17 09:28:52.644282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.898 [2024-11-17 09:28:52.644308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.898 [2024-11-17 09:28:52.644334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.898 [2024-11-17 09:28:52.644356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.898 [2024-11-17 09:28:52.644408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.898 [2024-11-17 09:28:52.644432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.898 [2024-11-17 09:28:52.644466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.898 [2024-11-17 09:28:52.644490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.898 [2024-11-17 09:28:52.644516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.898 [2024-11-17 09:28:52.644539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.898 [2024-11-17 09:28:52.644564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.898 [2024-11-17 09:28:52.644586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.898 [2024-11-17 09:28:52.644612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.898 [2024-11-17 09:28:52.644636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.898 [2024-11-17 09:28:52.644688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.898 [2024-11-17 09:28:52.644712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.898 [2024-11-17 09:28:52.644737] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.898 [2024-11-17 09:28:52.644759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.898 [2024-11-17 09:28:52.644784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.898 [2024-11-17 09:28:52.644806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.898 [2024-11-17 09:28:52.644831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.898 [2024-11-17 09:28:52.644853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.898 [2024-11-17 09:28:52.644878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.898 [2024-11-17 09:28:52.644899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.898 [2024-11-17 09:28:52.644924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.898 [2024-11-17 09:28:52.644946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.898 [2024-11-17 09:28:52.644971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.898 [2024-11-17 09:28:52.644993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.898 [2024-11-17 09:28:52.645017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.899 [2024-11-17 09:28:52.645039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.899 [2024-11-17 09:28:52.645064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.899 [2024-11-17 09:28:52.645090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.899 [2024-11-17 09:28:52.645115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.899 [2024-11-17 09:28:52.645137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.899 [2024-11-17 09:28:52.645162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.899 [2024-11-17 09:28:52.645183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.899 [2024-11-17 09:28:52.645208] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.899 [2024-11-17 09:28:52.645230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.899 [2024-11-17 09:28:52.645254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.899 [2024-11-17 09:28:52.645276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.899 [2024-11-17 09:28:52.645302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.899 [2024-11-17 09:28:52.645324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.899 [2024-11-17 09:28:52.645348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.899 [2024-11-17 09:28:52.645392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.899 [2024-11-17 09:28:52.645421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.899 [2024-11-17 09:28:52.645444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.899 [2024-11-17 09:28:52.645475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.899 [2024-11-17 09:28:52.645498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.899 [2024-11-17 09:28:52.645523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.899 [2024-11-17 09:28:52.645545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.899 [2024-11-17 09:28:52.645570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.899 [2024-11-17 09:28:52.645593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.899 [2024-11-17 09:28:52.645618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.899 [2024-11-17 09:28:52.645641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.899 [2024-11-17 09:28:52.645666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.899 [2024-11-17 09:28:52.645703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.899 [2024-11-17 09:28:52.645733] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.899 [2024-11-17 09:28:52.645756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.899 [2024-11-17 09:28:52.645780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.899 [2024-11-17 09:28:52.645802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.899 [2024-11-17 09:28:52.645827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.899 [2024-11-17 09:28:52.645848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.899 [2024-11-17 09:28:52.645873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.899 [2024-11-17 09:28:52.645895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.899 [2024-11-17 09:28:52.645919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.899 [2024-11-17 09:28:52.645941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.899 [2024-11-17 09:28:52.645965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.899 [2024-11-17 09:28:52.645987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.899 [2024-11-17 09:28:52.646011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.899 [2024-11-17 09:28:52.646032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.899 [2024-11-17 09:28:52.646056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.899 [2024-11-17 09:28:52.646078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.899 [2024-11-17 09:28:52.646102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.899 [2024-11-17 09:28:52.646137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.899 [2024-11-17 09:28:52.663540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.899 [2024-11-17 09:28:52.663641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.899 [2024-11-17 09:28:52.663672] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.899 [2024-11-17 09:28:52.663697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.899 [2024-11-17 09:28:52.663725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.899 [2024-11-17 09:28:52.663749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.899 [2024-11-17 09:28:52.663774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.899 [2024-11-17 09:28:52.663804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.899 [2024-11-17 09:28:52.663831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.899 [2024-11-17 09:28:52.663853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.899 [2024-11-17 09:28:52.663879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.899 [2024-11-17 09:28:52.663900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.899 [2024-11-17 09:28:52.663925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.899 [2024-11-17 09:28:52.663948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.899 [2024-11-17 09:28:52.663973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.899 [2024-11-17 09:28:52.663995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.899 [2024-11-17 09:28:52.664021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.899 [2024-11-17 09:28:52.664043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.899 [2024-11-17 09:28:52.664067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.899 [2024-11-17 09:28:52.664089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.899 [2024-11-17 09:28:52.664114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.899 [2024-11-17 09:28:52.664152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.899 [2024-11-17 09:28:52.664178] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.899 [2024-11-17 09:28:52.664201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.899 [2024-11-17 09:28:52.664227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.899 [2024-11-17 09:28:52.664250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.899 [2024-11-17 09:28:52.664275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.899 [2024-11-17 09:28:52.664297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.900 [2024-11-17 09:28:52.664323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.900 [2024-11-17 09:28:52.664345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.900 [2024-11-17 09:28:52.664410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.900 [2024-11-17 09:28:52.664438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.900 [2024-11-17 09:28:52.664465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.900 [2024-11-17 09:28:52.664492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.900 [2024-11-17 09:28:52.664519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.900 [2024-11-17 09:28:52.664543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.900 [2024-11-17 09:28:52.664569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.900 [2024-11-17 09:28:52.664592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.900 [2024-11-17 09:28:52.664617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.900 [2024-11-17 09:28:52.664640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.900 [2024-11-17 09:28:52.664666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.900 [2024-11-17 09:28:52.664690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.900 [2024-11-17 09:28:52.664716] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.900 [2024-11-17 09:28:52.664739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.900 [2024-11-17 09:28:52.664765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.900 [2024-11-17 09:28:52.664788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.900 [2024-11-17 09:28:52.664814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.900 [2024-11-17 09:28:52.664838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.900 [2024-11-17 09:28:52.664864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.900 [2024-11-17 09:28:52.664887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.900 [2024-11-17 09:28:52.664912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.900 [2024-11-17 09:28:52.664935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.900 [2024-11-17 09:28:52.666015] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:28:47.900 [2024-11-17 09:28:52.666091] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f3900 (9): Bad file descriptor 00:28:47.900 [2024-11-17 09:28:52.666181] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f6b00 (9): Bad file descriptor 00:28:47.900 [2024-11-17 09:28:52.666264] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:47.900 [2024-11-17 09:28:52.666294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.900 [2024-11-17 09:28:52.666319] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:47.900 [2024-11-17 09:28:52.666346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.900 [2024-11-17 09:28:52.666379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:47.900 [2024-11-17 09:28:52.666403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.900 [2024-11-17 09:28:52.666426] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:47.900 [2024-11-17 09:28:52.666447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:28:47.900 [2024-11-17 09:28:52.666468] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(6) to be set 00:28:47.900 [2024-11-17 09:28:52.666515] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7f00 (9): Bad file descriptor 00:28:47.900 [2024-11-17 09:28:52.666557] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f4d00 (9): Bad file descriptor 00:28:47.900 [2024-11-17 09:28:52.666606] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f5700 (9): Bad file descriptor 00:28:47.900 [2024-11-17 09:28:52.666659] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f6100 (9): Bad file descriptor 00:28:47.900 [2024-11-17 09:28:52.666697] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2f00 (9): Bad file descriptor 00:28:47.900 [2024-11-17 09:28:52.666735] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f4300 (9): Bad file descriptor 00:28:47.900 [2024-11-17 09:28:52.666771] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:28:47.900 [2024-11-17 09:28:52.669998] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:28:47.900 [2024-11-17 09:28:52.670041] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:28:47.900 [2024-11-17 09:28:52.671559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.900 [2024-11-17 09:28:52.671613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f3900 with addr=10.0.0.2, port=4420 00:28:47.900 [2024-11-17 09:28:52.671641] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f3900 is same with the state(6) to be set 00:28:47.900 [2024-11-17 09:28:52.671789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.900 [2024-11-17 09:28:52.671824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:28:47.900 [2024-11-17 09:28:52.671848] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2f00 is same with the state(6) to be set 00:28:47.900 [2024-11-17 09:28:52.671972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.900 [2024-11-17 09:28:52.672007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f4300 with addr=10.0.0.2, port=4420 00:28:47.900 [2024-11-17 09:28:52.672030] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f4300 is same with the state(6) to be set 00:28:47.900 [2024-11-17 09:28:52.673417] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:47.900 [2024-11-17 09:28:52.673821] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:47.900 [2024-11-17 09:28:52.673871] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f3900 (9): Bad file descriptor 00:28:47.900 [2024-11-17 09:28:52.673909] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x6150001f2f00 (9): Bad file descriptor 00:28:47.900 [2024-11-17 09:28:52.673946] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f4300 (9): Bad file descriptor 00:28:47.900 [2024-11-17 09:28:52.674070] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:47.900 [2024-11-17 09:28:52.674501] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:47.900 [2024-11-17 09:28:52.674606] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:47.900 [2024-11-17 09:28:52.674704] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:47.900 [2024-11-17 09:28:52.674789] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:47.900 [2024-11-17 09:28:52.674852] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:28:47.900 [2024-11-17 09:28:52.674882] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:28:47.900 [2024-11-17 09:28:52.674907] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:28:47.900 [2024-11-17 09:28:52.674934] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:28:47.900 [2024-11-17 09:28:52.674960] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:28:47.900 [2024-11-17 09:28:52.674980] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:28:47.900 [2024-11-17 09:28:52.675000] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:28:47.900 [2024-11-17 09:28:52.675019] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:28:47.900 [2024-11-17 09:28:52.675040] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:28:47.900 [2024-11-17 09:28:52.675060] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:28:47.900 [2024-11-17 09:28:52.675079] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:28:47.900 [2024-11-17 09:28:52.675097] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 
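The connect() failures above report errno = 111, which on Linux is ECONNREFUSED: the qpairs for cnode2, cnode3 and cnode4 try to reconnect to 10.0.0.2 port 4420 while the target side has no listener accepting the connection, and the controller resets then fail as logged. The short C sketch below is an illustration only and not part of the test run; the 10.0.0.2/4420 values simply mirror the log, and the refused-connection outcome assumes the peer is reachable but nothing is listening on that port.

/*
 * Illustrative sketch: a plain TCP connect() to a reachable host with no
 * listener on the port fails with errno 111 (ECONNREFUSED), the same errno
 * reported by posix_sock_create() in the log above. Address and port are
 * placeholders taken from the log, not a working endpoint.
 */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                  /* NVMe/TCP service id from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* With the listener torn down this prints errno=111 (Connection refused);
         * an unreachable host would instead time out or report a different errno. */
        printf("connect: errno=%d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}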
00:28:47.900 [2024-11-17 09:28:52.676052] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:28:47.900 [2024-11-17 09:28:52.676348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.900 [2024-11-17 09:28:52.676406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.900 [2024-11-17 09:28:52.676447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.900 [2024-11-17 09:28:52.676473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.900 [2024-11-17 09:28:52.676500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.900 [2024-11-17 09:28:52.676523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.900 [2024-11-17 09:28:52.676549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.901 [2024-11-17 09:28:52.676572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.901 [2024-11-17 09:28:52.676599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.901 [2024-11-17 09:28:52.676621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.901 [2024-11-17 09:28:52.676653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.901 [2024-11-17 09:28:52.676677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.901 [2024-11-17 09:28:52.676718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.901 [2024-11-17 09:28:52.676741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.901 [2024-11-17 09:28:52.676767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.901 [2024-11-17 09:28:52.676789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.901 [2024-11-17 09:28:52.676814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.901 [2024-11-17 09:28:52.676836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.901 [2024-11-17 09:28:52.676861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.901 [2024-11-17 
09:28:52.676882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:47.901-902 [2024-11-17 09:28:52.676907 - 09:28:52.679572] nvme_qpair.c: repeated *NOTICE* pairs (condensed): 243:nvme_io_qpair_print_command READ sqid:1 cid:10-63 nsid:1 lba:17664-24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each followed by 474:spdk_nvme_print_completion ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:47.902 [2024-11-17 09:28:52.679594] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f9f80 is same with the state(6) to be set
00:28:47.902-904 [2024-11-17 09:28:52.681144 - 09:28:52.684392] nvme_qpair.c: repeated *NOTICE* pairs (condensed): 243:nvme_io_qpair_print_command READ sqid:1 cid:10-63 nsid:1 lba:17664-24448 and WRITE sqid:1 cid:0-9 nsid:1 lba:24576-25728 (interleaved), len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each followed by 474:spdk_nvme_print_completion ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:47.904 [2024-11-17 09:28:52.684418] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001fa980 is same with the state(6) to be set
00:28:47.904-905 [2024-11-17 09:28:52.685959 - 09:28:52.689129] nvme_qpair.c: repeated *NOTICE* pairs (condensed): 243:nvme_io_qpair_print_command READ sqid:1 cid:0-63 nsid:1 lba:16384-24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each followed by 474:spdk_nvme_print_completion ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:47.905 [2024-11-17 09:28:52.689156] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001fac00 is same with the state(6) to be set
00:28:47.905-906 [2024-11-17 09:28:52.690716 - 09:28:52.692071] nvme_qpair.c: repeated *NOTICE* pairs (condensed, dump continues below): 243:nvme_io_qpair_print_command READ sqid:1 cid:0-26 nsid:1 lba:16384-19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each followed by 474:spdk_nvme_print_completion ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.906 [2024-11-17 09:28:52.692093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.906 [2024-11-17 09:28:52.692117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.906 [2024-11-17 09:28:52.692138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.906 [2024-11-17 09:28:52.692162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.906 [2024-11-17 09:28:52.692184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.906 [2024-11-17 09:28:52.692208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.906 [2024-11-17 09:28:52.692230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.906 [2024-11-17 09:28:52.692255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.906 [2024-11-17 09:28:52.692277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.906 [2024-11-17 09:28:52.692301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.906 [2024-11-17 09:28:52.692322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.906 [2024-11-17 09:28:52.692346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.906 [2024-11-17 09:28:52.692391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.906 [2024-11-17 09:28:52.692430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.906 [2024-11-17 09:28:52.692452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.906 [2024-11-17 09:28:52.692477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.906 [2024-11-17 09:28:52.692499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.906 [2024-11-17 09:28:52.692525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.906 [2024-11-17 09:28:52.692547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.906 [2024-11-17 09:28:52.692577] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.906 [2024-11-17 09:28:52.692599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.906 [2024-11-17 09:28:52.692624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.906 [2024-11-17 09:28:52.692646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.906 [2024-11-17 09:28:52.692671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.906 [2024-11-17 09:28:52.692709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.906 [2024-11-17 09:28:52.692734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.906 [2024-11-17 09:28:52.692756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.906 [2024-11-17 09:28:52.692780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.907 [2024-11-17 09:28:52.692802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.907 [2024-11-17 09:28:52.692826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.907 [2024-11-17 09:28:52.692848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.907 [2024-11-17 09:28:52.692871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.907 [2024-11-17 09:28:52.692893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.907 [2024-11-17 09:28:52.692917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.907 [2024-11-17 09:28:52.692939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.907 [2024-11-17 09:28:52.692963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.907 [2024-11-17 09:28:52.692985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.907 [2024-11-17 09:28:52.693010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.907 [2024-11-17 09:28:52.693032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.907 [2024-11-17 09:28:52.693056] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.907 [2024-11-17 09:28:52.693078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.907 [2024-11-17 09:28:52.693102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.907 [2024-11-17 09:28:52.693123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.907 [2024-11-17 09:28:52.693147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.907 [2024-11-17 09:28:52.693173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.907 [2024-11-17 09:28:52.693198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.907 [2024-11-17 09:28:52.693220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.907 [2024-11-17 09:28:52.693244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.907 [2024-11-17 09:28:52.693265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.907 [2024-11-17 09:28:52.693289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.907 [2024-11-17 09:28:52.693311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.907 [2024-11-17 09:28:52.693335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.907 [2024-11-17 09:28:52.693379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.907 [2024-11-17 09:28:52.693408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.907 [2024-11-17 09:28:52.693430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.907 [2024-11-17 09:28:52.693455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.907 [2024-11-17 09:28:52.693478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.907 [2024-11-17 09:28:52.693503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.907 [2024-11-17 09:28:52.693531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.907 [2024-11-17 09:28:52.693557] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.907 [2024-11-17 09:28:52.693580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.907 [2024-11-17 09:28:52.693604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.907 [2024-11-17 09:28:52.693626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.907 [2024-11-17 09:28:52.693651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.907 [2024-11-17 09:28:52.693673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.907 [2024-11-17 09:28:52.693714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.907 [2024-11-17 09:28:52.693736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.907 [2024-11-17 09:28:52.693760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.907 [2024-11-17 09:28:52.693781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.907 [2024-11-17 09:28:52.693811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.907 [2024-11-17 09:28:52.693833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.907 [2024-11-17 09:28:52.693857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.907 [2024-11-17 09:28:52.693878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.907 [2024-11-17 09:28:52.693899] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001fae80 is same with the state(6) to be set 00:28:47.907 [2024-11-17 09:28:52.695447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.907 [2024-11-17 09:28:52.695478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.907 [2024-11-17 09:28:52.695509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.907 [2024-11-17 09:28:52.695532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.907 [2024-11-17 09:28:52.695557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.907 [2024-11-17 09:28:52.695580] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.907 [2024-11-17 09:28:52.695604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.907 [2024-11-17 09:28:52.695626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.907 [2024-11-17 09:28:52.695651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.907 [2024-11-17 09:28:52.695690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.907 [2024-11-17 09:28:52.695716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.907 [2024-11-17 09:28:52.695738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.907 [2024-11-17 09:28:52.695762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.907 [2024-11-17 09:28:52.695784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.907 [2024-11-17 09:28:52.695808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.907 [2024-11-17 09:28:52.695830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.907 [2024-11-17 09:28:52.695857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.907 [2024-11-17 09:28:52.695880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.907 [2024-11-17 09:28:52.695919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.907 [2024-11-17 09:28:52.695942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.907 [2024-11-17 09:28:52.695972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.907 [2024-11-17 09:28:52.695996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.907 [2024-11-17 09:28:52.696020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.907 [2024-11-17 09:28:52.696042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.907 [2024-11-17 09:28:52.696067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.907 [2024-11-17 09:28:52.696089] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.907 [2024-11-17 09:28:52.696113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.907 [2024-11-17 09:28:52.696135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.908 [2024-11-17 09:28:52.696159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.908 [2024-11-17 09:28:52.696181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.908 [2024-11-17 09:28:52.696204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.908 [2024-11-17 09:28:52.696227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.908 [2024-11-17 09:28:52.696251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.908 [2024-11-17 09:28:52.696273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.908 [2024-11-17 09:28:52.696297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.908 [2024-11-17 09:28:52.696319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.908 [2024-11-17 09:28:52.696343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.908 [2024-11-17 09:28:52.696391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.908 [2024-11-17 09:28:52.696423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.908 [2024-11-17 09:28:52.696446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.908 [2024-11-17 09:28:52.696471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.908 [2024-11-17 09:28:52.696493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.908 [2024-11-17 09:28:52.696518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.908 [2024-11-17 09:28:52.696541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.908 [2024-11-17 09:28:52.696566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.908 [2024-11-17 09:28:52.696593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.908 [2024-11-17 09:28:52.696618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.908 [2024-11-17 09:28:52.696641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.908 [2024-11-17 09:28:52.696682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.908 [2024-11-17 09:28:52.696704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.908 [2024-11-17 09:28:52.696729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.908 [2024-11-17 09:28:52.696750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.908 [2024-11-17 09:28:52.696774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.908 [2024-11-17 09:28:52.696796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.908 [2024-11-17 09:28:52.696821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.908 [2024-11-17 09:28:52.696842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.908 [2024-11-17 09:28:52.696866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.908 [2024-11-17 09:28:52.696888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.908 [2024-11-17 09:28:52.696912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.908 [2024-11-17 09:28:52.696934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.908 [2024-11-17 09:28:52.696958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.908 [2024-11-17 09:28:52.696980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.908 [2024-11-17 09:28:52.697004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.908 [2024-11-17 09:28:52.697026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.908 [2024-11-17 09:28:52.697050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.908 [2024-11-17 09:28:52.697072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.908 [2024-11-17 09:28:52.697095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.908 [2024-11-17 09:28:52.697117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.908 [2024-11-17 09:28:52.697142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.908 [2024-11-17 09:28:52.697163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.908 [2024-11-17 09:28:52.697191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.908 [2024-11-17 09:28:52.697214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.908 [2024-11-17 09:28:52.697237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.908 [2024-11-17 09:28:52.697259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.908 [2024-11-17 09:28:52.697283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.908 [2024-11-17 09:28:52.697304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.908 [2024-11-17 09:28:52.697328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.908 [2024-11-17 09:28:52.697365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.908 [2024-11-17 09:28:52.697400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.908 [2024-11-17 09:28:52.697423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.908 [2024-11-17 09:28:52.697449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.908 [2024-11-17 09:28:52.697471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.908 [2024-11-17 09:28:52.697495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.908 [2024-11-17 09:28:52.697517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.908 [2024-11-17 09:28:52.697542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.908 [2024-11-17 09:28:52.697564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:47.908 [2024-11-17 09:28:52.697595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.908 [2024-11-17 09:28:52.697617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.908 [2024-11-17 09:28:52.697642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.908 [2024-11-17 09:28:52.697665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.908 [2024-11-17 09:28:52.697705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.908 [2024-11-17 09:28:52.697727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.908 [2024-11-17 09:28:52.697751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.908 [2024-11-17 09:28:52.697772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.908 [2024-11-17 09:28:52.697796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.908 [2024-11-17 09:28:52.697822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.908 [2024-11-17 09:28:52.697847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.908 [2024-11-17 09:28:52.697869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.908 [2024-11-17 09:28:52.697892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.908 [2024-11-17 09:28:52.697914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.908 [2024-11-17 09:28:52.697938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.909 [2024-11-17 09:28:52.697959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.909 [2024-11-17 09:28:52.697983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.909 [2024-11-17 09:28:52.698004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.909 [2024-11-17 09:28:52.698028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.909 [2024-11-17 09:28:52.698050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:47.909 [2024-11-17 09:28:52.698073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.909 [2024-11-17 09:28:52.698095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.909 [2024-11-17 09:28:52.698118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.909 [2024-11-17 09:28:52.698140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.909 [2024-11-17 09:28:52.698164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.909 [2024-11-17 09:28:52.698186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.909 [2024-11-17 09:28:52.698210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.909 [2024-11-17 09:28:52.698231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.909 [2024-11-17 09:28:52.698254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.909 [2024-11-17 09:28:52.698276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.909 [2024-11-17 09:28:52.698300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.909 [2024-11-17 09:28:52.698321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.909 [2024-11-17 09:28:52.698346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.909 [2024-11-17 09:28:52.698393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.909 [2024-11-17 09:28:52.698425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.909 [2024-11-17 09:28:52.698449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.909 [2024-11-17 09:28:52.698474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.909 [2024-11-17 09:28:52.698496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.909 [2024-11-17 09:28:52.698521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.909 [2024-11-17 09:28:52.698543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.909 [2024-11-17 
09:28:52.698568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.909 [2024-11-17 09:28:52.698591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.909 [2024-11-17 09:28:52.698613] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001fb100 is same with the state(6) to be set 00:28:47.909 [2024-11-17 09:28:52.700153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.909 [2024-11-17 09:28:52.700183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.909 [2024-11-17 09:28:52.700213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.909 [2024-11-17 09:28:52.700236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.909 [2024-11-17 09:28:52.700261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.909 [2024-11-17 09:28:52.700283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.909 [2024-11-17 09:28:52.700307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.909 [2024-11-17 09:28:52.700328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.909 [2024-11-17 09:28:52.700376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.909 [2024-11-17 09:28:52.700402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.909 [2024-11-17 09:28:52.700427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.909 [2024-11-17 09:28:52.700450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.909 [2024-11-17 09:28:52.700475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.909 [2024-11-17 09:28:52.700497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.909 [2024-11-17 09:28:52.700521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.909 [2024-11-17 09:28:52.700549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.909 [2024-11-17 09:28:52.700580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.909 [2024-11-17 09:28:52.700616] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.909 [2024-11-17 09:28:52.700643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.909 [2024-11-17 09:28:52.700666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.909 [2024-11-17 09:28:52.700706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.909 [2024-11-17 09:28:52.700728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.909 [2024-11-17 09:28:52.700752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.909 [2024-11-17 09:28:52.700773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.909 [2024-11-17 09:28:52.700797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.909 [2024-11-17 09:28:52.700819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.909 [2024-11-17 09:28:52.700842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.909 [2024-11-17 09:28:52.700864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.909 [2024-11-17 09:28:52.700887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.909 [2024-11-17 09:28:52.700908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.909 [2024-11-17 09:28:52.700932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.909 [2024-11-17 09:28:52.700953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.909 [2024-11-17 09:28:52.700976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.909 [2024-11-17 09:28:52.700997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.909 [2024-11-17 09:28:52.701020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.909 [2024-11-17 09:28:52.701041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.909 [2024-11-17 09:28:52.701065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.909 [2024-11-17 09:28:52.701086] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.909 [2024-11-17 09:28:52.701110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.909 [2024-11-17 09:28:52.701131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.909 [2024-11-17 09:28:52.701154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.909 [2024-11-17 09:28:52.701182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.909 [2024-11-17 09:28:52.701208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.909 [2024-11-17 09:28:52.701230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.909 [2024-11-17 09:28:52.701253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.909 [2024-11-17 09:28:52.701275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.909 [2024-11-17 09:28:52.701299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.909 [2024-11-17 09:28:52.701326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.909 [2024-11-17 09:28:52.701350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.910 [2024-11-17 09:28:52.701394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.910 [2024-11-17 09:28:52.701422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.910 [2024-11-17 09:28:52.701445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.910 [2024-11-17 09:28:52.701469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.910 [2024-11-17 09:28:52.701491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.910 [2024-11-17 09:28:52.701515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.910 [2024-11-17 09:28:52.701537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.910 [2024-11-17 09:28:52.701561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.910 [2024-11-17 09:28:52.701583] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.910 [2024-11-17 09:28:52.701607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.910 [2024-11-17 09:28:52.701629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.910 [2024-11-17 09:28:52.701653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.910 [2024-11-17 09:28:52.701690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.910 [2024-11-17 09:28:52.701715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.910 [2024-11-17 09:28:52.701736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.910 [2024-11-17 09:28:52.701760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.910 [2024-11-17 09:28:52.701781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.910 [2024-11-17 09:28:52.701804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.910 [2024-11-17 09:28:52.701830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.910 [2024-11-17 09:28:52.701855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.910 [2024-11-17 09:28:52.701876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.910 [2024-11-17 09:28:52.701900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.910 [2024-11-17 09:28:52.701920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.910 [2024-11-17 09:28:52.701944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.910 [2024-11-17 09:28:52.701965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.910 [2024-11-17 09:28:52.701989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.910 [2024-11-17 09:28:52.702010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.910 [2024-11-17 09:28:52.702033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.910 [2024-11-17 09:28:52.702054] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.910 [2024-11-17 09:28:52.702078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.910 [2024-11-17 09:28:52.702101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.910 [2024-11-17 09:28:52.702125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.910 [2024-11-17 09:28:52.711024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.910 [2024-11-17 09:28:52.711079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.910 [2024-11-17 09:28:52.711106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.910 [2024-11-17 09:28:52.711132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.910 [2024-11-17 09:28:52.711155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.910 [2024-11-17 09:28:52.711181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.910 [2024-11-17 09:28:52.711205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.910 [2024-11-17 09:28:52.711230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.910 [2024-11-17 09:28:52.711253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.910 [2024-11-17 09:28:52.711278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.910 [2024-11-17 09:28:52.711301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.910 [2024-11-17 09:28:52.711334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.910 [2024-11-17 09:28:52.711358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.910 [2024-11-17 09:28:52.711398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.910 [2024-11-17 09:28:52.711423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.910 [2024-11-17 09:28:52.711449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.910 [2024-11-17 09:28:52.711471] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.910 [2024-11-17 09:28:52.711499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.910 [2024-11-17 09:28:52.711522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.910 [2024-11-17 09:28:52.711548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.910 [2024-11-17 09:28:52.711571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.910 [2024-11-17 09:28:52.711596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.910 [2024-11-17 09:28:52.711618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.910 [2024-11-17 09:28:52.711643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.910 [2024-11-17 09:28:52.711666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.910 [2024-11-17 09:28:52.711691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.910 [2024-11-17 09:28:52.711714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.910 [2024-11-17 09:28:52.711739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.910 [2024-11-17 09:28:52.711761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.910 [2024-11-17 09:28:52.711787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.910 [2024-11-17 09:28:52.711810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.910 [2024-11-17 09:28:52.711835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.910 [2024-11-17 09:28:52.711858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.910 [2024-11-17 09:28:52.711883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.910 [2024-11-17 09:28:52.711905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.910 [2024-11-17 09:28:52.711930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.910 [2024-11-17 09:28:52.711958] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.910 [2024-11-17 09:28:52.711984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.910 [2024-11-17 09:28:52.712006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.910 [2024-11-17 09:28:52.712032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.910 [2024-11-17 09:28:52.712054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.910 [2024-11-17 09:28:52.712080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.910 [2024-11-17 09:28:52.712102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.910 [2024-11-17 09:28:52.712127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.910 [2024-11-17 09:28:52.712151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.910 [2024-11-17 09:28:52.712176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.910 [2024-11-17 09:28:52.712198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.911 [2024-11-17 09:28:52.712221] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001fb600 is same with the state(6) to be set 00:28:47.911 [2024-11-17 09:28:52.713794] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:28:47.911 [2024-11-17 09:28:52.713858] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:28:47.911 [2024-11-17 09:28:52.713892] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:28:47.911 [2024-11-17 09:28:52.713920] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:28:47.911 [2024-11-17 09:28:52.714129] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 00:28:47.911 [2024-11-17 09:28:52.714182] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 
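The long runs of ABORTED - SQ DELETION completions in this part of the log are the expected side effect of the shutdown test deleting submission queues while verify I/O is still outstanding: in the "(00/08)" field SPDK prints the NVMe status as SCT/SC, i.e. Status Code Type 0x0 (generic command status) with Status Code 0x08 (Command Aborted due to SQ Deletion). A hypothetical way to tally these completions per queue from a saved copy of this console output (the build.log file name is an assumption and is not produced by the test itself):

  # Count aborted completions per qid in a saved copy of this log (file name assumed).
  grep 'ABORTED - SQ DELETION' build.log | grep -o 'qid:[0-9]*' | sort | uniq -c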
00:28:47.911 [2024-11-17 09:28:52.714390] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:28:47.911 [2024-11-17 09:28:52.714426] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:28:47.911 [2024-11-17 09:28:52.714747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.911 [2024-11-17 09:28:52.714789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:28:47.911 [2024-11-17 09:28:52.714816] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:28:47.911 [2024-11-17 09:28:52.714923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.911 [2024-11-17 09:28:52.714958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f4d00 with addr=10.0.0.2, port=4420 00:28:47.911 [2024-11-17 09:28:52.714982] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f4d00 is same with the state(6) to be set 00:28:47.911 [2024-11-17 09:28:52.715129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.911 [2024-11-17 09:28:52.715170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f5700 with addr=10.0.0.2, port=4420 00:28:47.911 [2024-11-17 09:28:52.715208] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f5700 is same with the state(6) to be set 00:28:47.911 [2024-11-17 09:28:52.715320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.911 [2024-11-17 09:28:52.715354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f6100 with addr=10.0.0.2, port=4420 00:28:47.911 [2024-11-17 09:28:52.715394] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f6100 is same with the state(6) to be set 00:28:47.911 [2024-11-17 09:28:52.718087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.911 [2024-11-17 09:28:52.718132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.911 [2024-11-17 09:28:52.718181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.911 [2024-11-17 09:28:52.718210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.911 [2024-11-17 09:28:52.718237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.911 [2024-11-17 09:28:52.718270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.911 [2024-11-17 09:28:52.718295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.911 [2024-11-17 09:28:52.718319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.911 
[2024-11-17 09:28:52.718344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.911 [2024-11-17 09:28:52.718373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.911 [2024-11-17 09:28:52.718402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.911 [2024-11-17 09:28:52.718425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.911 [2024-11-17 09:28:52.718450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.911 [2024-11-17 09:28:52.718473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.911 [2024-11-17 09:28:52.718498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.911 [2024-11-17 09:28:52.718521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.911 [2024-11-17 09:28:52.718548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.911 [2024-11-17 09:28:52.718571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.911 [2024-11-17 09:28:52.718596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.911 [2024-11-17 09:28:52.718618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.911 [2024-11-17 09:28:52.718643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.911 [2024-11-17 09:28:52.718672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.911 [2024-11-17 09:28:52.718698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.911 [2024-11-17 09:28:52.718721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.911 [2024-11-17 09:28:52.718747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.911 [2024-11-17 09:28:52.718771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.911 [2024-11-17 09:28:52.718796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.911 [2024-11-17 09:28:52.718818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.911 [2024-11-17 09:28:52.718844] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.911 [2024-11-17 09:28:52.718866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.911 [2024-11-17 09:28:52.718892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.911 [2024-11-17 09:28:52.718915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.911 [2024-11-17 09:28:52.718941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.911 [2024-11-17 09:28:52.718964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.911 [2024-11-17 09:28:52.718989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.911 [2024-11-17 09:28:52.719012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.911 [2024-11-17 09:28:52.719038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.911 [2024-11-17 09:28:52.719061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.911 [2024-11-17 09:28:52.719086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.911 [2024-11-17 09:28:52.719109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.911 [2024-11-17 09:28:52.719135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.911 [2024-11-17 09:28:52.719159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.911 [2024-11-17 09:28:52.719185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.911 [2024-11-17 09:28:52.719208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.911 [2024-11-17 09:28:52.719234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.911 [2024-11-17 09:28:52.719260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.911 [2024-11-17 09:28:52.719290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.911 [2024-11-17 09:28:52.719313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.911 [2024-11-17 09:28:52.719339] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.911 [2024-11-17 09:28:52.719362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.911 [2024-11-17 09:28:52.719396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.911 [2024-11-17 09:28:52.719421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.911 [2024-11-17 09:28:52.719446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.911 [2024-11-17 09:28:52.719469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.911 [2024-11-17 09:28:52.719494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.911 [2024-11-17 09:28:52.719516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.911 [2024-11-17 09:28:52.719542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.911 [2024-11-17 09:28:52.719565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.911 [2024-11-17 09:28:52.719590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.912 [2024-11-17 09:28:52.719612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.912 [2024-11-17 09:28:52.719638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.912 [2024-11-17 09:28:52.719661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.912 [2024-11-17 09:28:52.719686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.912 [2024-11-17 09:28:52.719709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.912 [2024-11-17 09:28:52.719735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.912 [2024-11-17 09:28:52.719758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.912 [2024-11-17 09:28:52.719784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.912 [2024-11-17 09:28:52.719807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.912 [2024-11-17 09:28:52.719832] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.912 [2024-11-17 09:28:52.719854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.912 [2024-11-17 09:28:52.719880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.912 [2024-11-17 09:28:52.719906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.912 [2024-11-17 09:28:52.719933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.912 [2024-11-17 09:28:52.719956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.912 [2024-11-17 09:28:52.719981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.912 [2024-11-17 09:28:52.720004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.912 [2024-11-17 09:28:52.720029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.912 [2024-11-17 09:28:52.720052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.912 [2024-11-17 09:28:52.720078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.912 [2024-11-17 09:28:52.720101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.912 [2024-11-17 09:28:52.720126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.912 [2024-11-17 09:28:52.720149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.912 [2024-11-17 09:28:52.720175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.912 [2024-11-17 09:28:52.720198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.912 [2024-11-17 09:28:52.720224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.912 [2024-11-17 09:28:52.720247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.912 [2024-11-17 09:28:52.720273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.912 [2024-11-17 09:28:52.720295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.912 [2024-11-17 09:28:52.720321] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.912 [2024-11-17 09:28:52.720343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.912 [2024-11-17 09:28:52.720376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.912 [2024-11-17 09:28:52.720401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.912 [2024-11-17 09:28:52.720426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.912 [2024-11-17 09:28:52.720450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.912 [2024-11-17 09:28:52.720475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.912 [2024-11-17 09:28:52.720498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.912 [2024-11-17 09:28:52.720528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.912 [2024-11-17 09:28:52.720552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.912 [2024-11-17 09:28:52.720577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.912 [2024-11-17 09:28:52.720600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.912 [2024-11-17 09:28:52.720625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.912 [2024-11-17 09:28:52.720649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.912 [2024-11-17 09:28:52.720675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.912 [2024-11-17 09:28:52.720698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.912 [2024-11-17 09:28:52.720723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.912 [2024-11-17 09:28:52.720746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.912 [2024-11-17 09:28:52.720771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.912 [2024-11-17 09:28:52.720794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.912 [2024-11-17 09:28:52.720819] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.912 [2024-11-17 09:28:52.720841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.912 [2024-11-17 09:28:52.720867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.912 [2024-11-17 09:28:52.720889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.912 [2024-11-17 09:28:52.720914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.912 [2024-11-17 09:28:52.720937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.912 [2024-11-17 09:28:52.720963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.912 [2024-11-17 09:28:52.720986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.912 [2024-11-17 09:28:52.721012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.912 [2024-11-17 09:28:52.721034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.912 [2024-11-17 09:28:52.721059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.912 [2024-11-17 09:28:52.721082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.912 [2024-11-17 09:28:52.721108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.912 [2024-11-17 09:28:52.721135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.912 [2024-11-17 09:28:52.721161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.912 [2024-11-17 09:28:52.721184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.912 [2024-11-17 09:28:52.721227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.912 [2024-11-17 09:28:52.721251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.912 [2024-11-17 09:28:52.721276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.913 [2024-11-17 09:28:52.721299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.913 [2024-11-17 09:28:52.721322] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001fb380 is same with the state(6) to be set
00:28:47.913 [2024-11-17 09:28:52.726893] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:28:47.913 [2024-11-17 09:28:52.726963] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:28:47.913 [2024-11-17 09:28:52.726992] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:28:47.913 task offset: 24576 on job bdev=Nvme3n1 fails
00:28:47.913
00:28:47.913 Latency(us)
00:28:47.913 [2024-11-17T08:28:52.926Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:47.913 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:47.913 Job: Nvme1n1 ended in about 1.09 seconds with error
00:28:47.913 Verification LBA range: start 0x0 length 0x400
00:28:47.913 Nvme1n1 : 1.09 116.96 7.31 58.48 0.00 361183.76 30292.20 315349.52
00:28:47.913 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:47.913 Job: Nvme2n1 ended in about 1.08 seconds with error
00:28:47.913 Verification LBA range: start 0x0 length 0x400
00:28:47.913 Nvme2n1 : 1.08 177.48 11.09 59.16 0.00 262764.66 21554.06 309135.74
00:28:47.913 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:47.913 Job: Nvme3n1 ended in about 1.06 seconds with error
00:28:47.913 Verification LBA range: start 0x0 length 0x400
00:28:47.913 Nvme3n1 : 1.06 181.51 11.34 60.50 0.00 251889.97 35340.89 302921.96
00:28:47.913 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:47.913 Job: Nvme4n1 ended in about 1.08 seconds with error
00:28:47.913 Verification LBA range: start 0x0 length 0x400
00:28:47.913 Nvme4n1 : 1.08 180.96 11.31 59.09 0.00 249283.82 13398.47 299815.06
00:28:47.913 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:47.913 Job: Nvme5n1 ended in about 1.10 seconds with error
00:28:47.913 Verification LBA range: start 0x0 length 0x400
00:28:47.913 Nvme5n1 : 1.10 125.55 7.85 58.23 0.00 319857.02 15437.37 307582.29
00:28:47.913 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:47.913 Job: Nvme6n1 ended in about 1.10 seconds with error
00:28:47.913 Verification LBA range: start 0x0 length 0x400
00:28:47.913 Nvme6n1 : 1.10 115.95 7.25 57.98 0.00 331509.00 23301.69 307582.29
00:28:47.913 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:47.913 Job: Nvme7n1 ended in about 1.11 seconds with error
00:28:47.913 Verification LBA range: start 0x0 length 0x400
00:28:47.913 Nvme7n1 : 1.11 115.46 7.22 57.73 0.00 326616.05 26214.40 340204.66
00:28:47.913 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:47.913 Job: Nvme8n1 ended in about 1.11 seconds with error
00:28:47.913 Verification LBA range: start 0x0 length 0x400
00:28:47.913 Nvme8n1 : 1.11 114.97 7.19 57.49 0.00 321593.14 39418.69 347971.89
00:28:47.913 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:47.913 Job: Nvme9n1 ended in about 1.14 seconds with error
00:28:47.913 Verification LBA range: start 0x0 length 0x400
00:28:47.913 Nvme9n1 : 1.14 112.67 7.04 56.34 0.00 322568.60 23690.05 338651.21
00:28:47.913 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:47.913 Job: Nvme10n1 ended in about 1.13 seconds with error
00:28:47.913 Verification LBA range: start 0x0 length 0x400
00:28:47.913 Nvme10n1 : 1.13 113.58 7.10 56.79 0.00 313108.35 25243.50 316902.97
00:28:47.913 [2024-11-17T08:28:52.926Z] ===================================================================================================================
00:28:47.913 [2024-11-17T08:28:52.926Z] Total : 1355.10 84.69 581.78 0.00 301354.49 13398.47 347971.89
00:28:47.913 [2024-11-17 09:28:52.815170] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:28:47.913 [2024-11-17 09:28:52.815285] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:28:47.913 [2024-11-17 09:28:52.815682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.913 [2024-11-17 09:28:52.815731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f6b00 with addr=10.0.0.2, port=4420
00:28:47.913 [2024-11-17 09:28:52.815761] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f6b00 is same with the state(6) to be set
00:28:47.913 [2024-11-17 09:28:52.815912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.913 [2024-11-17 09:28:52.815949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7f00 with addr=10.0.0.2, port=4420
00:28:47.913 [2024-11-17 09:28:52.815973] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7f00 is same with the state(6) to be set
00:28:47.913 [2024-11-17 09:28:52.816010] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:28:47.913 [2024-11-17 09:28:52.816048] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f4d00 (9): Bad file descriptor
00:28:47.913 [2024-11-17 09:28:52.816080] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f5700 (9): Bad file descriptor
00:28:47.913 [2024-11-17 09:28:52.816110] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f6100 (9): Bad file descriptor
00:28:47.913 [2024-11-17 09:28:52.816202] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress.
00:28:47.913 [2024-11-17 09:28:52.816238] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress.
00:28:47.913 [2024-11-17 09:28:52.816267] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress.
00:28:47.913 [2024-11-17 09:28:52.816296] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress.
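The Latency(us) table above can be cross-checked against the job headers: every job runs 64 KiB (65536-byte) I/Os, so MiB/s should equal IOPS x 65536 / 2^20, i.e. IOPS / 16. A small awk sketch using two rows of the table (the numbers are copied from the output above; nothing in this snippet is produced by bdevperf itself):

  # MiB/s = IOPS * 65536 / 2^20 for the Nvme1n1 and Total rows shown above.
  awk 'BEGIN {
    printf "Nvme1n1: %.2f MiB/s\n", 116.96  * 65536 / 1048576   # table: 7.31
    printf "Total:   %.2f MiB/s\n", 1355.10 * 65536 / 1048576   # table: 84.69
  }'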
00:28:47.913 [2024-11-17 09:28:52.816326] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7f00 (9): Bad file descriptor 00:28:47.913 [2024-11-17 09:28:52.816364] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f6b00 (9): Bad file descriptor 00:28:47.913 [2024-11-17 09:28:52.817511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.913 [2024-11-17 09:28:52.817552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f4300 with addr=10.0.0.2, port=4420 00:28:47.913 [2024-11-17 09:28:52.817576] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f4300 is same with the state(6) to be set 00:28:47.913 [2024-11-17 09:28:52.817711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.913 [2024-11-17 09:28:52.817745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:28:47.913 [2024-11-17 09:28:52.817769] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2f00 is same with the state(6) to be set 00:28:47.913 [2024-11-17 09:28:52.817901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.913 [2024-11-17 09:28:52.817934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f3900 with addr=10.0.0.2, port=4420 00:28:47.913 [2024-11-17 09:28:52.817957] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f3900 is same with the state(6) to be set 00:28:47.913 [2024-11-17 09:28:52.818080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.913 [2024-11-17 09:28:52.818113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:28:47.913 [2024-11-17 09:28:52.818137] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(6) to be set 00:28:47.913 [2024-11-17 09:28:52.818164] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:28:47.913 [2024-11-17 09:28:52.818186] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:28:47.913 [2024-11-17 09:28:52.818210] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:28:47.913 [2024-11-17 09:28:52.818236] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:28:47.913 [2024-11-17 09:28:52.818261] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:28:47.913 [2024-11-17 09:28:52.818281] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:28:47.913 [2024-11-17 09:28:52.818301] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:28:47.913 [2024-11-17 09:28:52.818321] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 
00:28:47.913 [2024-11-17 09:28:52.818343] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:28:47.913 [2024-11-17 09:28:52.818362] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:28:47.913 [2024-11-17 09:28:52.818392] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:28:47.913 [2024-11-17 09:28:52.818412] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:28:47.913 [2024-11-17 09:28:52.818434] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:28:47.913 [2024-11-17 09:28:52.818453] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:28:47.913 [2024-11-17 09:28:52.818473] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:28:47.913 [2024-11-17 09:28:52.818492] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:28:47.913 [2024-11-17 09:28:52.818568] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:28:47.913 [2024-11-17 09:28:52.818603] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 00:28:47.913 [2024-11-17 09:28:52.819355] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f4300 (9): Bad file descriptor 00:28:47.913 [2024-11-17 09:28:52.819409] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2f00 (9): Bad file descriptor 00:28:47.913 [2024-11-17 09:28:52.819442] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f3900 (9): Bad file descriptor 00:28:47.914 [2024-11-17 09:28:52.819472] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:28:47.914 [2024-11-17 09:28:52.819499] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:28:47.914 [2024-11-17 09:28:52.819519] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:28:47.914 [2024-11-17 09:28:52.819540] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:28:47.914 [2024-11-17 09:28:52.819561] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:28:47.914 [2024-11-17 09:28:52.819583] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:28:47.914 [2024-11-17 09:28:52.819602] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:28:47.914 [2024-11-17 09:28:52.819621] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:28:47.914 [2024-11-17 09:28:52.819639] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 
00:28:47.914 [2024-11-17 09:28:52.819825] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:28:47.914 [2024-11-17 09:28:52.819875] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:28:47.914 [2024-11-17 09:28:52.819906] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:28:47.914 [2024-11-17 09:28:52.819933] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:28:47.914 [2024-11-17 09:28:52.820010] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:28:47.914 [2024-11-17 09:28:52.820036] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:28:47.914 [2024-11-17 09:28:52.820057] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:28:47.914 [2024-11-17 09:28:52.820077] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:28:47.914 [2024-11-17 09:28:52.820099] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:28:47.914 [2024-11-17 09:28:52.820118] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:28:47.914 [2024-11-17 09:28:52.820137] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:28:47.914 [2024-11-17 09:28:52.820158] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:28:47.914 [2024-11-17 09:28:52.820179] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:28:47.914 [2024-11-17 09:28:52.820197] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:28:47.914 [2024-11-17 09:28:52.820216] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:28:47.914 [2024-11-17 09:28:52.820236] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:28:47.914 [2024-11-17 09:28:52.820256] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:28:47.914 [2024-11-17 09:28:52.820276] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:28:47.914 [2024-11-17 09:28:52.820300] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:28:47.914 [2024-11-17 09:28:52.820321] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 
00:28:47.914 [2024-11-17 09:28:52.820540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.914 [2024-11-17 09:28:52.820578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f6100 with addr=10.0.0.2, port=4420 00:28:47.914 [2024-11-17 09:28:52.820603] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f6100 is same with the state(6) to be set 00:28:47.914 [2024-11-17 09:28:52.820704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.914 [2024-11-17 09:28:52.820738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f5700 with addr=10.0.0.2, port=4420 00:28:47.914 [2024-11-17 09:28:52.820761] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f5700 is same with the state(6) to be set 00:28:47.914 [2024-11-17 09:28:52.820867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.914 [2024-11-17 09:28:52.820901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f4d00 with addr=10.0.0.2, port=4420 00:28:47.914 [2024-11-17 09:28:52.820924] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f4d00 is same with the state(6) to be set 00:28:47.914 [2024-11-17 09:28:52.821035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.914 [2024-11-17 09:28:52.821069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:28:47.914 [2024-11-17 09:28:52.821093] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:28:47.914 [2024-11-17 09:28:52.821164] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f6100 (9): Bad file descriptor 00:28:47.914 [2024-11-17 09:28:52.821199] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f5700 (9): Bad file descriptor 00:28:47.914 [2024-11-17 09:28:52.821229] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f4d00 (9): Bad file descriptor 00:28:47.914 [2024-11-17 09:28:52.821258] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:28:47.914 [2024-11-17 09:28:52.821330] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:28:47.914 [2024-11-17 09:28:52.821357] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:28:47.914 [2024-11-17 09:28:52.821387] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:28:47.914 [2024-11-17 09:28:52.821409] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 
00:28:47.914 [2024-11-17 09:28:52.821431] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:28:47.914 [2024-11-17 09:28:52.821451] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:28:47.914 [2024-11-17 09:28:52.821471] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:28:47.914 [2024-11-17 09:28:52.821490] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:28:47.914 [2024-11-17 09:28:52.821511] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:28:47.914 [2024-11-17 09:28:52.821530] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:28:47.914 [2024-11-17 09:28:52.821550] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:28:47.914 [2024-11-17 09:28:52.821574] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:28:47.914 [2024-11-17 09:28:52.821596] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:28:47.914 [2024-11-17 09:28:52.821616] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:28:47.914 [2024-11-17 09:28:52.821635] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:28:47.914 [2024-11-17 09:28:52.821654] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
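Every posix_sock_create failure above reports errno = 111, which on Linux is ECONNREFUSED: by this point the target side of the shutdown test is no longer listening on 10.0.0.2:4420, so each reset/reconnect attempt is refused and the controllers end up in the failed state. A hedged way to confirm the errno mapping and probe the listener from the initiator host (address and port are taken from the log; the probe is purely illustrative and not part of shutdown.sh):

  # ECONNREFUSED is errno 111 on Linux (assumes kernel headers are installed).
  grep -w 111 /usr/include/asm-generic/errno.h
  # Probe the NVMe-oF TCP listener used by this run; refusal or timeout is the
  # expected outcome once the target has been shut down.
  timeout 1 bash -c '</dev/tcp/10.0.0.2/4420' && echo listening || echo refused/unreachable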
00:28:50.445 09:28:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:28:51.823 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 3056557 00:28:51.823 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:28:51.823 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3056557 00:28:51.823 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:28:51.823 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:51.823 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:28:51.823 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:51.823 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 3056557 00:28:51.823 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:28:51.823 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:51.823 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:28:51.823 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:28:51.823 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:28:51.824 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:51.824 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:28:51.824 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:28:51.824 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:51.824 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:51.824 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:28:51.824 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:51.824 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:28:51.824 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:51.824 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:28:51.824 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:51.824 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:51.824 rmmod nvme_tcp 00:28:51.824 
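The NOT wait 3056557 trace above is how the test asserts that the bdevperf process died when the target was shut down: NOT (defined in test/common/autotest_common.sh) succeeds only when the wrapped command fails, and wait on the already-dead PID returns a non-zero status (255 here, normalized to 1 by the es bookkeeping). A minimal stand-in for that pattern, with a deliberately simplified NOT (the real helper also validates its argument and maps exit codes), and sleep standing in for a workload that has already failed:

  # Simplified sketch of the NOT-wait assertion traced above.
  NOT() { ! "$@"; }
  sleep 60 & victim=$!
  kill -9 "$victim"                       # emulate the workload dying during shutdown
  NOT wait "$victim" && echo 'workload exited with an error, as expected'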
rmmod nvme_fabrics 00:28:51.824 rmmod nvme_keyring 00:28:51.824 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:51.824 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:28:51.824 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:28:51.824 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 3056242 ']' 00:28:51.824 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 3056242 00:28:51.824 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 3056242 ']' 00:28:51.824 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 3056242 00:28:51.824 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3056242) - No such process 00:28:51.824 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 3056242 is not found' 00:28:51.824 Process with pid 3056242 is not found 00:28:51.824 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:51.824 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:51.824 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:51.824 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:28:51.824 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:28:51.824 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:51.824 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:28:51.824 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:51.824 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:51.824 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:51.824 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:51.824 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:53.728 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:53.728 00:28:53.728 real 0m11.538s 00:28:53.728 user 0m34.199s 00:28:53.728 sys 0m1.985s 00:28:53.728 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:53.728 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:53.728 ************************************ 00:28:53.728 END TEST nvmf_shutdown_tc3 00:28:53.728 ************************************ 00:28:53.728 09:28:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:28:53.728 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:28:53.728 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:28:53.728 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:53.728 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:53.728 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:53.728 ************************************ 00:28:53.728 START TEST nvmf_shutdown_tc4 00:28:53.728 ************************************ 00:28:53.728 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:28:53.728 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:28:53.728 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:53.728 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:53.728 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:53.728 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:53.728 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:53.728 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:53.728 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:53.728 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:53.728 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:53.728 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:53.728 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:53.728 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:53.728 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:53.728 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:53.728 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:53.728 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:53.728 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:53.728 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:53.728 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:28:53.728 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:53.728 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:28:53.728 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:53.728 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:28:53.728 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:28:53.728 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:28:53.728 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:28:53.729 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:28:53.729 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:53.729 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:53.729 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:53.729 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:53.729 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:53.729 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:53.729 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:53.729 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:53.729 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:53.729 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:53.729 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:53.729 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:53.729 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:53.729 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:53.729 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:53.729 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:53.729 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:53.729 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:53.729 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:53.729 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:53.729 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:53.729 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:53.729 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:53.729 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:53.729 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:53.729 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:53.729 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:53.729 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:53.729 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:53.729 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:53.729 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:53.729 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:53.729 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:53.729 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:53.729 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:53.729 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:53.729 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:53.729 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:53.729 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:53.729 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:53.729 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:53.729 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:53.729 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:53.729 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:53.729 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:53.729 09:28:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:53.729 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:53.729 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:53.729 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:53.729 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:53.729 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:53.729 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:53.729 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:53.729 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:53.729 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:53.729 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:53.729 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:53.729 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:53.729 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:53.729 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:28:53.729 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:53.729 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:53.729 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:53.729 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:53.729 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:53.729 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:53.729 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:53.729 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:53.729 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:53.729 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:53.729 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:53.729 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:53.729 09:28:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:53.729 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:53.729 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:53.729 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:53.729 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:53.729 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:53.729 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:53.729 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:53.729 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:53.729 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:53.729 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:53.729 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:53.729 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:53.729 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:53.729 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:53.729 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:28:53.729 00:28:53.729 --- 10.0.0.2 ping statistics --- 00:28:53.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:53.729 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:28:53.729 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:53.729 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:53.729 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:28:53.729 00:28:53.729 --- 10.0.0.1 ping statistics --- 00:28:53.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:53.729 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:28:53.729 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:53.729 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:28:53.729 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:53.730 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:53.730 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:53.730 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:53.730 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:53.730 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:53.730 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:53.730 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:53.730 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:53.730 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:53.730 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:53.988 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=3057739 00:28:53.988 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 3057739 00:28:53.988 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:53.988 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 3057739 ']' 00:28:53.988 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:53.988 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:53.988 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:53.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
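[editor's note] The trace above (nvmf_tcp_init in nvmf/common.sh) builds the loopback test topology for the E810 pair: the first port is moved into a private network namespace as the NVMe/TCP target side, the second port stays in the root namespace as the initiator side, TCP port 4420 is opened, and reachability is verified with ping before nvmf_tgt is launched inside the namespace. A minimal bash sketch of those steps follows; the interface names (cvl_0_0/cvl_0_1), the namespace name and the 10.0.0.x addresses are the values from this particular run, not fixed constants, and the nvmf_tgt path is shortened to a relative one.

#!/usr/bin/env bash
# Condensed sketch of the nvmf_tcp_init steps traced above (values taken from this run).
set -e
TGT_IF=cvl_0_0          # port that becomes the target-side interface
INI_IF=cvl_0_1          # port that stays in the root namespace as the initiator side
NS=cvl_0_0_ns_spdk      # namespace name used by this harness

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"                          # target NIC lives inside the namespace
ip addr add 10.0.0.1/24 dev "$INI_IF"                      # initiator address
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"  # target address
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                         # initiator -> target check
ip netns exec "$NS" ping -c 1 10.0.0.1                     # target -> initiator check

# The target is then started inside the namespace (path abbreviated here):
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &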
00:28:53.988 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:53.988 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:53.988 [2024-11-17 09:28:58.833796] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:28:53.988 [2024-11-17 09:28:58.833931] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:53.988 [2024-11-17 09:28:58.981900] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:54.246 [2024-11-17 09:28:59.126177] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:54.246 [2024-11-17 09:28:59.126266] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:54.246 [2024-11-17 09:28:59.126293] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:54.246 [2024-11-17 09:28:59.126317] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:54.246 [2024-11-17 09:28:59.126347] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:54.246 [2024-11-17 09:28:59.129267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:54.246 [2024-11-17 09:28:59.129358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:54.246 [2024-11-17 09:28:59.129423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:54.246 [2024-11-17 09:28:59.129428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:54.811 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:54.811 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:28:54.811 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:54.811 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:54.811 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:55.069 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:55.069 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:55.069 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.069 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:55.069 [2024-11-17 09:28:59.837833] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:55.069 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.069 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:55.069 09:28:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:55.069 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:55.069 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:55.069 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:55.069 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:55.069 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:55.069 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:55.069 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:55.069 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:55.069 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:55.069 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:55.069 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:55.069 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:55.069 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:55.069 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:55.069 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:55.069 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:55.069 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:55.069 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:55.069 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:55.069 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:55.069 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:55.069 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:55.069 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:55.069 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:55.069 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.069 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:55.069 Malloc1 
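[editor's note] At shutdown.sh@21-36 the tc4 test creates the TCP transport and then appends one block of RPCs per subsystem to rpcs.txt before submitting the whole file as a single rpc_cmd batch; the Malloc1..Malloc10 bdevs and the cnode1/cnode10 NQNs listening on 10.0.0.2:4420 that appear later in this log come from that batch. The here-doc itself lives in target/shutdown.sh and is not echoed into the trace, so the sketch below is an illustrative reconstruction, not the script's exact contents: the malloc size/block-size values and serial numbers are placeholders, and the rpc.py-over-stdin call stands in for the harness's rpc_cmd wrapper.

#!/usr/bin/env bash
# Illustrative reconstruction of the per-subsystem RPC batch built by shutdown.sh (see caveats above).
rpc_py="./scripts/rpc.py -s /var/tmp/spdk.sock"

$rpc_py nvmf_create_transport -t tcp -o -u 8192    # as traced at target/shutdown.sh@21

rm -f rpcs.txt
for i in {1..10}; do
  cat >> rpcs.txt <<EOF
bdev_malloc_create -b Malloc$i 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done

# Submit the file in one go (assumed stand-in for the single batched rpc_cmd call at shutdown.sh@36).
$rpc_py < rpcs.txt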
00:28:55.069 [2024-11-17 09:28:59.983496] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:55.069 Malloc2 00:28:55.327 Malloc3 00:28:55.327 Malloc4 00:28:55.585 Malloc5 00:28:55.585 Malloc6 00:28:55.585 Malloc7 00:28:55.843 Malloc8 00:28:55.843 Malloc9 00:28:56.101 Malloc10 00:28:56.101 09:29:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.101 09:29:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:56.101 09:29:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:56.101 09:29:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:56.101 09:29:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=3058046 00:28:56.101 09:29:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:28:56.101 09:29:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:28:56.101 [2024-11-17 09:29:01.017753] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:29:01.373 09:29:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:01.373 09:29:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 3057739 00:29:01.373 09:29:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 3057739 ']' 00:29:01.373 09:29:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 3057739 00:29:01.373 09:29:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:29:01.373 09:29:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:01.373 09:29:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3057739 00:29:01.373 09:29:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:01.373 09:29:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:01.373 09:29:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3057739' 00:29:01.373 killing process with pid 3057739 00:29:01.373 09:29:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 3057739 00:29:01.373 09:29:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 3057739 00:29:01.373 [2024-11-17 09:29:05.957092] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000cc80 is same with the state(6) to be set 00:29:01.373 [2024-11-17 09:29:05.957191] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000cc80 is same with the state(6) to be set 00:29:01.373 [2024-11-17 09:29:05.957218] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000cc80 is same with the state(6) to be set 00:29:01.373 [2024-11-17 09:29:05.957238] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000cc80 is same with the state(6) to be set 00:29:01.373 [2024-11-17 09:29:05.957258] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000cc80 is same with the state(6) to be set 00:29:01.373 [2024-11-17 09:29:05.957277] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000cc80 is same with the state(6) to be set 00:29:01.373 [2024-11-17 09:29:05.960891] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005480 is same with the state(6) to be set 00:29:01.373 [2024-11-17 09:29:05.960941] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005480 is same with the state(6) to be set 00:29:01.373 [2024-11-17 09:29:05.960966] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005480 is same with the state(6) to be set 00:29:01.374 [2024-11-17 09:29:05.960988] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005480 is same with the state(6) to be set 00:29:01.374 [2024-11-17 09:29:05.961008] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005480 is same with the state(6) to be set 00:29:01.374 [2024-11-17 09:29:05.961027] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005480 is same with the state(6) to be set 00:29:01.374 [2024-11-17 09:29:05.961085] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005480 is same with the state(6) to be set 00:29:01.374 [2024-11-17 09:29:05.962762] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:01.374 [2024-11-17 09:29:05.962806] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:01.374 [2024-11-17 09:29:05.962830] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:01.374 [2024-11-17 09:29:05.962851] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:01.374 [2024-11-17 09:29:05.962871] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:01.374 [2024-11-17 09:29:05.962890] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:01.374 [2024-11-17 09:29:05.962908] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:01.374 [2024-11-17 09:29:05.969265] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006480 is same with the state(6) to be set 00:29:01.374 [2024-11-17 09:29:05.969334] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006480 is same with the state(6) to be set 00:29:01.374 [2024-11-17 09:29:05.969377] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006480 is same with the state(6) to be set 00:29:01.374 [2024-11-17 09:29:05.969400] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006480 is same with the state(6) to be set 00:29:01.374 [2024-11-17 09:29:05.969419] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006480 is same with the state(6) to be set 00:29:01.374 [2024-11-17 09:29:05.969438] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006480 is same with the state(6) to be set 00:29:01.374 [2024-11-17 09:29:05.973146] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006c80 is same with the state(6) to be set 00:29:01.374 [2024-11-17 09:29:05.973200] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006c80 is same with the state(6) to be set 00:29:01.374 [2024-11-17 09:29:05.973224] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006c80 is same with the state(6) to be set 00:29:01.374 [2024-11-17 09:29:05.973243] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006c80 is same with the state(6) to be set 00:29:01.374 [2024-11-17 09:29:05.973263] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006c80 is same with the state(6) to be set 00:29:01.374 [2024-11-17 09:29:05.973282] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006c80 is same with the state(6) to be set 00:29:01.374 [2024-11-17 09:29:05.973301] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006c80 is same with the state(6) to be set 00:29:01.374 Write completed with error (sct=0, sc=8) 00:29:01.374 Write completed with error (sct=0, sc=8) 00:29:01.374 Write completed with error (sct=0, sc=8) 00:29:01.374 starting I/O failed: -6 00:29:01.374 Write completed with error (sct=0, sc=8) 00:29:01.374 Write completed with error (sct=0, sc=8) 00:29:01.374 Write completed with error (sct=0, sc=8) 00:29:01.374 Write completed with error (sct=0, sc=8) 00:29:01.374 starting I/O failed: -6 00:29:01.374 Write completed with error (sct=0, sc=8) 00:29:01.374 Write completed with error (sct=0, sc=8) 00:29:01.374 Write completed with error (sct=0, sc=8) 00:29:01.374 Write completed with error (sct=0, sc=8) 00:29:01.374 starting I/O failed: -6 00:29:01.374 Write completed with error (sct=0, sc=8) 00:29:01.374 Write completed with error (sct=0, sc=8) 00:29:01.374 Write completed with error (sct=0, sc=8) 00:29:01.374 Write completed with error (sct=0, sc=8) 00:29:01.374 starting I/O failed: -6 00:29:01.374 Write completed with error (sct=0, sc=8) 00:29:01.374 Write completed with error (sct=0, sc=8) 00:29:01.374 Write completed with error (sct=0, sc=8) 00:29:01.374 Write completed with error (sct=0, sc=8) 00:29:01.374 starting I/O failed: -6 00:29:01.374 Write completed with error (sct=0, sc=8) 00:29:01.374 Write completed with error (sct=0, sc=8) 00:29:01.374 Write completed with error (sct=0, sc=8) 00:29:01.374 Write completed with error (sct=0, sc=8) 00:29:01.374 starting I/O failed: -6 00:29:01.374 Write completed with error (sct=0, sc=8) 00:29:01.374 Write completed with error (sct=0, 
sc=8) 00:29:01.374 Write completed with error (sct=0, sc=8) 00:29:01.374 Write completed with error (sct=0, sc=8) 00:29:01.374 starting I/O failed: -6 00:29:01.374 Write completed with error (sct=0, sc=8) 00:29:01.374 Write completed with error (sct=0, sc=8) 00:29:01.374 Write completed with error (sct=0, sc=8) 00:29:01.374 Write completed with error (sct=0, sc=8) 00:29:01.374 starting I/O failed: -6 00:29:01.374 Write completed with error (sct=0, sc=8) 00:29:01.374 Write completed with error (sct=0, sc=8) 00:29:01.374 Write completed with error (sct=0, sc=8) 00:29:01.374 Write completed with error (sct=0, sc=8) 00:29:01.374 starting I/O failed: -6 00:29:01.374 Write completed with error (sct=0, sc=8) 00:29:01.374 Write completed with error (sct=0, sc=8) 00:29:01.374 Write completed with error (sct=0, sc=8) 00:29:01.374 [2024-11-17 09:29:05.977849] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.374 starting I/O failed: -6 00:29:01.374 Write completed with error (sct=0, sc=8) 00:29:01.374 Write completed with error (sct=0, sc=8) 00:29:01.374 starting I/O failed: -6 00:29:01.374 Write completed with error (sct=0, sc=8) 00:29:01.374 Write completed with error (sct=0, sc=8) 00:29:01.374 starting I/O failed: -6 00:29:01.374 Write completed with error (sct=0, sc=8) 00:29:01.374 Write completed with error (sct=0, sc=8) 00:29:01.374 starting I/O failed: -6 00:29:01.374 Write completed with error (sct=0, sc=8) 00:29:01.374 Write completed with error (sct=0, sc=8) 00:29:01.374 starting I/O failed: -6 00:29:01.374 [2024-11-17 09:29:05.978551] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(6) to be set 00:29:01.374 Write completed with error (sct=0, sc=8) 00:29:01.374 [2024-11-17 09:29:05.978595] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(6) to be set 00:29:01.374 [2024-11-17 09:29:05.978617] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same Write completed with error (sct=0, sc=8) 00:29:01.374 with the state(6) to be set 00:29:01.374 starting I/O failed: -6 00:29:01.374 [2024-11-17 09:29:05.978638] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(6) to be set 00:29:01.374 [2024-11-17 09:29:05.978665] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(6) to be set 00:29:01.374 [2024-11-17 09:29:05.978684] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same Write completed with error (sct=0, sc=8) 00:29:01.374 with the state(6) to be set 00:29:01.374 [2024-11-17 09:29:05.978703] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(6) to be set 00:29:01.374 Write completed with error (sct=0, sc=8) 00:29:01.374 [2024-11-17 09:29:05.978722] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same starting I/O failed: -6 00:29:01.374 with the state(6) to be set 00:29:01.374 [2024-11-17 09:29:05.978741] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(6) to be set 00:29:01.374 [2024-11-17 09:29:05.978759] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x61800000b880 is same with the state(6) to be set 00:29:01.374 Write completed with error (sct=0, sc=8) 00:29:01.374 [2024-11-17 09:29:05.978776] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(6) to be set 00:29:01.374 Write completed with error (sct=0, sc=8) 00:29:01.374 starting I/O failed: -6 00:29:01.374 Write completed with error (sct=0, sc=8) 00:29:01.374 Write completed with error (sct=0, sc=8) 00:29:01.374 starting I/O failed: -6 00:29:01.374 Write completed with error (sct=0, sc=8) 00:29:01.374 Write completed with error (sct=0, sc=8) 00:29:01.374 starting I/O failed: -6 00:29:01.374 Write completed with error (sct=0, sc=8) 00:29:01.374 Write completed with error (sct=0, sc=8) 00:29:01.374 starting I/O failed: -6 00:29:01.374 Write completed with error (sct=0, sc=8) 00:29:01.374 Write completed with error (sct=0, sc=8) 00:29:01.374 starting I/O failed: -6 00:29:01.374 Write completed with error (sct=0, sc=8) 00:29:01.374 Write completed with error (sct=0, sc=8) 00:29:01.374 starting I/O failed: -6 00:29:01.374 Write completed with error (sct=0, sc=8) 00:29:01.374 Write completed with error (sct=0, sc=8) 00:29:01.374 starting I/O failed: -6 00:29:01.374 Write completed with error (sct=0, sc=8) 00:29:01.374 Write completed with error (sct=0, sc=8) 00:29:01.374 starting I/O failed: -6 00:29:01.374 Write completed with error (sct=0, sc=8) 00:29:01.374 Write completed with error (sct=0, sc=8) 00:29:01.374 starting I/O failed: -6 00:29:01.374 Write completed with error (sct=0, sc=8) 00:29:01.374 Write completed with error (sct=0, sc=8) 00:29:01.374 starting I/O failed: -6 00:29:01.374 Write completed with error (sct=0, sc=8) 00:29:01.374 Write completed with error (sct=0, sc=8) 00:29:01.374 starting I/O failed: -6 00:29:01.375 Write completed with error (sct=0, sc=8) 00:29:01.375 Write completed with error (sct=0, sc=8) 00:29:01.375 starting I/O failed: -6 00:29:01.375 Write completed with error (sct=0, sc=8) 00:29:01.375 [2024-11-17 09:29:05.979850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:01.375 Write completed with error (sct=0, sc=8) 00:29:01.375 starting I/O failed: -6 00:29:01.375 Write completed with error (sct=0, sc=8) 00:29:01.375 starting I/O failed: -6 00:29:01.375 Write completed with error (sct=0, sc=8) 00:29:01.375 starting I/O failed: -6 00:29:01.375 Write completed with error (sct=0, sc=8) 00:29:01.375 Write completed with error (sct=0, sc=8) 00:29:01.375 starting I/O failed: -6 00:29:01.375 Write completed with error (sct=0, sc=8) 00:29:01.375 starting I/O failed: -6 00:29:01.375 Write completed with error (sct=0, sc=8) 00:29:01.375 starting I/O failed: -6 00:29:01.375 Write completed with error (sct=0, sc=8) 00:29:01.375 Write completed with error (sct=0, sc=8) 00:29:01.375 starting I/O failed: -6 00:29:01.375 Write completed with error (sct=0, sc=8) 00:29:01.375 starting I/O failed: -6 00:29:01.375 Write completed with error (sct=0, sc=8) 00:29:01.375 starting I/O failed: -6 00:29:01.375 Write completed with error (sct=0, sc=8) 00:29:01.375 Write completed with error (sct=0, sc=8) 00:29:01.375 starting I/O failed: -6 00:29:01.375 [2024-11-17 09:29:05.980666] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(6) to be set 00:29:01.375 Write completed with error (sct=0, sc=8) 00:29:01.375 starting I/O 
failed: -6 00:29:01.375 [2024-11-17 09:29:05.980709] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(6) to be set 00:29:01.375 [2024-11-17 09:29:05.980731] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(6) to be set 00:29:01.375 [2024-11-17 09:29:05.980749] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(6) to be set 00:29:01.375 Write completed with error (sct=0, sc=8) 00:29:01.375 starting I/O failed: -6 00:29:01.375 [2024-11-17 09:29:05.980770] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(6) to be set 00:29:01.375 [2024-11-17 09:29:05.980788] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(6) to be set 00:29:01.375 Write completed with error (sct=0, sc=8) 00:29:01.375 Write completed with error (sct=0, sc=8) 00:29:01.375 starting I/O failed: -6 00:29:01.375 Write completed with error (sct=0, sc=8) 00:29:01.375 starting I/O failed: -6 00:29:01.375 Write completed with error (sct=0, sc=8) 00:29:01.375 starting I/O failed: -6 00:29:01.375 Write completed with error (sct=0, sc=8) 00:29:01.375 Write completed with error (sct=0, sc=8) 00:29:01.375 starting I/O failed: -6 00:29:01.375 Write completed with error (sct=0, sc=8) 00:29:01.375 starting I/O failed: -6 00:29:01.375 Write completed with error (sct=0, sc=8) 00:29:01.375 starting I/O failed: -6 00:29:01.375 Write completed with error (sct=0, sc=8) 00:29:01.375 Write completed with error (sct=0, sc=8) 00:29:01.375 starting I/O failed: -6 00:29:01.375 Write completed with error (sct=0, sc=8) 00:29:01.375 starting I/O failed: -6 00:29:01.375 Write completed with error (sct=0, sc=8) 00:29:01.375 starting I/O failed: -6 00:29:01.375 Write completed with error (sct=0, sc=8) 00:29:01.375 Write completed with error (sct=0, sc=8) 00:29:01.375 starting I/O failed: -6 00:29:01.375 Write completed with error (sct=0, sc=8) 00:29:01.375 starting I/O failed: -6 00:29:01.375 Write completed with error (sct=0, sc=8) 00:29:01.375 starting I/O failed: -6 00:29:01.375 Write completed with error (sct=0, sc=8) 00:29:01.375 Write completed with error (sct=0, sc=8) 00:29:01.375 starting I/O failed: -6 00:29:01.375 Write completed with error (sct=0, sc=8) 00:29:01.375 starting I/O failed: -6 00:29:01.375 Write completed with error (sct=0, sc=8) 00:29:01.375 starting I/O failed: -6 00:29:01.375 Write completed with error (sct=0, sc=8) 00:29:01.375 Write completed with error (sct=0, sc=8) 00:29:01.375 starting I/O failed: -6 00:29:01.375 Write completed with error (sct=0, sc=8) 00:29:01.375 starting I/O failed: -6 00:29:01.375 Write completed with error (sct=0, sc=8) 00:29:01.375 starting I/O failed: -6 00:29:01.375 Write completed with error (sct=0, sc=8) 00:29:01.375 Write completed with error (sct=0, sc=8) 00:29:01.375 starting I/O failed: -6 00:29:01.375 Write completed with error (sct=0, sc=8) 00:29:01.375 starting I/O failed: -6 00:29:01.375 Write completed with error (sct=0, sc=8) 00:29:01.375 starting I/O failed: -6 00:29:01.375 Write completed with error (sct=0, sc=8) 00:29:01.375 Write completed with error (sct=0, sc=8) 00:29:01.375 starting I/O failed: -6 00:29:01.375 Write completed with error (sct=0, sc=8) 00:29:01.375 starting I/O failed: -6 00:29:01.375 Write completed with error (sct=0, sc=8) 00:29:01.375 starting I/O 
failed: -6 00:29:01.375 Write completed with error (sct=0, sc=8) 00:29:01.375 Write completed with error (sct=0, sc=8) 00:29:01.375 starting I/O failed: -6 00:29:01.375 [2024-11-17 09:29:05.982513] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.375 starting I/O failed: -6 00:29:01.375 Write completed with error (sct=0, sc=8) 00:29:01.375 starting I/O failed: -6 00:29:01.375 Write completed with error (sct=0, sc=8) 00:29:01.375 starting I/O failed: -6 00:29:01.375 Write completed with error (sct=0, sc=8) 00:29:01.375 starting I/O failed: -6 00:29:01.375 Write completed with error (sct=0, sc=8) 00:29:01.375 starting I/O failed: -6 00:29:01.375 Write completed with error (sct=0, sc=8) 00:29:01.375 starting I/O failed: -6 00:29:01.375 Write completed with error (sct=0, sc=8) 00:29:01.375 starting I/O failed: -6 00:29:01.375 Write completed with error (sct=0, sc=8) 00:29:01.375 starting I/O failed: -6 00:29:01.375 Write completed with error (sct=0, sc=8) 00:29:01.375 starting I/O failed: -6 00:29:01.375 Write completed with error (sct=0, sc=8) 00:29:01.375 starting I/O failed: -6 00:29:01.375 Write completed with error (sct=0, sc=8) 00:29:01.375 starting I/O failed: -6 00:29:01.375 Write completed with error (sct=0, sc=8) 00:29:01.375 starting I/O failed: -6 00:29:01.375 Write completed with error (sct=0, sc=8) 00:29:01.375 starting I/O failed: -6 00:29:01.375 Write completed with error (sct=0, sc=8) 00:29:01.375 starting I/O failed: -6 00:29:01.375 Write completed with error (sct=0, sc=8) 00:29:01.375 starting I/O failed: -6 00:29:01.375 Write completed with error (sct=0, sc=8) 00:29:01.375 starting I/O failed: -6 00:29:01.375 Write completed with error (sct=0, sc=8) 00:29:01.375 starting I/O failed: -6 00:29:01.375 Write completed with error (sct=0, sc=8) 00:29:01.375 starting I/O failed: -6 00:29:01.375 Write completed with error (sct=0, sc=8) 00:29:01.375 starting I/O failed: -6 00:29:01.375 Write completed with error (sct=0, sc=8) 00:29:01.375 starting I/O failed: -6 00:29:01.375 Write completed with error (sct=0, sc=8) 00:29:01.375 starting I/O failed: -6 00:29:01.375 Write completed with error (sct=0, sc=8) 00:29:01.375 starting I/O failed: -6 00:29:01.375 Write completed with error (sct=0, sc=8) 00:29:01.375 starting I/O failed: -6 00:29:01.375 Write completed with error (sct=0, sc=8) 00:29:01.375 starting I/O failed: -6 00:29:01.375 Write completed with error (sct=0, sc=8) 00:29:01.375 starting I/O failed: -6 00:29:01.375 Write completed with error (sct=0, sc=8) 00:29:01.375 starting I/O failed: -6 00:29:01.375 Write completed with error (sct=0, sc=8) 00:29:01.375 starting I/O failed: -6 00:29:01.375 Write completed with error (sct=0, sc=8) 00:29:01.375 starting I/O failed: -6 00:29:01.375 Write completed with error (sct=0, sc=8) 00:29:01.375 starting I/O failed: -6 00:29:01.375 Write completed with error (sct=0, sc=8) 00:29:01.375 starting I/O failed: -6 00:29:01.375 Write completed with error (sct=0, sc=8) 00:29:01.375 starting I/O failed: -6 00:29:01.375 Write completed with error (sct=0, sc=8) 00:29:01.375 starting I/O failed: -6 00:29:01.375 Write completed with error (sct=0, sc=8) 00:29:01.375 starting I/O failed: -6 00:29:01.375 Write completed with error (sct=0, sc=8) 00:29:01.375 starting I/O failed: -6 00:29:01.375 Write completed with error (sct=0, sc=8) 00:29:01.376 starting I/O failed: -6 00:29:01.376 Write completed with error (sct=0, sc=8) 
00:29:01.376 starting I/O failed: -6 00:29:01.376 Write completed with error (sct=0, sc=8) 00:29:01.376 starting I/O failed: -6 00:29:01.376 Write completed with error (sct=0, sc=8) 00:29:01.376 starting I/O failed: -6 00:29:01.376 Write completed with error (sct=0, sc=8) 00:29:01.376 starting I/O failed: -6 00:29:01.376 Write completed with error (sct=0, sc=8) 00:29:01.376 starting I/O failed: -6 00:29:01.376 Write completed with error (sct=0, sc=8) 00:29:01.376 starting I/O failed: -6 00:29:01.376 Write completed with error (sct=0, sc=8) 00:29:01.376 starting I/O failed: -6 00:29:01.376 Write completed with error (sct=0, sc=8) 00:29:01.376 starting I/O failed: -6 00:29:01.376 Write completed with error (sct=0, sc=8) 00:29:01.376 starting I/O failed: -6 00:29:01.376 Write completed with error (sct=0, sc=8) 00:29:01.376 starting I/O failed: -6 00:29:01.376 Write completed with error (sct=0, sc=8) 00:29:01.376 starting I/O failed: -6 00:29:01.376 Write completed with error (sct=0, sc=8) 00:29:01.376 starting I/O failed: -6 00:29:01.376 Write completed with error (sct=0, sc=8) 00:29:01.376 starting I/O failed: -6 00:29:01.376 Write completed with error (sct=0, sc=8) 00:29:01.376 starting I/O failed: -6 00:29:01.376 Write completed with error (sct=0, sc=8) 00:29:01.376 starting I/O failed: -6 00:29:01.376 Write completed with error (sct=0, sc=8) 00:29:01.376 starting I/O failed: -6 00:29:01.376 Write completed with error (sct=0, sc=8) 00:29:01.376 starting I/O failed: -6 00:29:01.376 Write completed with error (sct=0, sc=8) 00:29:01.376 starting I/O failed: -6 00:29:01.376 Write completed with error (sct=0, sc=8) 00:29:01.376 starting I/O failed: -6 00:29:01.376 Write completed with error (sct=0, sc=8) 00:29:01.376 starting I/O failed: -6 00:29:01.376 Write completed with error (sct=0, sc=8) 00:29:01.376 starting I/O failed: -6 00:29:01.376 Write completed with error (sct=0, sc=8) 00:29:01.376 starting I/O failed: -6 00:29:01.376 Write completed with error (sct=0, sc=8) 00:29:01.376 starting I/O failed: -6 00:29:01.376 Write completed with error (sct=0, sc=8) 00:29:01.376 starting I/O failed: -6 00:29:01.376 [2024-11-17 09:29:05.988270] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009c80 is same with the state(6) to be set 00:29:01.376 [2024-11-17 09:29:05.988314] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009c80 is same with the state(6) to be set 00:29:01.376 [2024-11-17 09:29:05.988337] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009c80 is same with the state(6) to be set 00:29:01.376 [2024-11-17 09:29:05.988365] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009c80 is same with the state(6) to be set 00:29:01.376 [2024-11-17 09:29:05.988403] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009c80 is same with the state(6) to be set 00:29:01.376 [2024-11-17 09:29:05.988423] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009c80 is same with the state(6) to be set 00:29:01.376 Write completed with error (sct=0, sc=8) 00:29:01.376 starting I/O failed: -6 00:29:01.376 Write completed with error (sct=0, sc=8) 00:29:01.376 starting I/O failed: -6 00:29:01.376 Write completed with error (sct=0, sc=8) 00:29:01.376 starting I/O failed: -6 00:29:01.376 Write completed with error (sct=0, sc=8) 00:29:01.376 starting I/O failed: -6 00:29:01.376 [2024-11-17 09:29:05.992151] 
nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:01.376 NVMe io qpair process completion error 00:29:01.376 [2024-11-17 09:29:05.996886] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000d880 is same with the state(6) to be set 00:29:01.376 [2024-11-17 09:29:05.996935] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000d880 is same with the state(6) to be set 00:29:01.376 [2024-11-17 09:29:05.996959] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000d880 is same with the state(6) to be set 00:29:01.376 [2024-11-17 09:29:05.996978] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000d880 is same with the state(6) to be set 00:29:01.376 [2024-11-17 09:29:05.996996] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000d880 is same with the state(6) to be set 00:29:01.376 [2024-11-17 09:29:05.997015] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000d880 is same with the state(6) to be set 00:29:01.376 [2024-11-17 09:29:05.997033] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000d880 is same with the state(6) to be set 00:29:01.376 [2024-11-17 09:29:05.997051] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000d880 is same with the state(6) to be set 00:29:01.376 [2024-11-17 09:29:05.997069] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000d880 is same with the state(6) to be set 00:29:01.376 [2024-11-17 09:29:05.997087] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000d880 is same with the state(6) to be set 00:29:01.376 [2024-11-17 09:29:05.998603] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000dc80 is same with the state(6) to be set 00:29:01.376 [2024-11-17 09:29:05.998646] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000dc80 is same with the state(6) to be set 00:29:01.376 [2024-11-17 09:29:05.998675] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000dc80 is same with the state(6) to be set 00:29:01.376 [2024-11-17 09:29:05.998695] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000dc80 is same with the state(6) to be set 00:29:01.376 [2024-11-17 09:29:05.998714] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000dc80 is same with the state(6) to be set 00:29:01.376 [2024-11-17 09:29:05.998748] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000dc80 is same with the state(6) to be set 00:29:01.376 [2024-11-17 09:29:05.998768] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000dc80 is same with the state(6) to be set 00:29:01.376 [2024-11-17 09:29:05.998787] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000dc80 is same with the state(6) to be set 00:29:01.376 [2024-11-17 09:29:05.999090] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000d080 is same with the state(6) to be set 00:29:01.376 [2024-11-17 09:29:05.999132] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x61800000d080 is same with the state(6) to be set 00:29:01.376 [2024-11-17 09:29:05.999156] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000d080 is same with the state(6) to be set 00:29:01.376 [2024-11-17 09:29:05.999176] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000d080 is same with the state(6) to be set 00:29:01.376 [2024-11-17 09:29:05.999195] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000d080 is same with the state(6) to be set 00:29:01.376 [2024-11-17 09:29:05.999214] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000d080 is same with the state(6) to be set 00:29:01.376 [2024-11-17 09:29:05.999233] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000d080 is same with the state(6) to be set 00:29:01.376 [2024-11-17 09:29:05.999251] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000d080 is same with the state(6) to be set 00:29:01.376 [2024-11-17 09:29:05.999268] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000d080 is same with the state(6) to be set 00:29:01.376 [2024-11-17 09:29:05.999287] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000d080 is same with the state(6) to be set 00:29:01.376 Write completed with error (sct=0, sc=8) 00:29:01.376 Write completed with error (sct=0, sc=8) 00:29:01.376 starting I/O failed: -6 00:29:01.376 Write completed with error (sct=0, sc=8) 00:29:01.376 Write completed with error (sct=0, sc=8) 00:29:01.376 Write completed with error (sct=0, sc=8) 00:29:01.376 Write completed with error (sct=0, sc=8) 00:29:01.376 starting I/O failed: -6 00:29:01.376 Write completed with error (sct=0, sc=8) 00:29:01.376 Write completed with error (sct=0, sc=8) 00:29:01.376 Write completed with error (sct=0, sc=8) 00:29:01.376 Write completed with error (sct=0, sc=8) 00:29:01.376 starting I/O failed: -6 00:29:01.376 Write completed with error (sct=0, sc=8) 00:29:01.376 Write completed with error (sct=0, sc=8) 00:29:01.376 Write completed with error (sct=0, sc=8) 00:29:01.376 Write completed with error (sct=0, sc=8) 00:29:01.376 starting I/O failed: -6 00:29:01.376 Write completed with error (sct=0, sc=8) 00:29:01.376 Write completed with error (sct=0, sc=8) 00:29:01.376 Write completed with error (sct=0, sc=8) 00:29:01.376 Write completed with error (sct=0, sc=8) 00:29:01.376 starting I/O failed: -6 00:29:01.376 Write completed with error (sct=0, sc=8) 00:29:01.376 Write completed with error (sct=0, sc=8) 00:29:01.376 Write completed with error (sct=0, sc=8) 00:29:01.376 Write completed with error (sct=0, sc=8) 00:29:01.376 starting I/O failed: -6 00:29:01.376 Write completed with error (sct=0, sc=8) 00:29:01.376 Write completed with error (sct=0, sc=8) 00:29:01.376 Write completed with error (sct=0, sc=8) 00:29:01.376 Write completed with error (sct=0, sc=8) 00:29:01.376 starting I/O failed: -6 00:29:01.376 Write completed with error (sct=0, sc=8) 00:29:01.376 starting I/O failed: -6 00:29:01.376 Write completed with error (sct=0, sc=8) 00:29:01.376 starting I/O failed: -6 00:29:01.376 Write completed with error (sct=0, sc=8) 00:29:01.376 Write completed with error (sct=0, sc=8) 00:29:01.376 Write completed with error (sct=0, sc=8) 00:29:01.376 starting I/O failed: -6 00:29:01.376 Write completed with error (sct=0, sc=8) 00:29:01.376 starting I/O failed: -6 
00:29:01.376 Write completed with error (sct=0, sc=8) 00:29:01.376 Write completed with error (sct=0, sc=8) 00:29:01.376 Write completed with error (sct=0, sc=8) 00:29:01.376 starting I/O failed: -6 00:29:01.377 Write completed with error (sct=0, sc=8) 00:29:01.377 starting I/O failed: -6 00:29:01.377 Write completed with error (sct=0, sc=8) 00:29:01.377 Write completed with error (sct=0, sc=8) 00:29:01.377 Write completed with error (sct=0, sc=8) 00:29:01.377 starting I/O failed: -6 00:29:01.377 Write completed with error (sct=0, sc=8) 00:29:01.377 starting I/O failed: -6 00:29:01.377 Write completed with error (sct=0, sc=8) 00:29:01.377 Write completed with error (sct=0, sc=8) 00:29:01.377 Write completed with error (sct=0, sc=8) 00:29:01.377 starting I/O failed: -6 00:29:01.377 Write completed with error (sct=0, sc=8) 00:29:01.377 starting I/O failed: -6 00:29:01.377 Write completed with error (sct=0, sc=8) 00:29:01.377 Write completed with error (sct=0, sc=8) 00:29:01.377 Write completed with error (sct=0, sc=8) 00:29:01.377 starting I/O failed: -6 00:29:01.377 Write completed with error (sct=0, sc=8) 00:29:01.377 starting I/O failed: -6 00:29:01.377 Write completed with error (sct=0, sc=8) 00:29:01.377 Write completed with error (sct=0, sc=8) 00:29:01.377 Write completed with error (sct=0, sc=8) 00:29:01.377 starting I/O failed: -6 00:29:01.377 Write completed with error (sct=0, sc=8) 00:29:01.377 starting I/O failed: -6 00:29:01.377 Write completed with error (sct=0, sc=8) 00:29:01.377 Write completed with error (sct=0, sc=8) 00:29:01.377 Write completed with error (sct=0, sc=8) 00:29:01.377 starting I/O failed: -6 00:29:01.377 Write completed with error (sct=0, sc=8) 00:29:01.377 starting I/O failed: -6 00:29:01.377 Write completed with error (sct=0, sc=8) 00:29:01.377 Write completed with error (sct=0, sc=8) 00:29:01.377 Write completed with error (sct=0, sc=8) 00:29:01.377 starting I/O failed: -6 00:29:01.377 Write completed with error (sct=0, sc=8) 00:29:01.377 starting I/O failed: -6 00:29:01.377 Write completed with error (sct=0, sc=8) 00:29:01.377 Write completed with error (sct=0, sc=8) 00:29:01.377 Write completed with error (sct=0, sc=8) 00:29:01.377 starting I/O failed: -6 00:29:01.377 Write completed with error (sct=0, sc=8) 00:29:01.377 starting I/O failed: -6 00:29:01.377 Write completed with error (sct=0, sc=8) 00:29:01.377 Write completed with error (sct=0, sc=8) 00:29:01.377 Write completed with error (sct=0, sc=8) 00:29:01.377 starting I/O failed: -6 00:29:01.377 Write completed with error (sct=0, sc=8) 00:29:01.377 starting I/O failed: -6 00:29:01.377 Write completed with error (sct=0, sc=8) 00:29:01.377 Write completed with error (sct=0, sc=8) 00:29:01.377 Write completed with error (sct=0, sc=8) 00:29:01.377 starting I/O failed: -6 00:29:01.377 Write completed with error (sct=0, sc=8) 00:29:01.377 starting I/O failed: -6 00:29:01.377 [2024-11-17 09:29:06.003505] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.377 Write completed with error (sct=0, sc=8) 00:29:01.377 Write completed with error (sct=0, sc=8) 00:29:01.377 starting I/O failed: -6 00:29:01.377 Write completed with error (sct=0, sc=8) 00:29:01.377 starting I/O failed: -6 00:29:01.377 Write completed with error (sct=0, sc=8) 00:29:01.377 starting I/O failed: -6 00:29:01.377 Write completed with error (sct=0, sc=8) 00:29:01.377 Write completed with error (sct=0, sc=8) 00:29:01.377 
starting I/O failed: -6 00:29:01.377 Write completed with error (sct=0, sc=8) 00:29:01.377 starting I/O failed: -6 00:29:01.377 Write completed with error (sct=0, sc=8) 00:29:01.377 starting I/O failed: -6 00:29:01.377 Write completed with error (sct=0, sc=8) 00:29:01.377 Write completed with error (sct=0, sc=8) 00:29:01.377 starting I/O failed: -6 00:29:01.377 Write completed with error (sct=0, sc=8) 00:29:01.377 starting I/O failed: -6 00:29:01.377 Write completed with error (sct=0, sc=8) 00:29:01.377 starting I/O failed: -6 00:29:01.377 Write completed with error (sct=0, sc=8) 00:29:01.377 Write completed with error (sct=0, sc=8) 00:29:01.377 starting I/O failed: -6 00:29:01.377 Write completed with error (sct=0, sc=8) 00:29:01.377 starting I/O failed: -6 00:29:01.377 Write completed with error (sct=0, sc=8) 00:29:01.377 starting I/O failed: -6 00:29:01.377 Write completed with error (sct=0, sc=8) 00:29:01.377 Write completed with error (sct=0, sc=8) 00:29:01.377 starting I/O failed: -6 00:29:01.377 Write completed with error (sct=0, sc=8) 00:29:01.377 starting I/O failed: -6 00:29:01.377 Write completed with error (sct=0, sc=8) 00:29:01.377 starting I/O failed: -6 00:29:01.377 Write completed with error (sct=0, sc=8) 00:29:01.377 Write completed with error (sct=0, sc=8) 00:29:01.377 starting I/O failed: -6 00:29:01.377 Write completed with error (sct=0, sc=8) 00:29:01.377 starting I/O failed: -6 00:29:01.377 Write completed with error (sct=0, sc=8) 00:29:01.377 starting I/O failed: -6 00:29:01.377 Write completed with error (sct=0, sc=8) 00:29:01.377 Write completed with error (sct=0, sc=8) 00:29:01.377 starting I/O failed: -6 00:29:01.377 Write completed with error (sct=0, sc=8) 00:29:01.377 starting I/O failed: -6 00:29:01.377 Write completed with error (sct=0, sc=8) 00:29:01.377 starting I/O failed: -6 00:29:01.377 Write completed with error (sct=0, sc=8) 00:29:01.377 Write completed with error (sct=0, sc=8) 00:29:01.377 starting I/O failed: -6 00:29:01.377 Write completed with error (sct=0, sc=8) 00:29:01.377 starting I/O failed: -6 00:29:01.377 [2024-11-17 09:29:06.005164] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000e480 is same with the state(6) to be set 00:29:01.377 Write completed with error (sct=0, sc=8) 00:29:01.377 starting I/O failed: -6 00:29:01.377 [2024-11-17 09:29:06.005205] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000e480 is same with the state(6) to be set 00:29:01.377 [2024-11-17 09:29:06.005235] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000e480 is same with the state(6) to be set 00:29:01.377 Write completed with error (sct=0, sc=8) 00:29:01.377 [2024-11-17 09:29:06.005254] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000e480 is same with the state(6) to be set 00:29:01.377 [2024-11-17 09:29:06.005273] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000e480 is same with the state(6) to be set 00:29:01.377 Write completed with error (sct=0, sc=8) 00:29:01.377 starting I/O failed: -6 00:29:01.377 [2024-11-17 09:29:06.005291] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000e480 is same with the state(6) to be set 00:29:01.377 [2024-11-17 09:29:06.005309] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000e480 is same with the state(6) to be set 00:29:01.377 [2024-11-17 09:29:06.005326] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000e480 is same with the state(6) to be set 00:29:01.377 Write completed with error (sct=0, sc=8) 00:29:01.377 starting I/O failed: -6 00:29:01.377 Write completed with error (sct=0, sc=8) 00:29:01.377 starting I/O failed: -6 00:29:01.377 Write completed with error (sct=0, sc=8) 00:29:01.377 Write completed with error (sct=0, sc=8) 00:29:01.377 starting I/O failed: -6 00:29:01.377 Write completed with error (sct=0, sc=8) 00:29:01.377 starting I/O failed: -6 00:29:01.377 Write completed with error (sct=0, sc=8) 00:29:01.377 starting I/O failed: -6 00:29:01.377 Write completed with error (sct=0, sc=8) 00:29:01.377 Write completed with error (sct=0, sc=8) 00:29:01.377 starting I/O failed: -6 00:29:01.377 Write completed with error (sct=0, sc=8) 00:29:01.377 starting I/O failed: -6 00:29:01.377 Write completed with error (sct=0, sc=8) 00:29:01.377 starting I/O failed: -6 00:29:01.377 Write completed with error (sct=0, sc=8) 00:29:01.377 Write completed with error (sct=0, sc=8) 00:29:01.377 starting I/O failed: -6 00:29:01.377 Write completed with error (sct=0, sc=8) 00:29:01.377 starting I/O failed: -6 00:29:01.377 Write completed with error (sct=0, sc=8) 00:29:01.377 starting I/O failed: -6 00:29:01.377 Write completed with error (sct=0, sc=8) 00:29:01.377 Write completed with error (sct=0, sc=8) 00:29:01.377 starting I/O failed: -6 00:29:01.377 Write completed with error (sct=0, sc=8) 00:29:01.377 starting I/O failed: -6 00:29:01.377 Write completed with error (sct=0, sc=8) 00:29:01.377 starting I/O failed: -6 00:29:01.377 [2024-11-17 09:29:06.006258] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:01.377 Write completed with error (sct=0, sc=8) 00:29:01.377 starting I/O failed: -6 00:29:01.377 Write completed with error (sct=0, sc=8) 00:29:01.377 starting I/O failed: -6 00:29:01.377 Write completed with error (sct=0, sc=8) 00:29:01.377 starting I/O failed: -6 00:29:01.377 Write completed with error (sct=0, sc=8) 00:29:01.377 starting I/O failed: -6 00:29:01.377 Write completed with error (sct=0, sc=8) 00:29:01.377 starting I/O failed: -6 00:29:01.377 Write completed with error (sct=0, sc=8) 00:29:01.377 starting I/O failed: -6 00:29:01.377 Write completed with error (sct=0, sc=8) 00:29:01.377 starting I/O failed: -6 00:29:01.377 Write completed with error (sct=0, sc=8) 00:29:01.377 starting I/O failed: -6 00:29:01.378 Write completed with error (sct=0, sc=8) 00:29:01.378 starting I/O failed: -6 00:29:01.378 Write completed with error (sct=0, sc=8) 00:29:01.378 starting I/O failed: -6 00:29:01.378 Write completed with error (sct=0, sc=8) 00:29:01.378 starting I/O failed: -6 00:29:01.378 Write completed with error (sct=0, sc=8) 00:29:01.378 starting I/O failed: -6 00:29:01.378 Write completed with error (sct=0, sc=8) 00:29:01.378 starting I/O failed: -6 00:29:01.378 Write completed with error (sct=0, sc=8) 00:29:01.378 [2024-11-17 09:29:06.007133] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ec80 is same with the state(6) to be set 00:29:01.378 starting I/O failed: -6 00:29:01.378 [2024-11-17 09:29:06.007174] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ec80 is same with the state(6) to be set 00:29:01.378 [2024-11-17 09:29:06.007211] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x61800000ec80 is same Write completed with error (sct=0, sc=8) 00:29:01.378 with the state(6) to be set 00:29:01.378 starting I/O failed: -6 00:29:01.378 [2024-11-17 09:29:06.007233] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ec80 is same with the state(6) to be set 00:29:01.378 [2024-11-17 09:29:06.007251] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ec80 is same with the state(6) to be set 00:29:01.378 Write completed with error (sct=0, sc=8) 00:29:01.378 [2024-11-17 09:29:06.007269] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ec80 is same with the state(6) to be set 00:29:01.378 starting I/O failed: -6 00:29:01.378 [2024-11-17 09:29:06.007293] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ec80 is same with the state(6) to be set 00:29:01.378 [2024-11-17 09:29:06.007311] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ec80 is same with the state(6) to be set 00:29:01.378 Write completed with error (sct=0, sc=8) 00:29:01.378 starting I/O failed: -6 00:29:01.378 Write completed with error (sct=0, sc=8) 00:29:01.378 starting I/O failed: -6 00:29:01.378 Write completed with error (sct=0, sc=8) 00:29:01.378 starting I/O failed: -6 00:29:01.378 Write completed with error (sct=0, sc=8) 00:29:01.378 starting I/O failed: -6 00:29:01.378 Write completed with error (sct=0, sc=8) 00:29:01.378 starting I/O failed: -6 00:29:01.378 Write completed with error (sct=0, sc=8) 00:29:01.378 starting I/O failed: -6 00:29:01.378 Write completed with error (sct=0, sc=8) 00:29:01.378 starting I/O failed: -6 00:29:01.378 Write completed with error (sct=0, sc=8) 00:29:01.378 starting I/O failed: -6 00:29:01.378 Write completed with error (sct=0, sc=8) 00:29:01.378 starting I/O failed: -6 00:29:01.378 Write completed with error (sct=0, sc=8) 00:29:01.378 starting I/O failed: -6 00:29:01.378 Write completed with error (sct=0, sc=8) 00:29:01.378 starting I/O failed: -6 00:29:01.378 Write completed with error (sct=0, sc=8) 00:29:01.378 starting I/O failed: -6 00:29:01.378 Write completed with error (sct=0, sc=8) 00:29:01.378 starting I/O failed: -6 00:29:01.378 Write completed with error (sct=0, sc=8) 00:29:01.378 starting I/O failed: -6 00:29:01.378 Write completed with error (sct=0, sc=8) 00:29:01.378 starting I/O failed: -6 00:29:01.378 Write completed with error (sct=0, sc=8) 00:29:01.378 starting I/O failed: -6 00:29:01.378 Write completed with error (sct=0, sc=8) 00:29:01.378 starting I/O failed: -6 00:29:01.378 Write completed with error (sct=0, sc=8) 00:29:01.378 starting I/O failed: -6 00:29:01.378 Write completed with error (sct=0, sc=8) 00:29:01.378 starting I/O failed: -6 00:29:01.378 Write completed with error (sct=0, sc=8) 00:29:01.378 starting I/O failed: -6 00:29:01.378 Write completed with error (sct=0, sc=8) 00:29:01.378 starting I/O failed: -6 00:29:01.378 Write completed with error (sct=0, sc=8) 00:29:01.378 starting I/O failed: -6 00:29:01.378 Write completed with error (sct=0, sc=8) 00:29:01.378 starting I/O failed: -6 00:29:01.378 Write completed with error (sct=0, sc=8) 00:29:01.378 starting I/O failed: -6 00:29:01.378 Write completed with error (sct=0, sc=8) 00:29:01.378 starting I/O failed: -6 00:29:01.378 Write completed with error (sct=0, sc=8) 00:29:01.378 starting I/O failed: -6 00:29:01.378 Write completed with error (sct=0, sc=8) 00:29:01.378 starting I/O failed: -6 00:29:01.378 
Write completed with error (sct=0, sc=8) 00:29:01.378 starting I/O failed: -6 00:29:01.378 Write completed with error (sct=0, sc=8) 00:29:01.378 starting I/O failed: -6 00:29:01.378 Write completed with error (sct=0, sc=8) 00:29:01.378 starting I/O failed: -6 00:29:01.378 Write completed with error (sct=0, sc=8) 00:29:01.378 starting I/O failed: -6 00:29:01.378 Write completed with error (sct=0, sc=8) 00:29:01.378 starting I/O failed: -6 00:29:01.378 Write completed with error (sct=0, sc=8) 00:29:01.378 starting I/O failed: -6 00:29:01.378 Write completed with error (sct=0, sc=8) 00:29:01.378 starting I/O failed: -6 00:29:01.378 Write completed with error (sct=0, sc=8) 00:29:01.378 starting I/O failed: -6 00:29:01.378 Write completed with error (sct=0, sc=8) 00:29:01.378 starting I/O failed: -6 00:29:01.378 Write completed with error (sct=0, sc=8) 00:29:01.378 starting I/O failed: -6 00:29:01.378 Write completed with error (sct=0, sc=8) 00:29:01.378 starting I/O failed: -6 00:29:01.378 Write completed with error (sct=0, sc=8) 00:29:01.378 starting I/O failed: -6 00:29:01.378 Write completed with error (sct=0, sc=8) 00:29:01.378 starting I/O failed: -6 00:29:01.378 Write completed with error (sct=0, sc=8) 00:29:01.378 starting I/O failed: -6 00:29:01.378 Write completed with error (sct=0, sc=8) 00:29:01.378 starting I/O failed: -6 00:29:01.378 [2024-11-17 09:29:06.015989] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.378 NVMe io qpair process completion error 00:29:01.378 Write completed with error (sct=0, sc=8) 00:29:01.378 starting I/O failed: -6 00:29:01.378 Write completed with error (sct=0, sc=8) 00:29:01.378 Write completed with error (sct=0, sc=8) 00:29:01.378 Write completed with error (sct=0, sc=8) 00:29:01.378 Write completed with error (sct=0, sc=8) 00:29:01.378 starting I/O failed: -6 00:29:01.378 Write completed with error (sct=0, sc=8) 00:29:01.378 Write completed with error (sct=0, sc=8) 00:29:01.378 Write completed with error (sct=0, sc=8) 00:29:01.378 Write completed with error (sct=0, sc=8) 00:29:01.378 starting I/O failed: -6 00:29:01.378 Write completed with error (sct=0, sc=8) 00:29:01.378 Write completed with error (sct=0, sc=8) 00:29:01.378 Write completed with error (sct=0, sc=8) 00:29:01.378 Write completed with error (sct=0, sc=8) 00:29:01.378 starting I/O failed: -6 00:29:01.378 Write completed with error (sct=0, sc=8) 00:29:01.378 Write completed with error (sct=0, sc=8) 00:29:01.378 Write completed with error (sct=0, sc=8) 00:29:01.378 Write completed with error (sct=0, sc=8) 00:29:01.378 starting I/O failed: -6 00:29:01.378 Write completed with error (sct=0, sc=8) 00:29:01.378 Write completed with error (sct=0, sc=8) 00:29:01.378 Write completed with error (sct=0, sc=8) 00:29:01.378 Write completed with error (sct=0, sc=8) 00:29:01.378 starting I/O failed: -6 00:29:01.378 Write completed with error (sct=0, sc=8) 00:29:01.378 Write completed with error (sct=0, sc=8) 00:29:01.378 Write completed with error (sct=0, sc=8) 00:29:01.378 Write completed with error (sct=0, sc=8) 00:29:01.378 starting I/O failed: -6 00:29:01.378 Write completed with error (sct=0, sc=8) 00:29:01.378 Write completed with error (sct=0, sc=8) 00:29:01.378 Write completed with error (sct=0, sc=8) 00:29:01.378 Write completed with error (sct=0, sc=8) 00:29:01.378 starting I/O failed: -6 00:29:01.378 Write completed with error (sct=0, sc=8) 00:29:01.378 starting I/O 
failed: -6 00:29:01.378 Write completed with error (sct=0, sc=8) 00:29:01.378 Write completed with error (sct=0, sc=8) 00:29:01.378 Write completed with error (sct=0, sc=8) 00:29:01.378 starting I/O failed: -6 00:29:01.378 Write completed with error (sct=0, sc=8) 00:29:01.378 starting I/O failed: -6 00:29:01.378 Write completed with error (sct=0, sc=8) 00:29:01.378 Write completed with error (sct=0, sc=8) 00:29:01.378 Write completed with error (sct=0, sc=8) 00:29:01.378 starting I/O failed: -6 00:29:01.378 Write completed with error (sct=0, sc=8) 00:29:01.378 starting I/O failed: -6 00:29:01.378 Write completed with error (sct=0, sc=8) 00:29:01.378 Write completed with error (sct=0, sc=8) 00:29:01.378 Write completed with error (sct=0, sc=8) 00:29:01.378 starting I/O failed: -6 00:29:01.379 Write completed with error (sct=0, sc=8) 00:29:01.379 starting I/O failed: -6 00:29:01.379 Write completed with error (sct=0, sc=8) 00:29:01.379 Write completed with error (sct=0, sc=8) 00:29:01.379 Write completed with error (sct=0, sc=8) 00:29:01.379 starting I/O failed: -6 00:29:01.379 Write completed with error (sct=0, sc=8) 00:29:01.379 starting I/O failed: -6 00:29:01.379 Write completed with error (sct=0, sc=8) 00:29:01.379 Write completed with error (sct=0, sc=8) 00:29:01.379 Write completed with error (sct=0, sc=8) 00:29:01.379 starting I/O failed: -6 00:29:01.379 Write completed with error (sct=0, sc=8) 00:29:01.379 starting I/O failed: -6 00:29:01.379 Write completed with error (sct=0, sc=8) 00:29:01.379 Write completed with error (sct=0, sc=8) 00:29:01.379 Write completed with error (sct=0, sc=8) 00:29:01.379 starting I/O failed: -6 00:29:01.379 Write completed with error (sct=0, sc=8) 00:29:01.379 starting I/O failed: -6 00:29:01.379 Write completed with error (sct=0, sc=8) 00:29:01.379 Write completed with error (sct=0, sc=8) 00:29:01.379 Write completed with error (sct=0, sc=8) 00:29:01.379 starting I/O failed: -6 00:29:01.379 Write completed with error (sct=0, sc=8) 00:29:01.379 starting I/O failed: -6 00:29:01.379 Write completed with error (sct=0, sc=8) 00:29:01.379 Write completed with error (sct=0, sc=8) 00:29:01.379 Write completed with error (sct=0, sc=8) 00:29:01.379 starting I/O failed: -6 00:29:01.379 Write completed with error (sct=0, sc=8) 00:29:01.379 starting I/O failed: -6 00:29:01.379 Write completed with error (sct=0, sc=8) 00:29:01.379 Write completed with error (sct=0, sc=8) 00:29:01.379 Write completed with error (sct=0, sc=8) 00:29:01.379 starting I/O failed: -6 00:29:01.379 Write completed with error (sct=0, sc=8) 00:29:01.379 starting I/O failed: -6 00:29:01.379 Write completed with error (sct=0, sc=8) 00:29:01.379 Write completed with error (sct=0, sc=8) 00:29:01.379 Write completed with error (sct=0, sc=8) 00:29:01.379 starting I/O failed: -6 00:29:01.379 Write completed with error (sct=0, sc=8) 00:29:01.379 starting I/O failed: -6 00:29:01.379 Write completed with error (sct=0, sc=8) 00:29:01.379 Write completed with error (sct=0, sc=8) 00:29:01.379 Write completed with error (sct=0, sc=8) 00:29:01.379 starting I/O failed: -6 00:29:01.379 [2024-11-17 09:29:06.019622] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.379 starting I/O failed: -6 00:29:01.379 Write completed with error (sct=0, sc=8) 00:29:01.379 Write completed with error (sct=0, sc=8) 00:29:01.379 starting I/O failed: -6 00:29:01.379 Write completed with error (sct=0, sc=8) 
00:29:01.379 starting I/O failed: -6 00:29:01.379 Write completed with error (sct=0, sc=8) 00:29:01.379 starting I/O failed: -6 00:29:01.379 Write completed with error (sct=0, sc=8) 00:29:01.379 Write completed with error (sct=0, sc=8) 00:29:01.379 starting I/O failed: -6 00:29:01.379 Write completed with error (sct=0, sc=8) 00:29:01.379 starting I/O failed: -6 00:29:01.379 Write completed with error (sct=0, sc=8) 00:29:01.379 starting I/O failed: -6 00:29:01.379 Write completed with error (sct=0, sc=8) 00:29:01.379 Write completed with error (sct=0, sc=8) 00:29:01.379 starting I/O failed: -6 00:29:01.379 Write completed with error (sct=0, sc=8) 00:29:01.379 starting I/O failed: -6 00:29:01.379 Write completed with error (sct=0, sc=8) 00:29:01.379 starting I/O failed: -6 00:29:01.379 Write completed with error (sct=0, sc=8) 00:29:01.379 Write completed with error (sct=0, sc=8) 00:29:01.379 starting I/O failed: -6 00:29:01.379 Write completed with error (sct=0, sc=8) 00:29:01.379 starting I/O failed: -6 00:29:01.379 Write completed with error (sct=0, sc=8) 00:29:01.379 starting I/O failed: -6 00:29:01.379 Write completed with error (sct=0, sc=8) 00:29:01.379 Write completed with error (sct=0, sc=8) 00:29:01.379 starting I/O failed: -6 00:29:01.379 Write completed with error (sct=0, sc=8) 00:29:01.379 starting I/O failed: -6 00:29:01.379 Write completed with error (sct=0, sc=8) 00:29:01.379 starting I/O failed: -6 00:29:01.379 Write completed with error (sct=0, sc=8) 00:29:01.379 Write completed with error (sct=0, sc=8) 00:29:01.379 starting I/O failed: -6 00:29:01.379 Write completed with error (sct=0, sc=8) 00:29:01.379 starting I/O failed: -6 00:29:01.379 Write completed with error (sct=0, sc=8) 00:29:01.379 starting I/O failed: -6 00:29:01.379 Write completed with error (sct=0, sc=8) 00:29:01.379 Write completed with error (sct=0, sc=8) 00:29:01.379 starting I/O failed: -6 00:29:01.379 Write completed with error (sct=0, sc=8) 00:29:01.379 starting I/O failed: -6 00:29:01.379 Write completed with error (sct=0, sc=8) 00:29:01.379 starting I/O failed: -6 00:29:01.379 Write completed with error (sct=0, sc=8) 00:29:01.379 Write completed with error (sct=0, sc=8) 00:29:01.379 starting I/O failed: -6 00:29:01.379 Write completed with error (sct=0, sc=8) 00:29:01.379 starting I/O failed: -6 00:29:01.379 Write completed with error (sct=0, sc=8) 00:29:01.379 starting I/O failed: -6 00:29:01.379 Write completed with error (sct=0, sc=8) 00:29:01.379 Write completed with error (sct=0, sc=8) 00:29:01.379 starting I/O failed: -6 00:29:01.379 Write completed with error (sct=0, sc=8) 00:29:01.379 starting I/O failed: -6 00:29:01.379 Write completed with error (sct=0, sc=8) 00:29:01.379 starting I/O failed: -6 00:29:01.379 Write completed with error (sct=0, sc=8) 00:29:01.379 Write completed with error (sct=0, sc=8) 00:29:01.379 starting I/O failed: -6 00:29:01.379 Write completed with error (sct=0, sc=8) 00:29:01.379 starting I/O failed: -6 00:29:01.379 Write completed with error (sct=0, sc=8) 00:29:01.379 starting I/O failed: -6 00:29:01.379 Write completed with error (sct=0, sc=8) 00:29:01.379 Write completed with error (sct=0, sc=8) 00:29:01.379 starting I/O failed: -6 00:29:01.379 Write completed with error (sct=0, sc=8) 00:29:01.379 starting I/O failed: -6 00:29:01.379 Write completed with error (sct=0, sc=8) 00:29:01.379 starting I/O failed: -6 00:29:01.379 Write completed with error (sct=0, sc=8) 00:29:01.379 Write completed with error (sct=0, sc=8) 00:29:01.379 starting I/O failed: -6 
00:29:01.379 Write completed with error (sct=0, sc=8) 00:29:01.379 starting I/O failed: -6 00:29:01.379 Write completed with error (sct=0, sc=8) 00:29:01.379 starting I/O failed: -6 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 starting I/O failed: -6 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 starting I/O failed: -6 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 starting I/O failed: -6 00:29:01.380 [2024-11-17 09:29:06.022572] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 starting I/O failed: -6 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 starting I/O failed: -6 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 starting I/O failed: -6 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 starting I/O failed: -6 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 starting I/O failed: -6 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 starting I/O failed: -6 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 starting I/O failed: -6 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 starting I/O failed: -6 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 starting I/O failed: -6 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 starting I/O failed: -6 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 starting I/O failed: -6 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 starting I/O failed: -6 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 starting I/O failed: -6 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 starting I/O failed: -6 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 starting I/O failed: -6 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 starting I/O failed: -6 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 starting I/O failed: -6 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 starting I/O failed: -6 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 starting I/O failed: -6 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 starting I/O failed: -6 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 starting I/O failed: -6 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 starting I/O failed: -6 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 starting I/O failed: -6 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 starting I/O failed: -6 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 starting I/O failed: -6 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 starting I/O failed: -6 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 starting I/O failed: -6 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 starting I/O failed: -6 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 starting I/O failed: -6 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 starting I/O failed: -6 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 starting I/O failed: -6 00:29:01.380 Write 
completed with error (sct=0, sc=8) 00:29:01.380 starting I/O failed: -6 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 starting I/O failed: -6 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 starting I/O failed: -6 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 starting I/O failed: -6 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 starting I/O failed: -6 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 starting I/O failed: -6 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 starting I/O failed: -6 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 starting I/O failed: -6 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 starting I/O failed: -6 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 starting I/O failed: -6 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 starting I/O failed: -6 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 starting I/O failed: -6 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 starting I/O failed: -6 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 starting I/O failed: -6 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 starting I/O failed: -6 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 starting I/O failed: -6 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 starting I/O failed: -6 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 starting I/O failed: -6 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 starting I/O failed: -6 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 starting I/O failed: -6 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 starting I/O failed: -6 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 starting I/O failed: -6 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 starting I/O failed: -6 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 starting I/O failed: -6 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 starting I/O failed: -6 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 starting I/O failed: -6 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 starting I/O failed: -6 00:29:01.380 [2024-11-17 09:29:06.032243] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.380 NVMe io qpair process completion error 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 starting I/O failed: -6 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 starting I/O failed: -6 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 starting I/O failed: -6 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 Write completed with error (sct=0, 
sc=8) 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 starting I/O failed: -6 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 starting I/O failed: -6 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 starting I/O failed: -6 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 starting I/O failed: -6 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 starting I/O failed: -6 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 starting I/O failed: -6 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 [2024-11-17 09:29:06.034309] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 starting I/O failed: -6 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 starting I/O failed: -6 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 starting I/O failed: -6 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 Write completed with error (sct=0, sc=8) 00:29:01.380 starting I/O failed: -6 00:29:01.381 Write completed with error (sct=0, sc=8) 00:29:01.381 Write completed with error (sct=0, sc=8) 00:29:01.381 starting I/O failed: -6 00:29:01.381 Write completed with error (sct=0, sc=8) 00:29:01.381 Write completed with error (sct=0, sc=8) 00:29:01.381 starting I/O failed: -6 00:29:01.381 Write completed with error (sct=0, sc=8) 00:29:01.381 Write completed with error (sct=0, sc=8) 00:29:01.381 starting I/O failed: -6 00:29:01.381 Write completed with error (sct=0, sc=8) 00:29:01.381 Write completed with error (sct=0, sc=8) 00:29:01.381 starting I/O failed: -6 00:29:01.381 Write completed with error (sct=0, sc=8) 00:29:01.381 Write completed with error (sct=0, sc=8) 00:29:01.381 starting I/O failed: -6 00:29:01.381 Write completed with error (sct=0, sc=8) 00:29:01.381 Write completed with error (sct=0, sc=8) 00:29:01.381 starting I/O failed: -6 00:29:01.381 Write completed with error (sct=0, sc=8) 00:29:01.381 Write completed with error (sct=0, sc=8) 00:29:01.381 starting I/O failed: -6 00:29:01.381 Write completed with error (sct=0, sc=8) 00:29:01.381 Write completed with error (sct=0, sc=8) 00:29:01.381 starting I/O failed: -6 00:29:01.381 Write completed with error (sct=0, sc=8) 00:29:01.381 Write completed with error (sct=0, sc=8) 00:29:01.381 starting I/O failed: 
-6 00:29:01.381 Write completed with error (sct=0, sc=8) 00:29:01.381 Write completed with error (sct=0, sc=8) 00:29:01.381 starting I/O failed: -6 00:29:01.381 Write completed with error (sct=0, sc=8) 00:29:01.381 Write completed with error (sct=0, sc=8) 00:29:01.381 starting I/O failed: -6 00:29:01.381 Write completed with error (sct=0, sc=8) 00:29:01.381 Write completed with error (sct=0, sc=8) 00:29:01.381 starting I/O failed: -6 00:29:01.381 Write completed with error (sct=0, sc=8) 00:29:01.381 Write completed with error (sct=0, sc=8) 00:29:01.381 starting I/O failed: -6 00:29:01.381 Write completed with error (sct=0, sc=8) 00:29:01.381 Write completed with error (sct=0, sc=8) 00:29:01.381 starting I/O failed: -6 00:29:01.381 Write completed with error (sct=0, sc=8) 00:29:01.381 Write completed with error (sct=0, sc=8) 00:29:01.381 starting I/O failed: -6 00:29:01.381 Write completed with error (sct=0, sc=8) 00:29:01.381 Write completed with error (sct=0, sc=8) 00:29:01.381 starting I/O failed: -6 00:29:01.381 Write completed with error (sct=0, sc=8) 00:29:01.381 Write completed with error (sct=0, sc=8) 00:29:01.381 starting I/O failed: -6 00:29:01.381 Write completed with error (sct=0, sc=8) 00:29:01.381 Write completed with error (sct=0, sc=8) 00:29:01.381 starting I/O failed: -6 00:29:01.381 Write completed with error (sct=0, sc=8) 00:29:01.381 [2024-11-17 09:29:06.036478] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:01.381 starting I/O failed: -6 00:29:01.381 Write completed with error (sct=0, sc=8) 00:29:01.381 Write completed with error (sct=0, sc=8) 00:29:01.381 starting I/O failed: -6 00:29:01.381 Write completed with error (sct=0, sc=8) 00:29:01.381 starting I/O failed: -6 00:29:01.381 Write completed with error (sct=0, sc=8) 00:29:01.381 starting I/O failed: -6 00:29:01.381 Write completed with error (sct=0, sc=8) 00:29:01.381 Write completed with error (sct=0, sc=8) 00:29:01.381 starting I/O failed: -6 00:29:01.381 Write completed with error (sct=0, sc=8) 00:29:01.381 starting I/O failed: -6 00:29:01.381 Write completed with error (sct=0, sc=8) 00:29:01.381 starting I/O failed: -6 00:29:01.381 Write completed with error (sct=0, sc=8) 00:29:01.381 Write completed with error (sct=0, sc=8) 00:29:01.381 starting I/O failed: -6 00:29:01.381 Write completed with error (sct=0, sc=8) 00:29:01.381 starting I/O failed: -6 00:29:01.381 Write completed with error (sct=0, sc=8) 00:29:01.381 starting I/O failed: -6 00:29:01.381 Write completed with error (sct=0, sc=8) 00:29:01.381 Write completed with error (sct=0, sc=8) 00:29:01.381 starting I/O failed: -6 00:29:01.381 Write completed with error (sct=0, sc=8) 00:29:01.381 starting I/O failed: -6 00:29:01.381 Write completed with error (sct=0, sc=8) 00:29:01.381 starting I/O failed: -6 00:29:01.381 Write completed with error (sct=0, sc=8) 00:29:01.381 Write completed with error (sct=0, sc=8) 00:29:01.381 starting I/O failed: -6 00:29:01.381 Write completed with error (sct=0, sc=8) 00:29:01.381 starting I/O failed: -6 00:29:01.381 Write completed with error (sct=0, sc=8) 00:29:01.381 starting I/O failed: -6 00:29:01.381 Write completed with error (sct=0, sc=8) 00:29:01.381 Write completed with error (sct=0, sc=8) 00:29:01.381 starting I/O failed: -6 00:29:01.381 Write completed with error (sct=0, sc=8) 00:29:01.381 starting I/O failed: -6 00:29:01.381 Write completed with error (sct=0, sc=8) 00:29:01.381 starting I/O 
failed: -6 00:29:01.381 Write completed with error (sct=0, sc=8) 00:29:01.381 Write completed with error (sct=0, sc=8) 00:29:01.381 starting I/O failed: -6 00:29:01.381 Write completed with error (sct=0, sc=8) 00:29:01.381 starting I/O failed: -6 00:29:01.381 Write completed with error (sct=0, sc=8) 00:29:01.381 starting I/O failed: -6 00:29:01.381 Write completed with error (sct=0, sc=8) 00:29:01.381 Write completed with error (sct=0, sc=8) 00:29:01.381 starting I/O failed: -6 00:29:01.381 Write completed with error (sct=0, sc=8) 00:29:01.381 starting I/O failed: -6 00:29:01.381 Write completed with error (sct=0, sc=8) 00:29:01.381 starting I/O failed: -6 00:29:01.381 Write completed with error (sct=0, sc=8) 00:29:01.381 Write completed with error (sct=0, sc=8) 00:29:01.381 starting I/O failed: -6 00:29:01.381 Write completed with error (sct=0, sc=8) 00:29:01.381 starting I/O failed: -6 00:29:01.381 Write completed with error (sct=0, sc=8) 00:29:01.381 starting I/O failed: -6 00:29:01.381 Write completed with error (sct=0, sc=8) 00:29:01.381 Write completed with error (sct=0, sc=8) 00:29:01.381 starting I/O failed: -6 00:29:01.381 Write completed with error (sct=0, sc=8) 00:29:01.381 starting I/O failed: -6 00:29:01.381 Write completed with error (sct=0, sc=8) 00:29:01.381 starting I/O failed: -6 00:29:01.381 Write completed with error (sct=0, sc=8) 00:29:01.381 Write completed with error (sct=0, sc=8) 00:29:01.381 starting I/O failed: -6 00:29:01.381 Write completed with error (sct=0, sc=8) 00:29:01.381 starting I/O failed: -6 00:29:01.381 Write completed with error (sct=0, sc=8) 00:29:01.381 starting I/O failed: -6 00:29:01.381 Write completed with error (sct=0, sc=8) 00:29:01.381 Write completed with error (sct=0, sc=8) 00:29:01.381 starting I/O failed: -6 00:29:01.381 Write completed with error (sct=0, sc=8) 00:29:01.381 starting I/O failed: -6 00:29:01.381 Write completed with error (sct=0, sc=8) 00:29:01.381 starting I/O failed: -6 00:29:01.381 [2024-11-17 09:29:06.039122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:01.381 Write completed with error (sct=0, sc=8) 00:29:01.381 starting I/O failed: -6 00:29:01.381 Write completed with error (sct=0, sc=8) 00:29:01.381 starting I/O failed: -6 00:29:01.381 Write completed with error (sct=0, sc=8) 00:29:01.381 starting I/O failed: -6 00:29:01.381 Write completed with error (sct=0, sc=8) 00:29:01.381 starting I/O failed: -6 00:29:01.381 Write completed with error (sct=0, sc=8) 00:29:01.381 starting I/O failed: -6 00:29:01.381 Write completed with error (sct=0, sc=8) 00:29:01.381 starting I/O failed: -6 00:29:01.381 Write completed with error (sct=0, sc=8) 00:29:01.382 starting I/O failed: -6 00:29:01.382 Write completed with error (sct=0, sc=8) 00:29:01.382 starting I/O failed: -6 00:29:01.382 Write completed with error (sct=0, sc=8) 00:29:01.382 starting I/O failed: -6 00:29:01.382 Write completed with error (sct=0, sc=8) 00:29:01.382 starting I/O failed: -6 00:29:01.382 Write completed with error (sct=0, sc=8) 00:29:01.382 starting I/O failed: -6 00:29:01.382 Write completed with error (sct=0, sc=8) 00:29:01.382 starting I/O failed: -6 00:29:01.382 Write completed with error (sct=0, sc=8) 00:29:01.382 starting I/O failed: -6 00:29:01.382 Write completed with error (sct=0, sc=8) 00:29:01.382 starting I/O failed: -6 00:29:01.382 Write completed with error (sct=0, sc=8) 00:29:01.382 starting I/O failed: -6 00:29:01.382 
Write completed with error (sct=0, sc=8) 00:29:01.382 starting I/O failed: -6 00:29:01.382 Write completed with error (sct=0, sc=8) 00:29:01.382 starting I/O failed: -6 00:29:01.382 Write completed with error (sct=0, sc=8) 00:29:01.382 starting I/O failed: -6 00:29:01.382 Write completed with error (sct=0, sc=8) 00:29:01.382 starting I/O failed: -6 00:29:01.382 Write completed with error (sct=0, sc=8) 00:29:01.382 starting I/O failed: -6 00:29:01.382 Write completed with error (sct=0, sc=8) 00:29:01.382 starting I/O failed: -6 00:29:01.382 Write completed with error (sct=0, sc=8) 00:29:01.382 starting I/O failed: -6 00:29:01.382 Write completed with error (sct=0, sc=8) 00:29:01.382 starting I/O failed: -6 00:29:01.382 Write completed with error (sct=0, sc=8) 00:29:01.382 starting I/O failed: -6 00:29:01.382 Write completed with error (sct=0, sc=8) 00:29:01.382 starting I/O failed: -6 00:29:01.382 Write completed with error (sct=0, sc=8) 00:29:01.382 starting I/O failed: -6 00:29:01.382 Write completed with error (sct=0, sc=8) 00:29:01.382 starting I/O failed: -6 00:29:01.382 Write completed with error (sct=0, sc=8) 00:29:01.382 starting I/O failed: -6 00:29:01.382 Write completed with error (sct=0, sc=8) 00:29:01.382 starting I/O failed: -6 00:29:01.382 Write completed with error (sct=0, sc=8) 00:29:01.382 starting I/O failed: -6 00:29:01.382 Write completed with error (sct=0, sc=8) 00:29:01.382 starting I/O failed: -6 00:29:01.382 Write completed with error (sct=0, sc=8) 00:29:01.382 starting I/O failed: -6 00:29:01.382 Write completed with error (sct=0, sc=8) 00:29:01.382 starting I/O failed: -6 00:29:01.382 Write completed with error (sct=0, sc=8) 00:29:01.382 starting I/O failed: -6 00:29:01.382 Write completed with error (sct=0, sc=8) 00:29:01.382 starting I/O failed: -6 00:29:01.382 Write completed with error (sct=0, sc=8) 00:29:01.382 starting I/O failed: -6 00:29:01.382 Write completed with error (sct=0, sc=8) 00:29:01.382 starting I/O failed: -6 00:29:01.382 Write completed with error (sct=0, sc=8) 00:29:01.382 starting I/O failed: -6 00:29:01.382 Write completed with error (sct=0, sc=8) 00:29:01.382 starting I/O failed: -6 00:29:01.382 Write completed with error (sct=0, sc=8) 00:29:01.382 starting I/O failed: -6 00:29:01.382 Write completed with error (sct=0, sc=8) 00:29:01.382 starting I/O failed: -6 00:29:01.382 Write completed with error (sct=0, sc=8) 00:29:01.382 starting I/O failed: -6 00:29:01.382 Write completed with error (sct=0, sc=8) 00:29:01.382 starting I/O failed: -6 00:29:01.382 Write completed with error (sct=0, sc=8) 00:29:01.382 starting I/O failed: -6 00:29:01.382 Write completed with error (sct=0, sc=8) 00:29:01.382 starting I/O failed: -6 00:29:01.382 Write completed with error (sct=0, sc=8) 00:29:01.382 starting I/O failed: -6 00:29:01.382 Write completed with error (sct=0, sc=8) 00:29:01.382 starting I/O failed: -6 00:29:01.382 Write completed with error (sct=0, sc=8) 00:29:01.382 starting I/O failed: -6 00:29:01.382 Write completed with error (sct=0, sc=8) 00:29:01.382 starting I/O failed: -6 00:29:01.382 Write completed with error (sct=0, sc=8) 00:29:01.382 starting I/O failed: -6 00:29:01.382 Write completed with error (sct=0, sc=8) 00:29:01.382 starting I/O failed: -6 00:29:01.382 Write completed with error (sct=0, sc=8) 00:29:01.382 starting I/O failed: -6 00:29:01.382 Write completed with error (sct=0, sc=8) 00:29:01.382 starting I/O failed: -6 00:29:01.382 Write completed with error (sct=0, sc=8) 00:29:01.382 starting I/O failed: -6 00:29:01.382 Write 
completed with error (sct=0, sc=8) 00:29:01.382 starting I/O failed: -6 00:29:01.382 Write completed with error (sct=0, sc=8) 00:29:01.382 starting I/O failed: -6 00:29:01.382 Write completed with error (sct=0, sc=8) 00:29:01.382 starting I/O failed: -6 00:29:01.382 Write completed with error (sct=0, sc=8) 00:29:01.382 starting I/O failed: -6 00:29:01.382 Write completed with error (sct=0, sc=8) 00:29:01.382 starting I/O failed: -6 00:29:01.382 Write completed with error (sct=0, sc=8) 00:29:01.382 starting I/O failed: -6 00:29:01.382 [2024-11-17 09:29:06.051597] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.382 NVMe io qpair process completion error 00:29:01.382 Write completed with error (sct=0, sc=8) 00:29:01.382 Write completed with error (sct=0, sc=8) 00:29:01.382 Write completed with error (sct=0, sc=8) 00:29:01.382 Write completed with error (sct=0, sc=8) 00:29:01.382 starting I/O failed: -6 00:29:01.382 Write completed with error (sct=0, sc=8) 00:29:01.382 Write completed with error (sct=0, sc=8) 00:29:01.382 Write completed with error (sct=0, sc=8) 00:29:01.382 Write completed with error (sct=0, sc=8) 00:29:01.382 starting I/O failed: -6 00:29:01.382 Write completed with error (sct=0, sc=8) 00:29:01.382 Write completed with error (sct=0, sc=8) 00:29:01.382 Write completed with error (sct=0, sc=8) 00:29:01.382 Write completed with error (sct=0, sc=8) 00:29:01.382 starting I/O failed: -6 00:29:01.382 Write completed with error (sct=0, sc=8) 00:29:01.382 Write completed with error (sct=0, sc=8) 00:29:01.382 Write completed with error (sct=0, sc=8) 00:29:01.382 Write completed with error (sct=0, sc=8) 00:29:01.382 starting I/O failed: -6 00:29:01.382 Write completed with error (sct=0, sc=8) 00:29:01.382 Write completed with error (sct=0, sc=8) 00:29:01.382 Write completed with error (sct=0, sc=8) 00:29:01.382 Write completed with error (sct=0, sc=8) 00:29:01.382 starting I/O failed: -6 00:29:01.382 Write completed with error (sct=0, sc=8) 00:29:01.382 Write completed with error (sct=0, sc=8) 00:29:01.382 Write completed with error (sct=0, sc=8) 00:29:01.382 Write completed with error (sct=0, sc=8) 00:29:01.382 starting I/O failed: -6 00:29:01.382 Write completed with error (sct=0, sc=8) 00:29:01.382 Write completed with error (sct=0, sc=8) 00:29:01.382 Write completed with error (sct=0, sc=8) 00:29:01.382 Write completed with error (sct=0, sc=8) 00:29:01.382 starting I/O failed: -6 00:29:01.382 Write completed with error (sct=0, sc=8) 00:29:01.382 Write completed with error (sct=0, sc=8) 00:29:01.382 Write completed with error (sct=0, sc=8) 00:29:01.382 Write completed with error (sct=0, sc=8) 00:29:01.382 starting I/O failed: -6 00:29:01.382 Write completed with error (sct=0, sc=8) 00:29:01.382 Write completed with error (sct=0, sc=8) 00:29:01.382 Write completed with error (sct=0, sc=8) 00:29:01.382 Write completed with error (sct=0, sc=8) 00:29:01.382 starting I/O failed: -6 00:29:01.382 Write completed with error (sct=0, sc=8) 00:29:01.383 Write completed with error (sct=0, sc=8) 00:29:01.383 Write completed with error (sct=0, sc=8) 00:29:01.383 Write completed with error (sct=0, sc=8) 00:29:01.383 starting I/O failed: -6 00:29:01.383 Write completed with error (sct=0, sc=8) 00:29:01.383 Write completed with error (sct=0, sc=8) 00:29:01.383 Write completed with error (sct=0, sc=8) 00:29:01.383 Write completed with error (sct=0, sc=8) 00:29:01.383 starting I/O 
failed: -6
00:29:01.383 [2024-11-17 09:29:06.053724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
[repeated: "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6"]
00:29:01.383 [2024-11-17 09:29:06.055599] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
[repeated: "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6"]
00:29:01.383 [2024-11-17 09:29:06.058273] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
[repeated: "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6"]
00:29:01.384 [2024-11-17 09:29:06.070571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:01.384 NVMe io qpair process completion error
[repeated: "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6"]
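The records above are the host side of this sequence: the target appears to have removed or disconnected the controller for nqn.2016-06.io.spdk:cnode8 while writes were still queued, so polling its I/O qpairs returns -6 (-ENXIO, "No such device or address") and every outstanding write completes with an abort status. As a point of reference only, a minimal sketch of how a caller would detect that condition through the public SPDK API; the function below is illustrative and is not the code this test runs:

/* Illustrative sketch: poll one I/O qpair and treat a negative return from
 * spdk_nvme_qpair_process_completions() as a transport failure like the
 * "CQ transport error -6" records above. */
#include <stdbool.h>
#include <stdio.h>
#include <string.h>
#include "spdk/nvme.h"

static bool
poll_qpair_once(struct spdk_nvme_qpair *qpair)
{
	/* Returns the number of completions reaped, or a negative errno once the
	 * qpair is unusable (e.g. the target dropped the connection). */
	int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0 /* no limit */);

	if (rc < 0) {
		fprintf(stderr, "qpair poll failed: %d (%s)\n", rc, strerror(-rc));
		return false;   /* caller should reconnect or free the qpair */
	}
	return true;
}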
00:29:01.384 [2024-11-17 09:29:06.072524] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
[repeated: "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6"]
00:29:01.385 [2024-11-17 09:29:06.074486] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
[repeated: "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6"]
00:29:01.386 [2024-11-17 09:29:06.077387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
[repeated: "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6"]
00:29:01.386 [2024-11-17 09:29:06.091048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:01.386 NVMe io qpair process completion error
[repeated: "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6"]
00:29:01.386 [2024-11-17 09:29:06.093045] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
[repeated: "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6"]
00:29:01.387 [2024-11-17 09:29:06.095055] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
[repeated: "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6"]
00:29:01.387 [2024-11-17 09:29:06.097898] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
[repeated: "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6"]
00:29:01.388 [2024-11-17 09:29:06.110062] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:01.388 NVMe io qpair process completion error
[repeated: "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6"]
00:29:01.389 [2024-11-17 09:29:06.116213] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1
[repeated: "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6"]
00:29:01.390 [2024-11-17 09:29:06.128380] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:01.390 NVMe io qpair process completion error
[repeated: "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6"]
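In every one of these completions, sct=0 selects the generic command status set, and within that set sc=8 corresponds in the NVMe base specification to "Command Aborted due to SQ Deletion", which is consistent with submission queues vanishing underneath in-flight writes when a controller is removed. A hedged sketch of decoding such a completion with SPDK's public helpers follows; the callback name and output format are illustrative assumptions, not taken from this test:

/* Illustrative write-completion callback: decode the (sct, sc) pair that the
 * log prints as "Write completed with error (sct=0, sc=8)". */
#include <stdio.h>
#include "spdk/nvme.h"

static void
write_complete_cb(void *ctx, const struct spdk_nvme_cpl *cpl)
{
	if (spdk_nvme_cpl_is_error(cpl)) {
		/* status.sct / status.sc are the fields shown in the log above. */
		printf("Write completed with error (sct=%d, sc=%d): %s\n",
		       cpl->status.sct, cpl->status.sc,
		       spdk_nvme_cpl_get_status_string(&cpl->status));
	}
}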
00:29:01.390 [2024-11-17 09:29:06.132013] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
[repeated: "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6"]
00:29:01.391 [2024-11-17 09:29:06.134666] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
[repeated: "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6"]
00:29:01.392 Write completed with error
(sct=0, sc=8) 00:29:01.392 starting I/O failed: -6 00:29:01.392 [2024-11-17 09:29:06.147135] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:01.392 NVMe io qpair process completion error 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 starting I/O failed: -6 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 starting I/O failed: -6 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 starting I/O failed: -6 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 starting I/O failed: -6 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 starting I/O failed: -6 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 starting I/O failed: -6 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 starting I/O failed: -6 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 starting I/O failed: -6 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 starting I/O failed: -6 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 [2024-11-17 09:29:06.148963] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.392 starting I/O failed: -6 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 starting I/O failed: -6 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 starting I/O failed: -6 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 starting I/O failed: -6 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 starting I/O failed: -6 00:29:01.392 Write completed with error 
(sct=0, sc=8) 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 starting I/O failed: -6 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 starting I/O failed: -6 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 starting I/O failed: -6 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 starting I/O failed: -6 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 starting I/O failed: -6 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 starting I/O failed: -6 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 starting I/O failed: -6 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 starting I/O failed: -6 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 starting I/O failed: -6 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 starting I/O failed: -6 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 starting I/O failed: -6 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 starting I/O failed: -6 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 starting I/O failed: -6 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 starting I/O failed: -6 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 starting I/O failed: -6 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 starting I/O failed: -6 00:29:01.392 [2024-11-17 09:29:06.150963] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 starting I/O failed: -6 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 starting I/O failed: -6 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 starting I/O failed: -6 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 starting I/O failed: -6 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 starting I/O failed: -6 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 starting I/O failed: -6 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 starting I/O failed: -6 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 starting I/O failed: -6 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 starting I/O failed: -6 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 starting I/O failed: -6 
00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 starting I/O failed: -6 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 starting I/O failed: -6 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 starting I/O failed: -6 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 starting I/O failed: -6 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 starting I/O failed: -6 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 starting I/O failed: -6 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 starting I/O failed: -6 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.392 starting I/O failed: -6 00:29:01.392 Write completed with error (sct=0, sc=8) 00:29:01.393 starting I/O failed: -6 00:29:01.393 Write completed with error (sct=0, sc=8) 00:29:01.393 Write completed with error (sct=0, sc=8) 00:29:01.393 starting I/O failed: -6 00:29:01.393 Write completed with error (sct=0, sc=8) 00:29:01.393 starting I/O failed: -6 00:29:01.393 Write completed with error (sct=0, sc=8) 00:29:01.393 starting I/O failed: -6 00:29:01.393 Write completed with error (sct=0, sc=8) 00:29:01.393 Write completed with error (sct=0, sc=8) 00:29:01.393 starting I/O failed: -6 00:29:01.393 Write completed with error (sct=0, sc=8) 00:29:01.393 starting I/O failed: -6 00:29:01.393 Write completed with error (sct=0, sc=8) 00:29:01.393 starting I/O failed: -6 00:29:01.393 Write completed with error (sct=0, sc=8) 00:29:01.393 Write completed with error (sct=0, sc=8) 00:29:01.393 starting I/O failed: -6 00:29:01.393 Write completed with error (sct=0, sc=8) 00:29:01.393 starting I/O failed: -6 00:29:01.393 Write completed with error (sct=0, sc=8) 00:29:01.393 starting I/O failed: -6 00:29:01.393 Write completed with error (sct=0, sc=8) 00:29:01.393 Write completed with error (sct=0, sc=8) 00:29:01.393 starting I/O failed: -6 00:29:01.393 Write completed with error (sct=0, sc=8) 00:29:01.393 starting I/O failed: -6 00:29:01.393 Write completed with error (sct=0, sc=8) 00:29:01.393 starting I/O failed: -6 00:29:01.393 Write completed with error (sct=0, sc=8) 00:29:01.393 Write completed with error (sct=0, sc=8) 00:29:01.393 starting I/O failed: -6 00:29:01.393 Write completed with error (sct=0, sc=8) 00:29:01.393 starting I/O failed: -6 00:29:01.393 Write completed with error (sct=0, sc=8) 00:29:01.393 starting I/O failed: -6 00:29:01.393 Write completed with error (sct=0, sc=8) 00:29:01.393 Write completed with error (sct=0, sc=8) 00:29:01.393 starting I/O failed: -6 00:29:01.393 Write completed with error (sct=0, sc=8) 00:29:01.393 starting I/O failed: -6 00:29:01.393 Write completed with error (sct=0, sc=8) 00:29:01.393 starting I/O failed: -6 00:29:01.393 Write completed with error (sct=0, sc=8) 00:29:01.393 [2024-11-17 09:29:06.153667] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.393 Write completed with error (sct=0, sc=8) 00:29:01.393 starting I/O failed: -6 00:29:01.393 Write completed with error (sct=0, sc=8) 00:29:01.393 starting I/O failed: -6 00:29:01.393 Write completed with error (sct=0, sc=8) 00:29:01.393 starting I/O failed: -6 00:29:01.393 Write completed with error (sct=0, sc=8) 00:29:01.393 starting I/O 
failed: -6 00:29:01.393 Write completed with error (sct=0, sc=8) 00:29:01.393 starting I/O failed: -6 00:29:01.393 Write completed with error (sct=0, sc=8) 00:29:01.393 starting I/O failed: -6 00:29:01.393 Write completed with error (sct=0, sc=8) 00:29:01.393 starting I/O failed: -6 00:29:01.393 Write completed with error (sct=0, sc=8) 00:29:01.393 starting I/O failed: -6 00:29:01.393 Write completed with error (sct=0, sc=8) 00:29:01.393 starting I/O failed: -6 00:29:01.393 Write completed with error (sct=0, sc=8) 00:29:01.393 starting I/O failed: -6 00:29:01.393 Write completed with error (sct=0, sc=8) 00:29:01.393 starting I/O failed: -6 00:29:01.393 Write completed with error (sct=0, sc=8) 00:29:01.393 starting I/O failed: -6 00:29:01.393 Write completed with error (sct=0, sc=8) 00:29:01.393 starting I/O failed: -6 00:29:01.393 Write completed with error (sct=0, sc=8) 00:29:01.393 starting I/O failed: -6 00:29:01.393 Write completed with error (sct=0, sc=8) 00:29:01.393 starting I/O failed: -6 00:29:01.393 Write completed with error (sct=0, sc=8) 00:29:01.393 starting I/O failed: -6 00:29:01.393 Write completed with error (sct=0, sc=8) 00:29:01.393 starting I/O failed: -6 00:29:01.393 Write completed with error (sct=0, sc=8) 00:29:01.393 starting I/O failed: -6 00:29:01.393 Write completed with error (sct=0, sc=8) 00:29:01.393 starting I/O failed: -6 00:29:01.393 Write completed with error (sct=0, sc=8) 00:29:01.393 starting I/O failed: -6 00:29:01.393 Write completed with error (sct=0, sc=8) 00:29:01.393 starting I/O failed: -6 00:29:01.393 Write completed with error (sct=0, sc=8) 00:29:01.393 starting I/O failed: -6 00:29:01.393 Write completed with error (sct=0, sc=8) 00:29:01.393 starting I/O failed: -6 00:29:01.393 Write completed with error (sct=0, sc=8) 00:29:01.393 starting I/O failed: -6 00:29:01.393 Write completed with error (sct=0, sc=8) 00:29:01.393 starting I/O failed: -6 00:29:01.393 Write completed with error (sct=0, sc=8) 00:29:01.393 starting I/O failed: -6 00:29:01.393 Write completed with error (sct=0, sc=8) 00:29:01.393 starting I/O failed: -6 00:29:01.393 Write completed with error (sct=0, sc=8) 00:29:01.393 starting I/O failed: -6 00:29:01.393 Write completed with error (sct=0, sc=8) 00:29:01.393 starting I/O failed: -6 00:29:01.393 Write completed with error (sct=0, sc=8) 00:29:01.393 starting I/O failed: -6 00:29:01.393 Write completed with error (sct=0, sc=8) 00:29:01.393 starting I/O failed: -6 00:29:01.393 Write completed with error (sct=0, sc=8) 00:29:01.393 starting I/O failed: -6 00:29:01.393 Write completed with error (sct=0, sc=8) 00:29:01.393 starting I/O failed: -6 00:29:01.393 Write completed with error (sct=0, sc=8) 00:29:01.393 starting I/O failed: -6 00:29:01.393 Write completed with error (sct=0, sc=8) 00:29:01.393 starting I/O failed: -6 00:29:01.393 Write completed with error (sct=0, sc=8) 00:29:01.393 starting I/O failed: -6 00:29:01.393 Write completed with error (sct=0, sc=8) 00:29:01.393 starting I/O failed: -6 00:29:01.393 Write completed with error (sct=0, sc=8) 00:29:01.393 starting I/O failed: -6 00:29:01.393 Write completed with error (sct=0, sc=8) 00:29:01.393 starting I/O failed: -6 00:29:01.393 Write completed with error (sct=0, sc=8) 00:29:01.393 starting I/O failed: -6 00:29:01.393 Write completed with error (sct=0, sc=8) 00:29:01.393 starting I/O failed: -6 00:29:01.393 Write completed with error (sct=0, sc=8) 00:29:01.393 starting I/O failed: -6 00:29:01.393 Write completed with error (sct=0, sc=8) 00:29:01.393 starting I/O 
failed: -6 00:29:01.393 Write completed with error (sct=0, sc=8) 00:29:01.393 starting I/O failed: -6 00:29:01.393 Write completed with error (sct=0, sc=8) 00:29:01.393 starting I/O failed: -6 00:29:01.393 Write completed with error (sct=0, sc=8) 00:29:01.393 starting I/O failed: -6 00:29:01.393 Write completed with error (sct=0, sc=8) 00:29:01.393 starting I/O failed: -6 00:29:01.393 Write completed with error (sct=0, sc=8) 00:29:01.393 starting I/O failed: -6 00:29:01.393 Write completed with error (sct=0, sc=8) 00:29:01.393 starting I/O failed: -6 00:29:01.393 Write completed with error (sct=0, sc=8) 00:29:01.393 starting I/O failed: -6 00:29:01.393 Write completed with error (sct=0, sc=8) 00:29:01.393 starting I/O failed: -6 00:29:01.393 Write completed with error (sct=0, sc=8) 00:29:01.393 starting I/O failed: -6 00:29:01.393 Write completed with error (sct=0, sc=8) 00:29:01.393 starting I/O failed: -6 00:29:01.393 Write completed with error (sct=0, sc=8) 00:29:01.393 starting I/O failed: -6 00:29:01.393 Write completed with error (sct=0, sc=8) 00:29:01.393 starting I/O failed: -6 00:29:01.393 Write completed with error (sct=0, sc=8) 00:29:01.393 starting I/O failed: -6 00:29:01.394 Write completed with error (sct=0, sc=8) 00:29:01.394 starting I/O failed: -6 00:29:01.394 Write completed with error (sct=0, sc=8) 00:29:01.394 starting I/O failed: -6 00:29:01.394 Write completed with error (sct=0, sc=8) 00:29:01.394 starting I/O failed: -6 00:29:01.394 Write completed with error (sct=0, sc=8) 00:29:01.394 starting I/O failed: -6 00:29:01.394 Write completed with error (sct=0, sc=8) 00:29:01.394 starting I/O failed: -6 00:29:01.394 [2024-11-17 09:29:06.169341] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:01.394 NVMe io qpair process completion error 00:29:01.394 Initializing NVMe Controllers 00:29:01.394 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10 00:29:01.394 Controller IO queue size 128, less than required. 00:29:01.394 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:01.394 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2 00:29:01.394 Controller IO queue size 128, less than required. 00:29:01.394 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:01.394 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4 00:29:01.394 Controller IO queue size 128, less than required. 00:29:01.394 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:01.394 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8 00:29:01.394 Controller IO queue size 128, less than required. 00:29:01.394 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:01.394 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5 00:29:01.394 Controller IO queue size 128, less than required. 00:29:01.394 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:01.394 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6 00:29:01.394 Controller IO queue size 128, less than required. 
00:29:01.394 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:01.394 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3 00:29:01.394 Controller IO queue size 128, less than required. 00:29:01.394 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:01.394 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:01.394 Controller IO queue size 128, less than required. 00:29:01.394 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:01.394 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7 00:29:01.394 Controller IO queue size 128, less than required. 00:29:01.394 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:01.394 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9 00:29:01.394 Controller IO queue size 128, less than required. 00:29:01.394 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:01.394 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0 00:29:01.394 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0 00:29:01.394 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0 00:29:01.394 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0 00:29:01.394 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0 00:29:01.394 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0 00:29:01.394 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0 00:29:01.394 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:01.394 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0 00:29:01.394 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0 00:29:01.394 Initialization complete. Launching workers. 
00:29:01.394 ========================================================
00:29:01.394 Latency(us)
00:29:01.394 Device Information : IOPS MiB/s Average min max
00:29:01.394 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1488.97 63.98 85985.31 2262.75 193764.38
00:29:01.394 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1438.20 61.80 89148.73 2189.75 189165.69
00:29:01.394 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1429.98 61.44 89804.71 2186.51 182264.98
00:29:01.394 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1452.67 62.42 88592.89 1714.96 194737.04
00:29:01.394 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1410.11 60.59 91451.38 2072.04 210281.92
00:29:01.394 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1422.21 61.11 90891.40 1511.53 227893.49
00:29:01.394 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1448.57 62.24 89403.50 2007.83 245871.04
00:29:01.394 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1456.35 62.58 86223.39 1655.44 148546.24
00:29:01.394 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1506.91 64.75 85946.06 2321.02 220780.60
00:29:01.394 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1481.63 63.66 87610.57 1991.79 235353.77
00:29:01.394 ========================================================
00:29:01.394 Total : 14535.58 624.58 88471.04 1511.53 245871.04
00:29:01.394
00:29:01.394 [2024-11-17 09:29:06.198320] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000015e80 is same with the state(6) to be set
00:29:01.394 [2024-11-17 09:29:06.198471] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016380 is same with the state(6) to be set
00:29:01.394 [2024-11-17 09:29:06.198556] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016d80 is same with the state(6) to be set
00:29:01.394 [2024-11-17 09:29:06.198641] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000018180 is same with the state(6) to be set
00:29:01.394 [2024-11-17 09:29:06.198759] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000017280 is same with the state(6) to be set
00:29:01.394 [2024-11-17 09:29:06.198852] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000017780 is same with the state(6) to be set
00:29:01.394 [2024-11-17 09:29:06.198934] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016880 is same with the state(6) to be set
00:29:01.394 [2024-11-17 09:29:06.199035] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000015980 is same with the state(6) to be set
00:29:01.394 [2024-11-17 09:29:06.199121] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000017c80 is same with the state(6) to be set
00:29:01.394 [2024-11-17 09:29:06.199221] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000018680 is same with the state(6) to be set
00:29:01.394 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:29:04.010 09:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
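Every controller in the run above reports "Controller IO queue size 128, less than required", meaning the generator asked for a deeper queue than the 128-entry IO queue the target grants, so the excess requests sit queued inside the NVMe driver until the qpairs are torn down during shutdown. A minimal sketch of driving one of the listed subsystems with a queue depth that fits, using the perf binary path printed above; the flag values and the transport ID string are illustrative assumptions, not the invocation shutdown.sh actually uses:

    # Hedged sketch: run spdk_nvme_perf with -q below the target's 128-entry IO queue size.
    PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
    "$PERF" -q 64 -o 4096 -w randwrite -t 10 \
        -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'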
00:29:04.947 09:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 3058046 00:29:04.947 09:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0 00:29:04.947 09:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3058046 00:29:04.947 09:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait 00:29:04.947 09:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:04.947 09:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:29:04.947 09:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:04.947 09:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 3058046 00:29:04.947 09:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:29:04.947 09:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:04.947 09:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:04.947 09:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:04.947 09:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:29:04.947 09:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:04.947 09:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:04.947 09:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:04.947 09:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:04.947 09:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:04.947 09:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:29:04.947 09:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:04.947 09:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:29:04.947 09:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:04.947 09:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:04.947 rmmod nvme_tcp 00:29:04.947 rmmod nvme_fabrics 00:29:04.947 rmmod nvme_keyring 00:29:04.948 09:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:04.948 09:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:29:04.948 09:29:09 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:29:04.948 09:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 3057739 ']' 00:29:04.948 09:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 3057739 00:29:04.948 09:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 3057739 ']' 00:29:04.948 09:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 3057739 00:29:04.948 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3057739) - No such process 00:29:04.948 09:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 3057739 is not found' 00:29:04.948 Process with pid 3057739 is not found 00:29:04.948 09:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:04.948 09:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:04.948 09:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:04.948 09:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:29:04.948 09:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:29:04.948 09:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:04.948 09:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:29:04.948 09:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:04.948 09:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:04.948 09:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:04.948 09:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:04.948 09:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:07.478 09:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:07.478 00:29:07.478 real 0m13.334s 00:29:07.478 user 0m36.782s 00:29:07.478 sys 0m5.394s 00:29:07.478 09:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:07.478 09:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:07.478 ************************************ 00:29:07.478 END TEST nvmf_shutdown_tc4 00:29:07.478 ************************************ 00:29:07.478 09:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:29:07.478 00:29:07.478 real 0m54.494s 00:29:07.478 user 2m47.111s 00:29:07.478 sys 0m13.400s 00:29:07.478 09:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:07.478 09:29:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:07.478 ************************************ 00:29:07.478 END TEST nvmf_shutdown 00:29:07.478 ************************************ 00:29:07.478 09:29:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:29:07.478 09:29:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:07.478 09:29:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:07.478 09:29:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:29:07.478 ************************************ 00:29:07.478 START TEST nvmf_nsid 00:29:07.478 ************************************ 00:29:07.478 09:29:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:29:07.478 * Looking for test storage... 00:29:07.478 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:07.478 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:07.478 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:29:07.478 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:07.478 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:07.478 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:07.478 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:07.478 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:07.478 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:29:07.478 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:29:07.478 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:29:07.478 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:29:07.478 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:29:07.478 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:29:07.478 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:29:07.478 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:07.478 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:29:07.478 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:29:07.478 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:07.478 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:07.478 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:29:07.478 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:29:07.478 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:07.478 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:29:07.478 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:29:07.478 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:29:07.478 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:29:07.478 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:07.478 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:29:07.478 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:29:07.478 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:07.478 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:07.478 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:29:07.478 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:07.478 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:07.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:07.478 --rc genhtml_branch_coverage=1 00:29:07.478 --rc genhtml_function_coverage=1 00:29:07.479 --rc genhtml_legend=1 00:29:07.479 --rc geninfo_all_blocks=1 00:29:07.479 --rc geninfo_unexecuted_blocks=1 00:29:07.479 00:29:07.479 ' 00:29:07.479 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:07.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:07.479 --rc genhtml_branch_coverage=1 00:29:07.479 --rc genhtml_function_coverage=1 00:29:07.479 --rc genhtml_legend=1 00:29:07.479 --rc geninfo_all_blocks=1 00:29:07.479 --rc geninfo_unexecuted_blocks=1 00:29:07.479 00:29:07.479 ' 00:29:07.479 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:07.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:07.479 --rc genhtml_branch_coverage=1 00:29:07.479 --rc genhtml_function_coverage=1 00:29:07.479 --rc genhtml_legend=1 00:29:07.479 --rc geninfo_all_blocks=1 00:29:07.479 --rc geninfo_unexecuted_blocks=1 00:29:07.479 00:29:07.479 ' 00:29:07.479 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:07.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:07.479 --rc genhtml_branch_coverage=1 00:29:07.479 --rc genhtml_function_coverage=1 00:29:07.479 --rc genhtml_legend=1 00:29:07.479 --rc geninfo_all_blocks=1 00:29:07.479 --rc geninfo_unexecuted_blocks=1 00:29:07.479 00:29:07.479 ' 00:29:07.479 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:07.479 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:29:07.479 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:29:07.479 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:07.479 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:07.479 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:07.479 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:07.479 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:07.479 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:07.479 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:07.479 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:07.479 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:07.479 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:07.479 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:07.479 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:07.479 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:07.479 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:07.479 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:07.479 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:07.479 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:29:07.479 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:07.479 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:07.479 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:07.479 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.479 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.479 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.479 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:29:07.479 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.479 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:29:07.479 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:07.479 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:07.479 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:07.510 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:07.510 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:07.510 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:07.510 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:07.510 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:07.510 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:07.510 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:07.510 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:29:07.510 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:29:07.510 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:29:07.510 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:29:07.510 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:29:07.510 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:29:07.510 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:07.510 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:07.510 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:07.510 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:07.510 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:07.510 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:07.510 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:07.510 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:07.510 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:07.510 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:07.510 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:29:07.510 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:09.413 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:09.413 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:29:09.413 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:09.413 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:09.413 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:09.413 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:09.413 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:09.413 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:29:09.413 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:09.413 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:29:09.413 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:29:09.413 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:29:09.413 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:29:09.413 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:29:09.413 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:29:09.413 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:09.413 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:09.413 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:09.413 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:09.413 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:09.413 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:09.413 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:09.413 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:09.413 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:09.413 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:09.413 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:09.413 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:09.413 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:09.413 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:09.413 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:09.413 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:09.413 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:09.413 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:09.413 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:09.413 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:09.413 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:09.413 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:09.413 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:09.413 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:09.413 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:09.413 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:09.413 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:09.413 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:09.413 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:09.413 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:09.413 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:09.413 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:09.413 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:09.413 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
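The trace above is nvmf/common.sh walking its cached PCI list and matching the Intel E810 device ID 0x159b for both ports of the NIC (0000:0a:00.0 and 0000:0a:00.1), before mapping each port to its net device via sysfs in the next step. A rough equivalent of that lookup, assuming lspci is available; the script itself builds its list from pci_bus_cache rather than calling lspci:

    # Hedged sketch: list E810 ports (vendor 0x8086, device 0x159b) and their net devices.
    for pci in $(lspci -Dmm -d 8086:159b | awk '{print $1}'); do
        for netdev in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$netdev" ] && echo "Found net device under $pci: $(basename "$netdev")"
        done
    done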
00:29:09.413 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:09.413 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:09.413 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:09.413 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:09.413 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:09.413 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:09.413 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:09.413 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:09.413 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:09.413 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:09.413 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:09.413 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:09.413 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:09.413 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:09.413 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:09.413 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:09.413 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:09.413 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:09.413 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:09.413 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:09.413 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:09.413 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:09.413 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:09.413 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:09.413 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:29:09.413 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:09.413 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:09.413 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:09.413 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:09.413 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:09.413 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:09.413 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:09.414 09:29:14 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:09.414 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:09.414 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:09.414 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:09.414 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:09.414 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:09.414 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:09.414 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:09.414 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:09.414 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:09.414 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:09.414 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:09.414 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:09.414 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:09.414 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:09.414 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:09.414 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:09.414 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:09.414 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:09.414 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:09.414 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.159 ms 00:29:09.414 00:29:09.414 --- 10.0.0.2 ping statistics --- 00:29:09.414 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:09.414 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:29:09.414 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:09.414 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:09.414 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.071 ms 00:29:09.414 00:29:09.414 --- 10.0.0.1 ping statistics --- 00:29:09.414 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:09.414 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:29:09.414 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:09.414 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:29:09.414 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:09.414 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:09.414 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:09.414 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:09.414 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:09.414 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:09.414 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:09.414 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:29:09.414 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:09.414 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:09.414 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:09.414 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=3061048 00:29:09.414 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:29:09.414 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 3061048 00:29:09.414 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 3061048 ']' 00:29:09.414 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:09.414 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:09.414 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:09.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:09.414 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:09.414 09:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:09.414 [2024-11-17 09:29:14.395738] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:29:09.414 [2024-11-17 09:29:14.395884] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:09.672 [2024-11-17 09:29:14.544633] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:09.672 [2024-11-17 09:29:14.680057] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:09.672 [2024-11-17 09:29:14.680151] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:09.672 [2024-11-17 09:29:14.680177] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:09.672 [2024-11-17 09:29:14.680213] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:09.672 [2024-11-17 09:29:14.680233] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:09.672 [2024-11-17 09:29:14.681846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:10.606 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:10.606 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:29:10.606 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:10.606 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:10.606 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:10.606 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:10.606 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:10.606 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=3061196 00:29:10.606 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:29:10.606 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:29:10.606 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:29:10.606 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:29:10.606 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:10.606 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:10.606 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:10.606 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:10.606 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:10.606 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:10.606 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:10.606 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:10.606 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 
10.0.0.1 00:29:10.606 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:29:10.606 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:29:10.606 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=e44d5b22-db13-484c-b654-9383e66205af 00:29:10.606 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:29:10.606 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=f119e2af-fa54-466a-ab0e-43d43cd4c739 00:29:10.606 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:29:10.606 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=f79e3d97-f446-4749-aac4-997e7734324c 00:29:10.606 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:29:10.606 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.606 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:10.606 null0 00:29:10.606 null1 00:29:10.606 null2 00:29:10.606 [2024-11-17 09:29:15.511531] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:10.606 [2024-11-17 09:29:15.535828] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:10.606 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.606 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 3061196 /var/tmp/tgt2.sock 00:29:10.606 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 3061196 ']' 00:29:10.606 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:29:10.606 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:10.606 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:29:10.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:29:10.606 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:10.606 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:10.606 [2024-11-17 09:29:15.580741] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:29:10.606 [2024-11-17 09:29:15.580894] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3061196 ] 00:29:10.864 [2024-11-17 09:29:15.724017] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:10.864 [2024-11-17 09:29:15.848968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:11.799 09:29:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:11.799 09:29:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:29:11.799 09:29:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:29:12.364 [2024-11-17 09:29:17.194531] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:12.364 [2024-11-17 09:29:17.210860] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:29:12.364 nvme0n1 nvme0n2 00:29:12.364 nvme1n1 00:29:12.364 09:29:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:29:12.364 09:29:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:29:12.365 09:29:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:12.930 09:29:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:29:12.930 09:29:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:29:12.930 09:29:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:29:12.930 09:29:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:29:12.930 09:29:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:29:12.930 09:29:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:29:12.930 09:29:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:29:12.930 09:29:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:29:12.930 09:29:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:29:12.930 09:29:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:29:12.930 09:29:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:29:12.930 09:29:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:29:12.930 09:29:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:29:13.863 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:29:13.863 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:29:13.863 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:29:13.863 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:29:13.863 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:29:13.863 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid e44d5b22-db13-484c-b654-9383e66205af 00:29:13.863 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:29:13.863 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:29:13.863 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:29:13.863 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:29:13.863 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:29:14.121 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=e44d5b22db13484cb6549383e66205af 00:29:14.121 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo E44D5B22DB13484CB6549383E66205AF 00:29:14.121 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ E44D5B22DB13484CB6549383E66205AF == \E\4\4\D\5\B\2\2\D\B\1\3\4\8\4\C\B\6\5\4\9\3\8\3\E\6\6\2\0\5\A\F ]] 00:29:14.121 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:29:14.121 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:29:14.121 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:29:14.121 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:29:14.121 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:29:14.121 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:29:14.121 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:29:14.121 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid f119e2af-fa54-466a-ab0e-43d43cd4c739 00:29:14.121 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:29:14.121 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:29:14.121 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:29:14.121 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:29:14.121 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:29:14.122 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=f119e2affa54466aab0e43d43cd4c739 00:29:14.122 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo F119E2AFFA54466AAB0E43D43CD4C739 00:29:14.122 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ F119E2AFFA54466AAB0E43D43CD4C739 == \F\1\1\9\E\2\A\F\F\A\5\4\4\6\6\A\A\B\0\E\4\3\D\4\3\C\D\4\C\7\3\9 ]] 00:29:14.122 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:29:14.122 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:29:14.122 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:29:14.122 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:29:14.122 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:29:14.122 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:29:14.122 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:29:14.122 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid f79e3d97-f446-4749-aac4-997e7734324c 00:29:14.122 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:29:14.122 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:29:14.122 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:29:14.122 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:29:14.122 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:29:14.122 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=f79e3d97f4464749aac4997e7734324c 00:29:14.122 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo F79E3D97F4464749AAC4997E7734324C 00:29:14.122 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ F79E3D97F4464749AAC4997E7734324C == \F\7\9\E\3\D\9\7\F\4\4\6\4\7\4\9\A\A\C\4\9\9\7\E\7\7\3\4\3\2\4\C ]] 00:29:14.122 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:29:14.380 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:29:14.380 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:29:14.380 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 3061196 00:29:14.380 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 3061196 ']' 00:29:14.380 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 3061196 00:29:14.380 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:29:14.380 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:14.380 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3061196 00:29:14.380 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:14.380 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:14.380 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3061196' 00:29:14.380 killing process with pid 3061196 00:29:14.380 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 3061196 00:29:14.380 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 3061196 00:29:16.909 09:29:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:29:16.909 09:29:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:16.909 09:29:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:29:16.909 09:29:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
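The nsid.sh steps traced above boil down to one check per namespace: the UUID assigned on the target side, with dashes stripped and uppercased, must equal the NGUID that nvme-cli reports for the connected block device. A minimal sketch of that check, assuming helpers equivalent to the log's uuid2nguid/nvme_get_nguid (not the test's exact code), is:

#!/usr/bin/env bash
# Sketch of the NSID/NGUID verification performed in the trace above.
set -euo pipefail

uuid2nguid() {
    # Strip dashes and uppercase; assumed to match what the log's helper does.
    local u=$1
    u=${u//-/}
    echo "${u^^}"
}

nvme_get_nguid() {
    # Read the NGUID of /dev/<ctrlr>n<nsid> via nvme-cli's JSON output.
    local ctrlr=$1 nsid=$2
    nvme id-ns "/dev/${ctrlr}n${nsid}" -o json | jq -r .nguid | tr '[:lower:]' '[:upper:]'
}

ns1uuid=$(uuidgen)                 # e.g. e44d5b22-db13-484c-b654-9383e66205af in the log
expected=$(uuid2nguid "$ns1uuid")
actual=$(nvme_get_nguid nvme0 1)

[[ $actual == "$expected" ]] && echo "nsid 1: NGUID matches $expected"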
00:29:16.909 09:29:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:29:16.909 09:29:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:16.909 09:29:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:16.909 rmmod nvme_tcp 00:29:16.909 rmmod nvme_fabrics 00:29:16.909 rmmod nvme_keyring 00:29:16.909 09:29:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:16.909 09:29:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:29:16.909 09:29:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:29:16.909 09:29:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 3061048 ']' 00:29:16.909 09:29:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 3061048 00:29:16.909 09:29:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 3061048 ']' 00:29:16.909 09:29:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 3061048 00:29:16.909 09:29:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:29:16.909 09:29:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:16.909 09:29:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3061048 00:29:16.909 09:29:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:16.909 09:29:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:16.909 09:29:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3061048' 00:29:16.909 killing process with pid 3061048 00:29:16.909 09:29:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 3061048 00:29:16.909 09:29:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 3061048 00:29:17.844 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:17.844 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:17.844 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:17.844 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:29:17.844 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:29:17.844 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:17.844 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:29:17.844 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:17.844 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:17.844 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:17.844 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:17.844 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:20.376 09:29:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 
addr flush cvl_0_1 00:29:20.376 00:29:20.376 real 0m12.853s 00:29:20.376 user 0m15.646s 00:29:20.376 sys 0m3.027s 00:29:20.376 09:29:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:20.376 09:29:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:20.376 ************************************ 00:29:20.376 END TEST nvmf_nsid 00:29:20.376 ************************************ 00:29:20.376 09:29:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:29:20.376 00:29:20.376 real 18m35.533s 00:29:20.376 user 51m10.488s 00:29:20.376 sys 3m32.213s 00:29:20.376 09:29:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:20.376 09:29:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:29:20.376 ************************************ 00:29:20.376 END TEST nvmf_target_extra 00:29:20.376 ************************************ 00:29:20.376 09:29:24 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:29:20.376 09:29:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:20.376 09:29:24 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:20.376 09:29:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:20.376 ************************************ 00:29:20.376 START TEST nvmf_host 00:29:20.376 ************************************ 00:29:20.376 09:29:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:29:20.376 * Looking for test storage... 00:29:20.376 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:29:20.376 09:29:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:20.376 09:29:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:29:20.376 09:29:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:20.376 09:29:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:20.376 09:29:25 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:20.376 09:29:25 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:20.376 09:29:25 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:20.376 09:29:25 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:29:20.376 09:29:25 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:29:20.376 09:29:25 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:29:20.376 09:29:25 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:29:20.376 09:29:25 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:29:20.376 09:29:25 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:29:20.376 09:29:25 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:29:20.376 09:29:25 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:20.376 09:29:25 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:29:20.376 09:29:25 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:29:20.376 09:29:25 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:20.376 09:29:25 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:20.376 09:29:25 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:29:20.376 09:29:25 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:29:20.376 09:29:25 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:20.376 09:29:25 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:29:20.376 09:29:25 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:29:20.376 09:29:25 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:29:20.376 09:29:25 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:29:20.376 09:29:25 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:20.376 09:29:25 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:29:20.376 09:29:25 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:29:20.376 09:29:25 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:20.376 09:29:25 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:20.376 09:29:25 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:29:20.376 09:29:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:20.376 09:29:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:20.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:20.376 --rc genhtml_branch_coverage=1 00:29:20.376 --rc genhtml_function_coverage=1 00:29:20.376 --rc genhtml_legend=1 00:29:20.376 --rc geninfo_all_blocks=1 00:29:20.376 --rc geninfo_unexecuted_blocks=1 00:29:20.376 00:29:20.376 ' 00:29:20.376 09:29:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:20.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:20.376 --rc genhtml_branch_coverage=1 00:29:20.376 --rc genhtml_function_coverage=1 00:29:20.376 --rc genhtml_legend=1 00:29:20.376 --rc geninfo_all_blocks=1 00:29:20.376 --rc geninfo_unexecuted_blocks=1 00:29:20.376 00:29:20.376 ' 00:29:20.376 09:29:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:20.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:20.376 --rc genhtml_branch_coverage=1 00:29:20.376 --rc genhtml_function_coverage=1 00:29:20.376 --rc genhtml_legend=1 00:29:20.376 --rc geninfo_all_blocks=1 00:29:20.376 --rc geninfo_unexecuted_blocks=1 00:29:20.376 00:29:20.376 ' 00:29:20.376 09:29:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:20.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:20.376 --rc genhtml_branch_coverage=1 00:29:20.376 --rc genhtml_function_coverage=1 00:29:20.376 --rc genhtml_legend=1 00:29:20.376 --rc geninfo_all_blocks=1 00:29:20.376 --rc geninfo_unexecuted_blocks=1 00:29:20.376 00:29:20.376 ' 00:29:20.376 09:29:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:20.376 09:29:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:29:20.376 09:29:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:20.376 09:29:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:20.376 09:29:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:20.376 09:29:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:20.376 09:29:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
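At this point the trace is stepping through scripts/common.sh's lcov version gate (lt 1.15 2 via cmp_versions): both versions are split on ".", "-" and ":" and compared element by element. A compact sketch of that comparison, assumed to mirror the script's logic rather than copy it, is:

#!/usr/bin/env bash
# Sketch of the element-wise version comparison traced above (cmp_versions 1.15 '<' 2).
version_lt() {
    local IFS=.-:
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local i len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < len; i++ )); do
        local a=${v1[i]:-0} b=${v2[i]:-0}
        (( a > b )) && return 1
        (( a < b )) && return 0
    done
    return 1   # equal is not less-than
}

version_lt 1.15 2 && echo "1.15 < 2"   # matches the lcov check in the log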
00:29:20.376 09:29:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:20.376 09:29:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:20.376 09:29:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:20.376 09:29:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:20.376 09:29:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:20.376 09:29:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:20.376 09:29:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:20.376 09:29:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:20.376 09:29:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:20.376 09:29:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:20.376 09:29:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:20.376 09:29:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:20.376 09:29:25 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:29:20.376 09:29:25 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:20.376 09:29:25 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:20.376 09:29:25 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:20.376 09:29:25 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.376 09:29:25 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.377 09:29:25 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.377 09:29:25 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:29:20.377 09:29:25 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.377 09:29:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:29:20.377 09:29:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:20.377 09:29:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:20.377 09:29:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:20.377 09:29:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:20.377 09:29:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:20.377 09:29:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:20.377 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:20.377 09:29:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:20.377 09:29:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:20.377 09:29:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:20.377 09:29:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:29:20.377 09:29:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:29:20.377 09:29:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:29:20.377 09:29:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:29:20.377 09:29:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:20.377 09:29:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:20.377 09:29:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.377 ************************************ 00:29:20.377 START TEST nvmf_multicontroller 00:29:20.377 ************************************ 00:29:20.377 09:29:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:29:20.377 * Looking for test storage... 
00:29:20.377 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:20.377 09:29:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:20.377 09:29:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:29:20.377 09:29:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:20.377 09:29:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:20.377 09:29:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:20.377 09:29:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:20.377 09:29:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:20.377 09:29:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:29:20.377 09:29:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:29:20.377 09:29:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:29:20.377 09:29:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:29:20.377 09:29:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:29:20.377 09:29:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:29:20.377 09:29:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:29:20.377 09:29:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:20.377 09:29:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:29:20.377 09:29:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:29:20.377 09:29:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:20.377 09:29:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:20.377 09:29:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:29:20.377 09:29:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:29:20.377 09:29:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:20.377 09:29:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:29:20.377 09:29:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:29:20.377 09:29:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:29:20.377 09:29:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:29:20.377 09:29:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:20.377 09:29:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:29:20.377 09:29:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:29:20.377 09:29:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:20.377 09:29:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:20.377 09:29:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:29:20.377 09:29:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:20.377 09:29:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:20.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:20.377 --rc genhtml_branch_coverage=1 00:29:20.377 --rc genhtml_function_coverage=1 00:29:20.377 --rc genhtml_legend=1 00:29:20.377 --rc geninfo_all_blocks=1 00:29:20.377 --rc geninfo_unexecuted_blocks=1 00:29:20.377 00:29:20.377 ' 00:29:20.377 09:29:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:20.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:20.377 --rc genhtml_branch_coverage=1 00:29:20.377 --rc genhtml_function_coverage=1 00:29:20.377 --rc genhtml_legend=1 00:29:20.377 --rc geninfo_all_blocks=1 00:29:20.377 --rc geninfo_unexecuted_blocks=1 00:29:20.377 00:29:20.377 ' 00:29:20.377 09:29:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:20.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:20.377 --rc genhtml_branch_coverage=1 00:29:20.377 --rc genhtml_function_coverage=1 00:29:20.377 --rc genhtml_legend=1 00:29:20.377 --rc geninfo_all_blocks=1 00:29:20.377 --rc geninfo_unexecuted_blocks=1 00:29:20.377 00:29:20.377 ' 00:29:20.377 09:29:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:20.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:20.377 --rc genhtml_branch_coverage=1 00:29:20.377 --rc genhtml_function_coverage=1 00:29:20.377 --rc genhtml_legend=1 00:29:20.377 --rc geninfo_all_blocks=1 00:29:20.377 --rc geninfo_unexecuted_blocks=1 00:29:20.377 00:29:20.377 ' 00:29:20.377 09:29:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:20.377 09:29:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:29:20.377 09:29:25 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:20.377 09:29:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:20.377 09:29:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:20.377 09:29:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:20.377 09:29:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:20.377 09:29:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:20.377 09:29:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:20.377 09:29:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:20.377 09:29:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:20.377 09:29:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:20.377 09:29:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:20.377 09:29:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:20.377 09:29:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:20.377 09:29:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:20.377 09:29:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:20.377 09:29:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:20.377 09:29:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:20.377 09:29:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:29:20.377 09:29:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:20.377 09:29:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:20.377 09:29:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:20.377 09:29:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.378 09:29:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.378 09:29:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.378 09:29:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:29:20.378 09:29:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.378 09:29:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:29:20.378 09:29:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:20.378 09:29:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:20.378 09:29:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:20.378 09:29:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:20.378 09:29:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:20.378 09:29:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:20.378 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:20.378 09:29:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:20.378 09:29:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:20.378 09:29:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:20.378 09:29:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:20.378 09:29:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:20.378 09:29:25 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:29:20.378 09:29:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:29:20.378 09:29:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:20.378 09:29:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:29:20.378 09:29:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:29:20.378 09:29:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:20.378 09:29:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:20.378 09:29:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:20.378 09:29:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:20.378 09:29:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:20.378 09:29:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:20.378 09:29:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:20.378 09:29:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:20.378 09:29:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:20.378 09:29:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:20.378 09:29:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:29:20.378 09:29:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:22.281 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:22.281 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:29:22.281 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:22.281 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:22.281 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:22.281 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:22.281 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:22.281 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:29:22.281 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:22.281 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:29:22.281 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:29:22.281 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:29:22.281 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:29:22.281 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:29:22.281 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:29:22.281 
09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:22.281 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:22.281 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:22.281 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:22.281 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:22.281 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:22.281 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:22.281 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:22.281 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:22.281 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:22.281 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:22.281 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:22.281 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:22.281 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:22.281 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:22.281 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:22.281 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:22.281 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:22.281 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:22.281 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:22.281 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:22.281 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:22.281 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:22.281 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:22.281 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:22.281 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:22.281 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:22.281 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:22.281 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:22.281 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:22.281 09:29:27 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:22.281 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:22.281 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:22.281 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:22.281 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:22.281 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:22.281 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:22.281 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:22.281 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:22.281 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:22.281 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:22.281 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:22.281 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:22.281 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:22.281 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:22.281 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:22.281 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:22.281 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:22.281 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:22.281 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:22.282 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:22.282 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:22.282 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:22.282 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:22.282 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:22.282 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:22.282 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:22.282 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:22.282 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:29:22.282 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:22.282 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:22.282 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 
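At this point nvmf/common.sh has matched both Intel E810 functions (0000:0a:00.0 and 0000:0a:00.1, device 0x8086:0x159b) and resolved their kernel interfaces cvl_0_0 and cvl_0_1 from sysfs, so is_hw=yes and the TCP path is taken. A standalone sketch of that lookup, assuming only the PCI addresses reported above and mirroring the pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) expansion traced here, would be:

for pci in 0000:0a:00.0 0000:0a:00.1; do            # addresses reported by the scan above
    for path in "/sys/bus/pci/devices/$pci/net/"*; do
        [ -e "$path" ] || continue                   # skip functions that expose no net interface
        echo "Found net devices under $pci: ${path##*/}"
    done
done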
00:29:22.282 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:22.282 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:22.282 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:22.282 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:22.282 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:22.282 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:22.282 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:22.282 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:22.282 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:22.282 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:22.282 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:22.282 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:22.282 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:22.282 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:22.282 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:22.282 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:22.282 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:22.282 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:22.282 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:22.282 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:22.540 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:22.540 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:22.540 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:22.540 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:22.540 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:29:22.540 00:29:22.540 --- 10.0.0.2 ping statistics --- 00:29:22.540 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:22.540 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:29:22.540 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:22.540 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:22.540 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:29:22.540 00:29:22.540 --- 10.0.0.1 ping statistics --- 00:29:22.540 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:22.540 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:29:22.540 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:22.540 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:29:22.540 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:22.540 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:22.540 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:22.540 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:22.540 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:22.540 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:22.540 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:22.541 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:29:22.541 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:22.541 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:22.541 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:22.541 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=3064038 00:29:22.541 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:22.541 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 3064038 00:29:22.541 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 3064038 ']' 00:29:22.541 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:22.541 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:22.541 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:22.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:22.541 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:22.541 09:29:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:22.541 [2024-11-17 09:29:27.433327] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:29:22.541 [2024-11-17 09:29:27.433507] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:22.799 [2024-11-17 09:29:27.598614] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:22.799 [2024-11-17 09:29:27.741192] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:22.799 [2024-11-17 09:29:27.741281] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:22.799 [2024-11-17 09:29:27.741308] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:22.799 [2024-11-17 09:29:27.741332] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:22.799 [2024-11-17 09:29:27.741352] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:22.799 [2024-11-17 09:29:27.744122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:22.799 [2024-11-17 09:29:27.744215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:22.799 [2024-11-17 09:29:27.744221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:23.365 09:29:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:23.365 09:29:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:29:23.365 09:29:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:23.365 09:29:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:23.365 09:29:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:23.365 09:29:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:23.365 09:29:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:23.623 09:29:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.623 09:29:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:23.623 [2024-11-17 09:29:28.380555] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:23.623 09:29:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.623 09:29:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:23.623 09:29:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.623 09:29:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:23.623 Malloc0 00:29:23.623 09:29:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.623 09:29:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:23.623 09:29:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.623 09:29:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:29:23.623 09:29:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.623 09:29:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:23.623 09:29:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.623 09:29:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:23.623 09:29:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.623 09:29:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:23.623 09:29:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.623 09:29:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:23.623 [2024-11-17 09:29:28.503556] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:23.623 09:29:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.623 09:29:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:23.623 09:29:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.623 09:29:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:23.623 [2024-11-17 09:29:28.511461] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:23.623 09:29:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.623 09:29:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:29:23.623 09:29:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.623 09:29:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:23.623 Malloc1 00:29:23.623 09:29:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.623 09:29:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:29:23.623 09:29:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.623 09:29:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:23.623 09:29:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.623 09:29:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:29:23.623 09:29:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.623 09:29:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:23.623 09:29:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.623 09:29:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:23.623 09:29:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.623 09:29:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:23.623 09:29:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.623 09:29:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:29:23.623 09:29:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.623 09:29:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:23.623 09:29:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.623 09:29:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=3064194 00:29:23.623 09:29:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:23.623 09:29:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 3064194 /var/tmp/bdevperf.sock 00:29:23.623 09:29:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:29:23.623 09:29:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 3064194 ']' 00:29:23.623 09:29:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:23.623 09:29:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:23.623 09:29:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:23.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
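The target side is now fully provisioned: a TCP transport, two 64 MiB / 512 B malloc bdevs, and two subsystems (cnode1 backed by Malloc0, cnode2 backed by Malloc1) each listening on 10.0.0.2 ports 4420 and 4421, with bdevperf started against /var/tmp/bdevperf.sock. Condensed into direct scripts/rpc.py calls (a sketch standing in for the harness's rpc_cmd wrapper; the subcommands and arguments are copied from the trace above), the cnode1 half of that setup is:

rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
# cnode2 repeats the subsystem/namespace/listener steps with Malloc1 and serial SPDK00000000000002.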
00:29:23.623 09:29:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:23.623 09:29:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:24.997 09:29:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:24.997 09:29:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:29:24.997 09:29:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:29:24.997 09:29:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.997 09:29:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:24.997 NVMe0n1 00:29:24.997 09:29:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.997 09:29:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:24.997 09:29:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:29:24.997 09:29:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.997 09:29:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:24.997 09:29:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.997 1 00:29:24.997 09:29:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:24.997 09:29:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:29:24.997 09:29:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:24.997 09:29:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:24.997 09:29:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:24.997 09:29:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:24.997 09:29:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:24.997 09:29:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:24.997 09:29:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.997 09:29:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:24.997 request: 00:29:24.997 { 00:29:24.997 "name": "NVMe0", 00:29:24.997 "trtype": "tcp", 00:29:24.997 "traddr": "10.0.0.2", 00:29:24.997 "adrfam": "ipv4", 00:29:24.997 "trsvcid": "4420", 00:29:24.997 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:29:24.997 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:29:24.997 "hostaddr": "10.0.0.1", 00:29:24.997 "prchk_reftag": false, 00:29:24.997 "prchk_guard": false, 00:29:24.997 "hdgst": false, 00:29:24.997 "ddgst": false, 00:29:24.997 "allow_unrecognized_csi": false, 00:29:24.997 "method": "bdev_nvme_attach_controller", 00:29:24.997 "req_id": 1 00:29:24.997 } 00:29:24.997 Got JSON-RPC error response 00:29:24.997 response: 00:29:24.997 { 00:29:24.997 "code": -114, 00:29:24.997 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:24.997 } 00:29:24.997 09:29:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:24.997 09:29:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:29:24.997 09:29:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:24.997 09:29:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:24.997 09:29:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:24.997 09:29:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:24.997 09:29:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:29:24.997 09:29:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:24.997 09:29:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:24.997 09:29:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:24.997 09:29:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:24.997 09:29:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:24.997 09:29:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:24.997 09:29:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.997 09:29:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:24.997 request: 00:29:24.997 { 00:29:24.997 "name": "NVMe0", 00:29:24.997 "trtype": "tcp", 00:29:24.997 "traddr": "10.0.0.2", 00:29:24.997 "adrfam": "ipv4", 00:29:24.997 "trsvcid": "4420", 00:29:24.997 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:24.997 "hostaddr": "10.0.0.1", 00:29:24.997 "prchk_reftag": false, 00:29:24.997 "prchk_guard": false, 00:29:24.997 "hdgst": false, 00:29:24.997 "ddgst": false, 00:29:24.997 "allow_unrecognized_csi": false, 00:29:24.997 "method": "bdev_nvme_attach_controller", 00:29:24.997 "req_id": 1 00:29:24.997 } 00:29:24.997 Got JSON-RPC error response 00:29:24.997 response: 00:29:24.997 { 00:29:24.997 "code": -114, 00:29:24.997 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:24.997 } 00:29:24.997 09:29:29 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:24.997 09:29:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:29:24.997 09:29:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:24.997 09:29:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:24.997 09:29:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:24.997 09:29:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:24.997 09:29:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:29:24.997 09:29:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:24.997 09:29:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:24.997 09:29:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:24.997 09:29:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:24.997 09:29:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:24.997 09:29:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:24.997 09:29:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.997 09:29:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:24.997 request: 00:29:24.997 { 00:29:24.997 "name": "NVMe0", 00:29:24.997 "trtype": "tcp", 00:29:24.997 "traddr": "10.0.0.2", 00:29:24.997 "adrfam": "ipv4", 00:29:24.997 "trsvcid": "4420", 00:29:24.997 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:24.997 "hostaddr": "10.0.0.1", 00:29:24.997 "prchk_reftag": false, 00:29:24.997 "prchk_guard": false, 00:29:24.997 "hdgst": false, 00:29:24.997 "ddgst": false, 00:29:24.997 "multipath": "disable", 00:29:24.997 "allow_unrecognized_csi": false, 00:29:24.997 "method": "bdev_nvme_attach_controller", 00:29:24.997 "req_id": 1 00:29:24.997 } 00:29:24.997 Got JSON-RPC error response 00:29:24.997 response: 00:29:24.997 { 00:29:24.997 "code": -114, 00:29:24.998 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:29:24.998 } 00:29:24.998 09:29:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:24.998 09:29:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:29:24.998 09:29:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:24.998 09:29:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:24.998 09:29:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:24.998 09:29:29 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:24.998 09:29:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:29:24.998 09:29:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:24.998 09:29:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:24.998 09:29:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:24.998 09:29:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:24.998 09:29:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:24.998 09:29:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:24.998 09:29:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.998 09:29:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:24.998 request: 00:29:24.998 { 00:29:24.998 "name": "NVMe0", 00:29:24.998 "trtype": "tcp", 00:29:24.998 "traddr": "10.0.0.2", 00:29:24.998 "adrfam": "ipv4", 00:29:24.998 "trsvcid": "4420", 00:29:24.998 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:24.998 "hostaddr": "10.0.0.1", 00:29:24.998 "prchk_reftag": false, 00:29:24.998 "prchk_guard": false, 00:29:24.998 "hdgst": false, 00:29:24.998 "ddgst": false, 00:29:24.998 "multipath": "failover", 00:29:24.998 "allow_unrecognized_csi": false, 00:29:24.998 "method": "bdev_nvme_attach_controller", 00:29:24.998 "req_id": 1 00:29:24.998 } 00:29:24.998 Got JSON-RPC error response 00:29:24.998 response: 00:29:24.998 { 00:29:24.998 "code": -114, 00:29:24.998 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:24.998 } 00:29:24.998 09:29:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:24.998 09:29:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:29:24.998 09:29:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:24.998 09:29:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:24.998 09:29:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:24.998 09:29:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:24.998 09:29:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.998 09:29:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:24.998 NVMe0n1 00:29:24.998 09:29:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
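Each of the four NOT rpc_cmd attach attempts above (different host NQN, a different subsystem, multipath "disable", and "failover" with a conflicting path) was rejected with JSON-RPC error -114, while the plain re-attach through the second listener on port 4421 went through and produced NVMe0n1 again. The harness's NOT helper effectively inverts the exit status so an expected failure counts as a pass; a standalone expected-failure check along the same lines (socket, names, and arguments as in the trace above) might look like:

# Re-attaching under an existing controller name with a conflicting path must fail:
if rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1; then
    echo "unexpected success: duplicate controller name was accepted" >&2
    exit 1
fi
echo "got the expected 'controller NVMe0 already exists' rejection (-114)"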
00:29:24.998 09:29:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:24.998 09:29:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.998 09:29:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:24.998 09:29:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.998 09:29:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:29:24.998 09:29:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.998 09:29:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:25.256 00:29:25.256 09:29:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.256 09:29:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:25.256 09:29:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:29:25.256 09:29:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.256 09:29:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:25.256 09:29:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.256 09:29:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:29:25.256 09:29:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:26.191 { 00:29:26.191 "results": [ 00:29:26.191 { 00:29:26.191 "job": "NVMe0n1", 00:29:26.191 "core_mask": "0x1", 00:29:26.191 "workload": "write", 00:29:26.191 "status": "finished", 00:29:26.191 "queue_depth": 128, 00:29:26.191 "io_size": 4096, 00:29:26.191 "runtime": 1.007732, 00:29:26.191 "iops": 12856.592824282647, 00:29:26.191 "mibps": 50.22106571985409, 00:29:26.191 "io_failed": 0, 00:29:26.191 "io_timeout": 0, 00:29:26.191 "avg_latency_us": 9938.34240746458, 00:29:26.191 "min_latency_us": 3810.797037037037, 00:29:26.191 "max_latency_us": 19126.802962962964 00:29:26.191 } 00:29:26.191 ], 00:29:26.191 "core_count": 1 00:29:26.191 } 00:29:26.191 09:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:29:26.191 09:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.191 09:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:26.449 09:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.449 09:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:29:26.449 09:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 3064194 00:29:26.449 09:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@954 -- # '[' -z 3064194 ']' 00:29:26.449 09:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 3064194 00:29:26.449 09:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:29:26.449 09:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:26.449 09:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3064194 00:29:26.449 09:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:26.449 09:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:26.449 09:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3064194' 00:29:26.449 killing process with pid 3064194 00:29:26.449 09:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 3064194 00:29:26.449 09:29:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 3064194 00:29:27.383 09:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:27.383 09:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.383 09:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:27.383 09:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.383 09:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:27.383 09:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.383 09:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:27.383 09:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.383 09:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:29:27.383 09:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:27.383 09:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:29:27.383 09:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:29:27.383 09:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:29:27.383 09:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:29:27.383 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:29:27.383 [2024-11-17 09:29:28.707987] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:29:27.383 [2024-11-17 09:29:28.708147] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3064194 ] 00:29:27.383 [2024-11-17 09:29:28.846034] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:27.383 [2024-11-17 09:29:28.973768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:27.383 [2024-11-17 09:29:30.018658] bdev.c:4691:bdev_name_add: *ERROR*: Bdev name 30583679-1230-4691-bd91-b45c87339aec already exists 00:29:27.383 [2024-11-17 09:29:30.018745] bdev.c:7842:bdev_register: *ERROR*: Unable to add uuid:30583679-1230-4691-bd91-b45c87339aec alias for bdev NVMe1n1 00:29:27.383 [2024-11-17 09:29:30.018781] bdev_nvme.c:4658:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:29:27.383 Running I/O for 1 seconds... 00:29:27.383 12828.00 IOPS, 50.11 MiB/s 00:29:27.383 Latency(us) 00:29:27.383 [2024-11-17T08:29:32.396Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:27.383 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:29:27.383 NVMe0n1 : 1.01 12856.59 50.22 0.00 0.00 9938.34 3810.80 19126.80 00:29:27.383 [2024-11-17T08:29:32.396Z] =================================================================================================================== 00:29:27.383 [2024-11-17T08:29:32.396Z] Total : 12856.59 50.22 0.00 0.00 9938.34 3810.80 19126.80 00:29:27.383 Received shutdown signal, test time was about 1.000000 seconds 00:29:27.383 00:29:27.383 Latency(us) 00:29:27.383 [2024-11-17T08:29:32.397Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:27.384 [2024-11-17T08:29:32.397Z] =================================================================================================================== 00:29:27.384 [2024-11-17T08:29:32.397Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:27.384 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:29:27.384 09:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:27.384 09:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:29:27.384 09:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:29:27.384 09:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:27.384 09:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:29:27.384 09:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:27.384 09:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:29:27.384 09:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:27.384 09:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:27.384 rmmod nvme_tcp 00:29:27.384 rmmod nvme_fabrics 00:29:27.384 rmmod nvme_keyring 00:29:27.384 09:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:27.384 09:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:29:27.384 09:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:29:27.384 
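As a quick consistency check on the bdevperf summary captured in try.txt above (illustrative arithmetic only, not part of the test run): each I/O in this job is 4096 bytes, so the reported IOPS and MiB/s columns should agree, and they do.

awk 'BEGIN { printf "%.2f MiB/s\n", 12856.59 * 4096 / (1024 * 1024) }'   # prints 50.22, matching the MiB/s column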
09:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 3064038 ']' 00:29:27.384 09:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 3064038 00:29:27.384 09:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 3064038 ']' 00:29:27.384 09:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 3064038 00:29:27.384 09:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:29:27.384 09:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:27.384 09:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3064038 00:29:27.384 09:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:27.384 09:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:27.384 09:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3064038' 00:29:27.384 killing process with pid 3064038 00:29:27.384 09:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 3064038 00:29:27.384 09:29:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 3064038 00:29:28.757 09:29:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:28.757 09:29:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:28.757 09:29:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:28.757 09:29:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:29:28.757 09:29:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:29:28.757 09:29:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:28.757 09:29:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:29:28.757 09:29:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:28.757 09:29:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:28.757 09:29:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:28.757 09:29:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:28.757 09:29:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:30.660 09:29:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:30.660 00:29:30.660 real 0m10.573s 00:29:30.660 user 0m21.464s 00:29:30.660 sys 0m2.618s 00:29:30.660 09:29:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:30.660 09:29:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:30.660 ************************************ 00:29:30.660 END TEST nvmf_multicontroller 00:29:30.660 ************************************ 00:29:30.660 09:29:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh 
--transport=tcp 00:29:30.660 09:29:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:30.660 09:29:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:30.660 09:29:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.919 ************************************ 00:29:30.919 START TEST nvmf_aer 00:29:30.919 ************************************ 00:29:30.919 09:29:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:29:30.919 * Looking for test storage... 00:29:30.919 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:30.919 09:29:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:30.919 09:29:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:29:30.919 09:29:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:30.919 09:29:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:30.919 09:29:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:30.919 09:29:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:30.919 09:29:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:30.919 09:29:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:29:30.919 09:29:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:29:30.919 09:29:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:29:30.919 09:29:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:29:30.919 09:29:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:29:30.919 09:29:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:29:30.919 09:29:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:29:30.919 09:29:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:30.919 09:29:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:29:30.919 09:29:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:29:30.919 09:29:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:30.919 09:29:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:30.919 09:29:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:29:30.919 09:29:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:29:30.919 09:29:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:30.919 09:29:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:29:30.919 09:29:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:29:30.919 09:29:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:29:30.919 09:29:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:29:30.919 09:29:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:30.919 09:29:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:29:30.919 09:29:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:29:30.919 09:29:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:30.919 09:29:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:30.919 09:29:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:29:30.919 09:29:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:30.919 09:29:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:30.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:30.919 --rc genhtml_branch_coverage=1 00:29:30.919 --rc genhtml_function_coverage=1 00:29:30.919 --rc genhtml_legend=1 00:29:30.919 --rc geninfo_all_blocks=1 00:29:30.919 --rc geninfo_unexecuted_blocks=1 00:29:30.919 00:29:30.919 ' 00:29:30.919 09:29:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:30.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:30.919 --rc genhtml_branch_coverage=1 00:29:30.919 --rc genhtml_function_coverage=1 00:29:30.919 --rc genhtml_legend=1 00:29:30.919 --rc geninfo_all_blocks=1 00:29:30.919 --rc geninfo_unexecuted_blocks=1 00:29:30.919 00:29:30.919 ' 00:29:30.919 09:29:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:30.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:30.919 --rc genhtml_branch_coverage=1 00:29:30.919 --rc genhtml_function_coverage=1 00:29:30.919 --rc genhtml_legend=1 00:29:30.919 --rc geninfo_all_blocks=1 00:29:30.919 --rc geninfo_unexecuted_blocks=1 00:29:30.919 00:29:30.919 ' 00:29:30.919 09:29:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:30.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:30.919 --rc genhtml_branch_coverage=1 00:29:30.919 --rc genhtml_function_coverage=1 00:29:30.919 --rc genhtml_legend=1 00:29:30.919 --rc geninfo_all_blocks=1 00:29:30.919 --rc geninfo_unexecuted_blocks=1 00:29:30.919 00:29:30.919 ' 00:29:30.919 09:29:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:30.919 09:29:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:29:30.919 09:29:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:30.919 09:29:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:30.919 09:29:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:29:30.919 09:29:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:30.919 09:29:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:30.919 09:29:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:30.919 09:29:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:30.919 09:29:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:30.919 09:29:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:30.919 09:29:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:30.919 09:29:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:30.919 09:29:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:30.919 09:29:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:30.919 09:29:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:30.919 09:29:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:30.919 09:29:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:30.919 09:29:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:30.919 09:29:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:29:30.919 09:29:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:30.919 09:29:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:30.920 09:29:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:30.920 09:29:35 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:30.920 09:29:35 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:30.920 09:29:35 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:30.920 09:29:35 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:29:30.920 09:29:35 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:30.920 09:29:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:29:30.920 09:29:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:30.920 09:29:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:30.920 09:29:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:30.920 09:29:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:30.920 09:29:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:30.920 09:29:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:30.920 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:30.920 09:29:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:30.920 09:29:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:30.920 09:29:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:30.920 09:29:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:29:30.920 09:29:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:30.920 09:29:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:30.920 09:29:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:30.920 09:29:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:30.920 09:29:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:30.920 09:29:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:30.920 09:29:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:30.920 09:29:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:30.920 09:29:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:30.920 09:29:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:29:30.920 09:29:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:29:30.920 09:29:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:33.452 09:29:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:33.452 09:29:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:29:33.452 09:29:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:33.452 09:29:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:33.452 09:29:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:33.452 09:29:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:33.452 09:29:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:33.452 09:29:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:29:33.452 09:29:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:33.452 09:29:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:29:33.452 09:29:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:29:33.452 09:29:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:29:33.452 09:29:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:29:33.452 09:29:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:29:33.452 09:29:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:29:33.452 09:29:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:33.452 09:29:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:33.452 09:29:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:33.452 09:29:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:33.452 09:29:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:33.452 09:29:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:33.452 09:29:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:33.452 09:29:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:33.452 09:29:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:33.452 09:29:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:33.452 09:29:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:33.452 09:29:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:33.452 09:29:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:33.452 09:29:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:33.452 09:29:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:33.452 09:29:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:33.452 09:29:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:29:33.452 09:29:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:33.452 09:29:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:33.452 09:29:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:33.452 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:33.452 09:29:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:33.452 09:29:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:33.452 09:29:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:33.452 09:29:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:33.452 09:29:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:33.452 09:29:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:33.452 09:29:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:33.452 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:33.452 09:29:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:33.452 09:29:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:33.452 09:29:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:33.452 09:29:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:33.452 09:29:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:33.452 09:29:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:33.452 09:29:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:33.453 09:29:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:33.453 09:29:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:33.453 09:29:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:33.453 09:29:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:33.453 09:29:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:33.453 09:29:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:33.453 09:29:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:33.453 09:29:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:33.453 09:29:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:33.453 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:33.453 09:29:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:33.453 09:29:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:33.453 09:29:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:33.453 09:29:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:33.453 09:29:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:33.453 09:29:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:33.453 09:29:37 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:33.453 09:29:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:33.453 09:29:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:33.453 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:33.453 09:29:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:33.453 09:29:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:33.453 09:29:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:29:33.453 09:29:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:33.453 09:29:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:33.453 09:29:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:33.453 09:29:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:33.453 09:29:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:33.453 09:29:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:33.453 09:29:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:33.453 09:29:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:33.453 09:29:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:33.453 09:29:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:33.453 09:29:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:33.453 09:29:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:33.453 09:29:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:33.453 09:29:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:33.453 09:29:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:33.453 09:29:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:33.453 09:29:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:33.453 09:29:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:33.453 09:29:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:33.453 09:29:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:33.453 09:29:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:33.453 09:29:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:33.453 09:29:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:33.453 09:29:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:33.453 09:29:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:33.453 
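
The block above is the framework's nvmf_tcp_init step: the first detected e810 port (cvl_0_0) is moved into a private network namespace to act as the NVMe-oF target, the second port (cvl_0_1) stays in the root namespace as the initiator, and TCP/4420 is opened in iptables. As a reference, the same setup condenses into the short shell sketch below; the interface names and 10.0.0.x addresses are specific to this run and will differ on other hosts.

    ip netns add cvl_0_0_ns_spdk                         # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # move target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator IP (root namespace)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

The two pings that follow simply verify reachability in both directions before the target application is started inside the namespace.
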
09:29:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:33.453 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:33.453 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.310 ms 00:29:33.453 00:29:33.453 --- 10.0.0.2 ping statistics --- 00:29:33.453 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:33.453 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:29:33.453 09:29:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:33.453 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:33.453 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:29:33.453 00:29:33.453 --- 10.0.0.1 ping statistics --- 00:29:33.453 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:33.453 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:29:33.453 09:29:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:33.453 09:29:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:29:33.453 09:29:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:33.453 09:29:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:33.453 09:29:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:33.453 09:29:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:33.453 09:29:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:33.453 09:29:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:33.453 09:29:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:33.453 09:29:38 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:29:33.453 09:29:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:33.453 09:29:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:33.453 09:29:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:33.453 09:29:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=3066678 00:29:33.453 09:29:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 3066678 00:29:33.453 09:29:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:33.453 09:29:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 3066678 ']' 00:29:33.453 09:29:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:33.453 09:29:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:33.453 09:29:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:33.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:33.453 09:29:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:33.453 09:29:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:33.453 [2024-11-17 09:29:38.205057] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:29:33.453 [2024-11-17 09:29:38.205203] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:33.453 [2024-11-17 09:29:38.353784] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:33.710 [2024-11-17 09:29:38.491816] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:33.710 [2024-11-17 09:29:38.491900] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:33.710 [2024-11-17 09:29:38.491926] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:33.710 [2024-11-17 09:29:38.491950] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:33.710 [2024-11-17 09:29:38.491969] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:33.710 [2024-11-17 09:29:38.494798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:33.710 [2024-11-17 09:29:38.494870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:33.710 [2024-11-17 09:29:38.494974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:33.710 [2024-11-17 09:29:38.494979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:34.276 09:29:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:34.276 09:29:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:29:34.276 09:29:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:34.276 09:29:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:34.276 09:29:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:34.276 09:29:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:34.276 09:29:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:34.276 09:29:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.276 09:29:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:34.276 [2024-11-17 09:29:39.185994] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:34.276 09:29:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.276 09:29:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:29:34.276 09:29:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.276 09:29:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:34.276 Malloc0 00:29:34.276 09:29:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.276 09:29:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:29:34.276 09:29:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.276 09:29:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:34.276 09:29:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:29:34.276 09:29:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:34.276 09:29:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.276 09:29:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:34.581 09:29:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.581 09:29:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:34.581 09:29:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.581 09:29:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:34.581 [2024-11-17 09:29:39.298485] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:34.581 09:29:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.581 09:29:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:29:34.581 09:29:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.581 09:29:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:34.581 [ 00:29:34.581 { 00:29:34.581 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:34.581 "subtype": "Discovery", 00:29:34.581 "listen_addresses": [], 00:29:34.581 "allow_any_host": true, 00:29:34.581 "hosts": [] 00:29:34.581 }, 00:29:34.581 { 00:29:34.581 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:34.581 "subtype": "NVMe", 00:29:34.581 "listen_addresses": [ 00:29:34.581 { 00:29:34.581 "trtype": "TCP", 00:29:34.581 "adrfam": "IPv4", 00:29:34.581 "traddr": "10.0.0.2", 00:29:34.581 "trsvcid": "4420" 00:29:34.581 } 00:29:34.581 ], 00:29:34.581 "allow_any_host": true, 00:29:34.581 "hosts": [], 00:29:34.581 "serial_number": "SPDK00000000000001", 00:29:34.581 "model_number": "SPDK bdev Controller", 00:29:34.581 "max_namespaces": 2, 00:29:34.581 "min_cntlid": 1, 00:29:34.581 "max_cntlid": 65519, 00:29:34.581 "namespaces": [ 00:29:34.581 { 00:29:34.581 "nsid": 1, 00:29:34.581 "bdev_name": "Malloc0", 00:29:34.581 "name": "Malloc0", 00:29:34.581 "nguid": "B3AC5A9C75E14833BAADB96C62E86C1A", 00:29:34.581 "uuid": "b3ac5a9c-75e1-4833-baad-b96c62e86c1a" 00:29:34.581 } 00:29:34.581 ] 00:29:34.581 } 00:29:34.581 ] 00:29:34.581 09:29:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.581 09:29:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:29:34.581 09:29:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:29:34.581 09:29:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=3066833 00:29:34.581 09:29:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:29:34.581 09:29:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:29:34.581 09:29:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:29:34.581 09:29:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:29:34.581 09:29:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:29:34.581 09:29:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:29:34.581 09:29:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:29:34.581 09:29:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:34.581 09:29:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:29:34.581 09:29:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:29:34.581 09:29:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:29:34.581 09:29:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:34.581 09:29:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 2 -lt 200 ']' 00:29:34.581 09:29:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=3 00:29:34.581 09:29:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:29:34.863 09:29:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:34.863 09:29:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 3 -lt 200 ']' 00:29:34.863 09:29:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=4 00:29:34.863 09:29:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:29:34.863 09:29:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:34.863 09:29:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:29:34.863 09:29:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:29:34.863 09:29:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:29:34.863 09:29:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.863 09:29:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:35.121 Malloc1 00:29:35.121 09:29:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.121 09:29:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:29:35.121 09:29:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.121 09:29:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:35.121 09:29:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.121 09:29:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:29:35.121 09:29:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.121 09:29:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:35.121 [ 00:29:35.121 { 00:29:35.121 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:35.121 "subtype": "Discovery", 00:29:35.121 "listen_addresses": [], 00:29:35.121 "allow_any_host": true, 00:29:35.121 "hosts": [] 00:29:35.121 }, 00:29:35.121 { 00:29:35.121 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:35.121 "subtype": "NVMe", 00:29:35.121 "listen_addresses": [ 00:29:35.121 { 00:29:35.121 "trtype": "TCP", 00:29:35.121 "adrfam": "IPv4", 00:29:35.121 "traddr": "10.0.0.2", 00:29:35.121 "trsvcid": "4420" 00:29:35.121 } 00:29:35.121 ], 00:29:35.121 "allow_any_host": true, 00:29:35.121 "hosts": [], 00:29:35.121 "serial_number": "SPDK00000000000001", 00:29:35.121 "model_number": "SPDK bdev Controller", 00:29:35.121 "max_namespaces": 2, 00:29:35.121 "min_cntlid": 1, 00:29:35.121 "max_cntlid": 65519, 00:29:35.121 "namespaces": [ 00:29:35.121 { 00:29:35.121 "nsid": 1, 00:29:35.121 "bdev_name": "Malloc0", 00:29:35.121 "name": "Malloc0", 00:29:35.121 "nguid": "B3AC5A9C75E14833BAADB96C62E86C1A", 00:29:35.121 "uuid": "b3ac5a9c-75e1-4833-baad-b96c62e86c1a" 00:29:35.121 }, 00:29:35.121 { 00:29:35.121 "nsid": 2, 00:29:35.121 "bdev_name": "Malloc1", 00:29:35.121 "name": "Malloc1", 00:29:35.121 "nguid": "319483C22AF44CF495116B62F3BF3EF6", 00:29:35.121 "uuid": "319483c2-2af4-4cf4-9511-6b62f3bf3ef6" 00:29:35.121 } 00:29:35.121 ] 00:29:35.121 } 00:29:35.121 ] 00:29:35.121 09:29:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.121 09:29:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 3066833 00:29:35.121 Asynchronous Event Request test 00:29:35.121 Attaching to 10.0.0.2 00:29:35.121 Attached to 10.0.0.2 00:29:35.121 Registering asynchronous event callbacks... 00:29:35.121 Starting namespace attribute notice tests for all controllers... 00:29:35.121 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:29:35.121 aer_cb - Changed Namespace 00:29:35.121 Cleaning up... 
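
Stripped of the xtrace noise, the host/aer.sh scenario that produced the output above boils down to the sketch below. It mirrors the rpc_cmd calls and the aer invocation recorded in the trace; it assumes it is run from the SPDK repository root with nvmf_tgt already listening on the default /var/tmp/spdk.sock, and it is illustrative rather than a substitute for the test script.

    # Target configuration, matching the rpc_cmd calls in the trace above.
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Start the AER listener; it touches the file once it is connected and armed.
    rm -f /tmp/aer_touch_file
    ./test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file &
    aerpid=$!
    while [ ! -e /tmp/aer_touch_file ]; do sleep 0.1; done

    # Adding a second namespace changes the subsystem's namespace list, which is
    # what triggers the "Changed Namespace" asynchronous event logged by aer_cb.
    ./scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
    wait "$aerpid"

The lines that follow are the matching cleanup (bdev_malloc_delete for both malloc bdevs, nvmf_delete_subsystem) and nvmftestfini tearing the transport and namespace back down.
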
00:29:35.121 09:29:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:29:35.121 09:29:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.121 09:29:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:35.379 09:29:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.379 09:29:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:29:35.379 09:29:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.379 09:29:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:35.379 09:29:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.379 09:29:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:35.379 09:29:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.379 09:29:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:35.379 09:29:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.379 09:29:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:29:35.379 09:29:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:29:35.379 09:29:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:35.379 09:29:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:29:35.379 09:29:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:35.379 09:29:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:29:35.379 09:29:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:35.379 09:29:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:35.379 rmmod nvme_tcp 00:29:35.379 rmmod nvme_fabrics 00:29:35.379 rmmod nvme_keyring 00:29:35.379 09:29:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:35.379 09:29:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:29:35.379 09:29:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:29:35.379 09:29:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 3066678 ']' 00:29:35.379 09:29:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 3066678 00:29:35.379 09:29:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 3066678 ']' 00:29:35.379 09:29:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 3066678 00:29:35.379 09:29:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:29:35.379 09:29:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:35.379 09:29:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3066678 00:29:35.637 09:29:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:35.637 09:29:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:35.637 09:29:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3066678' 00:29:35.637 killing process with pid 3066678 00:29:35.637 09:29:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # 
kill 3066678 00:29:35.637 09:29:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 3066678 00:29:36.575 09:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:36.575 09:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:36.575 09:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:36.575 09:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:29:36.575 09:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:29:36.575 09:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:36.575 09:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:29:36.575 09:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:36.575 09:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:36.575 09:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:36.575 09:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:36.575 09:29:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:39.108 09:29:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:39.108 00:29:39.108 real 0m7.892s 00:29:39.108 user 0m11.980s 00:29:39.108 sys 0m2.286s 00:29:39.108 09:29:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:39.108 09:29:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:39.108 ************************************ 00:29:39.108 END TEST nvmf_aer 00:29:39.108 ************************************ 00:29:39.108 09:29:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:29:39.108 09:29:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:39.108 09:29:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:39.108 09:29:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.108 ************************************ 00:29:39.108 START TEST nvmf_async_init 00:29:39.108 ************************************ 00:29:39.108 09:29:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:29:39.108 * Looking for test storage... 
00:29:39.108 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:39.108 09:29:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:39.108 09:29:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:29:39.108 09:29:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:39.108 09:29:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:39.108 09:29:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:39.108 09:29:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:39.108 09:29:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:39.109 09:29:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:29:39.109 09:29:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:29:39.109 09:29:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:29:39.109 09:29:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:29:39.109 09:29:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:29:39.109 09:29:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:29:39.109 09:29:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:29:39.109 09:29:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:39.109 09:29:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:29:39.109 09:29:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:29:39.109 09:29:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:39.109 09:29:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:39.109 09:29:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:29:39.109 09:29:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:29:39.109 09:29:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:39.109 09:29:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:29:39.109 09:29:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:29:39.109 09:29:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:29:39.109 09:29:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:29:39.109 09:29:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:39.109 09:29:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:29:39.109 09:29:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:29:39.109 09:29:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:39.109 09:29:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:39.109 09:29:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:29:39.109 09:29:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:39.109 09:29:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:39.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:39.109 --rc genhtml_branch_coverage=1 00:29:39.109 --rc genhtml_function_coverage=1 00:29:39.109 --rc genhtml_legend=1 00:29:39.109 --rc geninfo_all_blocks=1 00:29:39.109 --rc geninfo_unexecuted_blocks=1 00:29:39.109 00:29:39.109 ' 00:29:39.109 09:29:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:39.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:39.109 --rc genhtml_branch_coverage=1 00:29:39.109 --rc genhtml_function_coverage=1 00:29:39.109 --rc genhtml_legend=1 00:29:39.109 --rc geninfo_all_blocks=1 00:29:39.109 --rc geninfo_unexecuted_blocks=1 00:29:39.109 00:29:39.109 ' 00:29:39.109 09:29:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:39.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:39.109 --rc genhtml_branch_coverage=1 00:29:39.109 --rc genhtml_function_coverage=1 00:29:39.109 --rc genhtml_legend=1 00:29:39.109 --rc geninfo_all_blocks=1 00:29:39.109 --rc geninfo_unexecuted_blocks=1 00:29:39.109 00:29:39.109 ' 00:29:39.109 09:29:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:39.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:39.109 --rc genhtml_branch_coverage=1 00:29:39.109 --rc genhtml_function_coverage=1 00:29:39.109 --rc genhtml_legend=1 00:29:39.109 --rc geninfo_all_blocks=1 00:29:39.109 --rc geninfo_unexecuted_blocks=1 00:29:39.109 00:29:39.109 ' 00:29:39.109 09:29:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:39.109 09:29:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:29:39.109 09:29:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:39.109 09:29:43 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:39.109 09:29:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:39.109 09:29:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:39.109 09:29:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:39.109 09:29:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:39.109 09:29:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:39.109 09:29:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:39.109 09:29:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:39.109 09:29:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:39.109 09:29:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:39.109 09:29:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:39.109 09:29:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:39.109 09:29:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:39.109 09:29:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:39.109 09:29:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:39.109 09:29:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:39.109 09:29:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:29:39.109 09:29:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:39.109 09:29:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:39.109 09:29:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:39.109 09:29:43 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.109 09:29:43 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.109 09:29:43 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.109 09:29:43 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:29:39.109 09:29:43 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.109 09:29:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:29:39.109 09:29:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:39.109 09:29:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:39.109 09:29:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:39.109 09:29:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:39.109 09:29:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:39.109 09:29:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:39.109 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:39.109 09:29:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:39.109 09:29:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:39.109 09:29:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:39.110 09:29:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:29:39.110 09:29:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:29:39.110 09:29:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:29:39.110 09:29:43 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:29:39.110 09:29:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:29:39.110 09:29:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:29:39.110 09:29:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=172fafa4972049589adfeb4b9356abb0 00:29:39.110 09:29:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:29:39.110 09:29:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:39.110 09:29:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:39.110 09:29:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:39.110 09:29:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:39.110 09:29:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:39.110 09:29:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:39.110 09:29:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:39.110 09:29:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:39.110 09:29:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:39.110 09:29:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:39.110 09:29:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:29:39.110 09:29:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:41.012 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:41.012 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:29:41.012 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:41.012 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:41.012 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:41.012 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:41.012 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:41.012 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:29:41.012 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:41.012 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:29:41.012 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:29:41.012 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:29:41.012 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:29:41.012 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:29:41.012 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:29:41.012 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:41.012 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:41.012 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:41.012 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:41.012 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:41.012 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:41.012 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:41.012 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:41.012 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:41.012 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:41.012 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:41.012 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:41.012 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:41.012 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:41.012 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:41.012 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:41.012 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:41.012 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:41.012 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:41.012 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:41.012 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:41.012 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:41.012 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:41.012 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:41.012 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:41.012 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:41.012 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:41.012 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:41.012 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:41.012 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:41.012 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:41.012 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:41.012 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:41.012 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:41.012 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:41.012 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:41.012 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:41.012 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:41.012 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:41.012 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:41.012 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:41.012 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:41.012 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:41.012 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:41.012 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:41.012 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:41.012 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:41.012 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:41.012 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:41.012 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:41.012 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:41.012 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:41.012 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:41.012 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:41.012 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:41.012 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:41.012 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:41.012 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:41.012 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:29:41.012 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:41.012 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:41.012 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:41.012 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:41.012 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:41.012 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:41.012 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:41.012 09:29:45 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:41.012 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:41.012 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:41.012 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:41.012 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:41.012 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:41.012 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:41.012 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:41.012 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:41.012 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:41.012 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:41.012 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:41.012 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:41.012 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:41.013 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:41.013 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:41.013 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:41.013 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:41.013 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:41.013 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:41.013 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.291 ms 00:29:41.013 00:29:41.013 --- 10.0.0.2 ping statistics --- 00:29:41.013 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:41.013 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:29:41.013 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:41.013 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:41.013 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.068 ms 00:29:41.013 00:29:41.013 --- 10.0.0.1 ping statistics --- 00:29:41.013 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:41.013 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:29:41.013 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:41.013 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:29:41.013 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:41.013 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:41.013 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:41.013 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:41.013 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:41.013 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:41.013 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:41.013 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:29:41.013 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:41.013 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:41.013 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:41.013 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=3069034 00:29:41.013 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 3069034 00:29:41.013 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 3069034 ']' 00:29:41.013 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:41.013 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:41.013 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:41.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:41.013 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:29:41.013 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:41.013 09:29:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:41.272 [2024-11-17 09:29:46.085830] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:29:41.272 [2024-11-17 09:29:46.085973] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:41.272 [2024-11-17 09:29:46.239074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:41.530 [2024-11-17 09:29:46.379675] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:41.530 [2024-11-17 09:29:46.379793] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:41.530 [2024-11-17 09:29:46.379819] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:41.530 [2024-11-17 09:29:46.379845] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:41.530 [2024-11-17 09:29:46.379865] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:41.530 [2024-11-17 09:29:46.381521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:42.097 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:42.097 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:29:42.097 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:42.097 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:42.097 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:42.097 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:42.097 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:42.097 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.097 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:42.097 [2024-11-17 09:29:47.059203] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:42.097 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.097 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:29:42.097 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.097 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:42.097 null0 00:29:42.097 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.097 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:29:42.097 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.097 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:42.097 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.097 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:29:42.097 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:29:42.097 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:42.097 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.097 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 172fafa4972049589adfeb4b9356abb0 00:29:42.097 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.097 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:42.097 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.097 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:42.097 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.097 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:42.097 [2024-11-17 09:29:47.099549] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:42.097 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.097 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:29:42.097 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.097 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:42.355 nvme0n1 00:29:42.355 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.356 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:42.356 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.356 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:42.356 [ 00:29:42.356 { 00:29:42.356 "name": "nvme0n1", 00:29:42.356 "aliases": [ 00:29:42.356 "172fafa4-9720-4958-9adf-eb4b9356abb0" 00:29:42.356 ], 00:29:42.356 "product_name": "NVMe disk", 00:29:42.356 "block_size": 512, 00:29:42.356 "num_blocks": 2097152, 00:29:42.356 "uuid": "172fafa4-9720-4958-9adf-eb4b9356abb0", 00:29:42.356 "numa_id": 0, 00:29:42.356 "assigned_rate_limits": { 00:29:42.356 "rw_ios_per_sec": 0, 00:29:42.356 "rw_mbytes_per_sec": 0, 00:29:42.356 "r_mbytes_per_sec": 0, 00:29:42.356 "w_mbytes_per_sec": 0 00:29:42.356 }, 00:29:42.356 "claimed": false, 00:29:42.356 "zoned": false, 00:29:42.356 "supported_io_types": { 00:29:42.356 "read": true, 00:29:42.356 "write": true, 00:29:42.356 "unmap": false, 00:29:42.356 "flush": true, 00:29:42.356 "reset": true, 00:29:42.356 "nvme_admin": true, 00:29:42.356 "nvme_io": true, 00:29:42.356 "nvme_io_md": false, 00:29:42.356 "write_zeroes": true, 00:29:42.356 "zcopy": false, 00:29:42.356 "get_zone_info": false, 00:29:42.356 "zone_management": false, 00:29:42.356 "zone_append": false, 00:29:42.356 "compare": true, 00:29:42.356 "compare_and_write": true, 00:29:42.356 "abort": true, 00:29:42.356 "seek_hole": false, 00:29:42.356 "seek_data": false, 00:29:42.356 "copy": true, 00:29:42.356 "nvme_iov_md": false 00:29:42.356 }, 00:29:42.356 
"memory_domains": [ 00:29:42.356 { 00:29:42.356 "dma_device_id": "system", 00:29:42.356 "dma_device_type": 1 00:29:42.356 } 00:29:42.356 ], 00:29:42.356 "driver_specific": { 00:29:42.356 "nvme": [ 00:29:42.356 { 00:29:42.356 "trid": { 00:29:42.356 "trtype": "TCP", 00:29:42.356 "adrfam": "IPv4", 00:29:42.356 "traddr": "10.0.0.2", 00:29:42.356 "trsvcid": "4420", 00:29:42.356 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:42.356 }, 00:29:42.356 "ctrlr_data": { 00:29:42.356 "cntlid": 1, 00:29:42.356 "vendor_id": "0x8086", 00:29:42.356 "model_number": "SPDK bdev Controller", 00:29:42.356 "serial_number": "00000000000000000000", 00:29:42.356 "firmware_revision": "25.01", 00:29:42.356 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:42.356 "oacs": { 00:29:42.356 "security": 0, 00:29:42.356 "format": 0, 00:29:42.356 "firmware": 0, 00:29:42.356 "ns_manage": 0 00:29:42.356 }, 00:29:42.356 "multi_ctrlr": true, 00:29:42.356 "ana_reporting": false 00:29:42.356 }, 00:29:42.356 "vs": { 00:29:42.356 "nvme_version": "1.3" 00:29:42.356 }, 00:29:42.356 "ns_data": { 00:29:42.356 "id": 1, 00:29:42.356 "can_share": true 00:29:42.356 } 00:29:42.356 } 00:29:42.356 ], 00:29:42.356 "mp_policy": "active_passive" 00:29:42.356 } 00:29:42.356 } 00:29:42.356 ] 00:29:42.356 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.356 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:29:42.356 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.356 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:42.356 [2024-11-17 09:29:47.356273] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:42.356 [2024-11-17 09:29:47.356404] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:29:42.614 [2024-11-17 09:29:47.488633] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:29:42.614 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.614 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:42.614 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.614 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:42.614 [ 00:29:42.614 { 00:29:42.614 "name": "nvme0n1", 00:29:42.614 "aliases": [ 00:29:42.614 "172fafa4-9720-4958-9adf-eb4b9356abb0" 00:29:42.614 ], 00:29:42.614 "product_name": "NVMe disk", 00:29:42.614 "block_size": 512, 00:29:42.614 "num_blocks": 2097152, 00:29:42.614 "uuid": "172fafa4-9720-4958-9adf-eb4b9356abb0", 00:29:42.614 "numa_id": 0, 00:29:42.614 "assigned_rate_limits": { 00:29:42.614 "rw_ios_per_sec": 0, 00:29:42.614 "rw_mbytes_per_sec": 0, 00:29:42.614 "r_mbytes_per_sec": 0, 00:29:42.614 "w_mbytes_per_sec": 0 00:29:42.614 }, 00:29:42.614 "claimed": false, 00:29:42.614 "zoned": false, 00:29:42.614 "supported_io_types": { 00:29:42.614 "read": true, 00:29:42.614 "write": true, 00:29:42.614 "unmap": false, 00:29:42.614 "flush": true, 00:29:42.614 "reset": true, 00:29:42.614 "nvme_admin": true, 00:29:42.614 "nvme_io": true, 00:29:42.614 "nvme_io_md": false, 00:29:42.614 "write_zeroes": true, 00:29:42.614 "zcopy": false, 00:29:42.614 "get_zone_info": false, 00:29:42.614 "zone_management": false, 00:29:42.614 "zone_append": false, 00:29:42.614 "compare": true, 00:29:42.614 "compare_and_write": true, 00:29:42.614 "abort": true, 00:29:42.614 "seek_hole": false, 00:29:42.614 "seek_data": false, 00:29:42.614 "copy": true, 00:29:42.614 "nvme_iov_md": false 00:29:42.614 }, 00:29:42.614 "memory_domains": [ 00:29:42.614 { 00:29:42.614 "dma_device_id": "system", 00:29:42.614 "dma_device_type": 1 00:29:42.614 } 00:29:42.614 ], 00:29:42.614 "driver_specific": { 00:29:42.614 "nvme": [ 00:29:42.614 { 00:29:42.614 "trid": { 00:29:42.614 "trtype": "TCP", 00:29:42.614 "adrfam": "IPv4", 00:29:42.614 "traddr": "10.0.0.2", 00:29:42.614 "trsvcid": "4420", 00:29:42.615 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:42.615 }, 00:29:42.615 "ctrlr_data": { 00:29:42.615 "cntlid": 2, 00:29:42.615 "vendor_id": "0x8086", 00:29:42.615 "model_number": "SPDK bdev Controller", 00:29:42.615 "serial_number": "00000000000000000000", 00:29:42.615 "firmware_revision": "25.01", 00:29:42.615 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:42.615 "oacs": { 00:29:42.615 "security": 0, 00:29:42.615 "format": 0, 00:29:42.615 "firmware": 0, 00:29:42.615 "ns_manage": 0 00:29:42.615 }, 00:29:42.615 "multi_ctrlr": true, 00:29:42.615 "ana_reporting": false 00:29:42.615 }, 00:29:42.615 "vs": { 00:29:42.615 "nvme_version": "1.3" 00:29:42.615 }, 00:29:42.615 "ns_data": { 00:29:42.615 "id": 1, 00:29:42.615 "can_share": true 00:29:42.615 } 00:29:42.615 } 00:29:42.615 ], 00:29:42.615 "mp_policy": "active_passive" 00:29:42.615 } 00:29:42.615 } 00:29:42.615 ] 00:29:42.615 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.615 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:42.615 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.615 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:42.615 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
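The async_init flow traced above reduces to a short RPC conversation with the nvmf_tgt process started earlier inside the cvl_0_0_ns_spdk namespace, over the /var/tmp/spdk.sock socket shown in the startup message. A minimal sketch of that conversation, assuming SPDK's stock scripts/rpc.py client (the test itself goes through its rpc_cmd wrapper; every flag and value below is copied from the trace):

    # target side: TCP transport, null backing bdev, subsystem, namespace (NGUID from uuidgen), listener
    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py bdev_null_create null0 1024 512
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 172fafa4972049589adfeb4b9356abb0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # host side: attach over TCP, then confirm the bdev UUID matches the NGUID set above
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
    scripts/rpc.py bdev_get_bdevs -b nvme0n1

The bdev_get_bdevs dumps above reflect exactly that: the nvme0n1 "uuid" field is the dashed form of the NGUID, and "cntlid" increments from 1 to 2 across the bdev_nvme_reset_controller call.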
00:29:42.615 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:29:42.615 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.78f9OIja4j 00:29:42.615 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:29:42.615 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.78f9OIja4j 00:29:42.615 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.78f9OIja4j 00:29:42.615 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.615 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:42.615 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.615 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:29:42.615 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.615 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:42.615 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.615 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:29:42.615 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.615 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:42.615 [2024-11-17 09:29:47.545080] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:42.615 [2024-11-17 09:29:47.545321] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:42.615 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.615 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:29:42.615 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.615 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:42.615 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.615 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:29:42.615 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.615 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:42.615 [2024-11-17 09:29:47.561125] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:42.874 nvme0n1 00:29:42.874 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.874 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:29:42.874 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.874 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:42.874 [ 00:29:42.874 { 00:29:42.874 "name": "nvme0n1", 00:29:42.874 "aliases": [ 00:29:42.874 "172fafa4-9720-4958-9adf-eb4b9356abb0" 00:29:42.874 ], 00:29:42.874 "product_name": "NVMe disk", 00:29:42.874 "block_size": 512, 00:29:42.874 "num_blocks": 2097152, 00:29:42.874 "uuid": "172fafa4-9720-4958-9adf-eb4b9356abb0", 00:29:42.874 "numa_id": 0, 00:29:42.874 "assigned_rate_limits": { 00:29:42.874 "rw_ios_per_sec": 0, 00:29:42.874 "rw_mbytes_per_sec": 0, 00:29:42.874 "r_mbytes_per_sec": 0, 00:29:42.874 "w_mbytes_per_sec": 0 00:29:42.874 }, 00:29:42.874 "claimed": false, 00:29:42.874 "zoned": false, 00:29:42.874 "supported_io_types": { 00:29:42.874 "read": true, 00:29:42.874 "write": true, 00:29:42.874 "unmap": false, 00:29:42.874 "flush": true, 00:29:42.874 "reset": true, 00:29:42.874 "nvme_admin": true, 00:29:42.874 "nvme_io": true, 00:29:42.874 "nvme_io_md": false, 00:29:42.874 "write_zeroes": true, 00:29:42.874 "zcopy": false, 00:29:42.874 "get_zone_info": false, 00:29:42.874 "zone_management": false, 00:29:42.874 "zone_append": false, 00:29:42.874 "compare": true, 00:29:42.874 "compare_and_write": true, 00:29:42.874 "abort": true, 00:29:42.874 "seek_hole": false, 00:29:42.874 "seek_data": false, 00:29:42.874 "copy": true, 00:29:42.874 "nvme_iov_md": false 00:29:42.874 }, 00:29:42.874 "memory_domains": [ 00:29:42.874 { 00:29:42.874 "dma_device_id": "system", 00:29:42.874 "dma_device_type": 1 00:29:42.874 } 00:29:42.874 ], 00:29:42.874 "driver_specific": { 00:29:42.874 "nvme": [ 00:29:42.874 { 00:29:42.874 "trid": { 00:29:42.874 "trtype": "TCP", 00:29:42.874 "adrfam": "IPv4", 00:29:42.874 "traddr": "10.0.0.2", 00:29:42.874 "trsvcid": "4421", 00:29:42.874 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:42.874 }, 00:29:42.874 "ctrlr_data": { 00:29:42.874 "cntlid": 3, 00:29:42.874 "vendor_id": "0x8086", 00:29:42.874 "model_number": "SPDK bdev Controller", 00:29:42.874 "serial_number": "00000000000000000000", 00:29:42.874 "firmware_revision": "25.01", 00:29:42.874 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:42.874 "oacs": { 00:29:42.874 "security": 0, 00:29:42.874 "format": 0, 00:29:42.874 "firmware": 0, 00:29:42.874 "ns_manage": 0 00:29:42.874 }, 00:29:42.874 "multi_ctrlr": true, 00:29:42.874 "ana_reporting": false 00:29:42.874 }, 00:29:42.874 "vs": { 00:29:42.874 "nvme_version": "1.3" 00:29:42.874 }, 00:29:42.874 "ns_data": { 00:29:42.874 "id": 1, 00:29:42.874 "can_share": true 00:29:42.874 } 00:29:42.874 } 00:29:42.874 ], 00:29:42.874 "mp_policy": "active_passive" 00:29:42.874 } 00:29:42.874 } 00:29:42.874 ] 00:29:42.874 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.874 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:42.874 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.874 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:42.874 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.874 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.78f9OIja4j 00:29:42.874 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
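The TLS leg exercised just above reuses the same subsystem, adding a file-based PSK and a secure-channel listener on port 4421. A sketch under the same assumptions (stock scripts/rpc.py instead of rpc_cmd; the key path comes from the mktemp output in the trace, and writing the echoed key string into that file is an inference from the chmod/keyring steps the script performs):

    # PSK written to a mode-0600 file and registered with the keyring
    echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > /tmp/tmp.78f9OIja4j
    chmod 0600 /tmp/tmp.78f9OIja4j
    scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.78f9OIja4j
    # restrict the subsystem to an explicit host and open a TLS-only listener
    scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
    # host side: attach with the matching hostnqn and PSK
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0

Both the listen and attach paths log "TLS support is considered experimental", and the final bdev_get_bdevs dump shows the controller reconnected on trsvcid 4421 with cntlid 3.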
00:29:42.874 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:29:42.874 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:42.874 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:29:42.874 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:42.874 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:29:42.874 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:42.874 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:42.874 rmmod nvme_tcp 00:29:42.874 rmmod nvme_fabrics 00:29:42.874 rmmod nvme_keyring 00:29:42.874 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:42.874 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:29:42.874 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:29:42.874 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 3069034 ']' 00:29:42.874 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 3069034 00:29:42.874 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 3069034 ']' 00:29:42.874 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 3069034 00:29:42.874 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:29:42.874 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:42.874 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3069034 00:29:42.874 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:42.874 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:42.874 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3069034' 00:29:42.874 killing process with pid 3069034 00:29:42.874 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 3069034 00:29:42.874 09:29:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 3069034 00:29:44.248 09:29:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:44.248 09:29:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:44.248 09:29:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:44.248 09:29:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:29:44.248 09:29:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:29:44.248 09:29:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:44.248 09:29:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:29:44.248 09:29:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:44.248 09:29:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:44.248 09:29:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:29:44.248 09:29:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:44.248 09:29:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:46.153 09:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:46.153 00:29:46.153 real 0m7.354s 00:29:46.153 user 0m3.955s 00:29:46.153 sys 0m2.084s 00:29:46.153 09:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:46.153 09:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:46.153 ************************************ 00:29:46.153 END TEST nvmf_async_init 00:29:46.153 ************************************ 00:29:46.153 09:29:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:29:46.153 09:29:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:46.153 09:29:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:46.153 09:29:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.153 ************************************ 00:29:46.153 START TEST dma 00:29:46.153 ************************************ 00:29:46.153 09:29:51 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:29:46.153 * Looking for test storage... 00:29:46.153 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:46.153 09:29:51 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:46.153 09:29:51 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version 00:29:46.153 09:29:51 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:46.153 09:29:51 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:46.153 09:29:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:46.153 09:29:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:46.153 09:29:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:46.153 09:29:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:29:46.153 09:29:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:29:46.153 09:29:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:29:46.153 09:29:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:29:46.153 09:29:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:29:46.153 09:29:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:29:46.153 09:29:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:29:46.153 09:29:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:46.153 09:29:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:29:46.153 09:29:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:29:46.153 09:29:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:46.153 09:29:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:46.153 09:29:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:29:46.153 09:29:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:29:46.153 09:29:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:46.153 09:29:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:29:46.153 09:29:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:29:46.153 09:29:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:29:46.153 09:29:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:29:46.153 09:29:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:46.153 09:29:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:29:46.153 09:29:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:29:46.153 09:29:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:46.153 09:29:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:46.153 09:29:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:29:46.153 09:29:51 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:46.153 09:29:51 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:46.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:46.153 --rc genhtml_branch_coverage=1 00:29:46.153 --rc genhtml_function_coverage=1 00:29:46.153 --rc genhtml_legend=1 00:29:46.153 --rc geninfo_all_blocks=1 00:29:46.153 --rc geninfo_unexecuted_blocks=1 00:29:46.153 00:29:46.153 ' 00:29:46.153 09:29:51 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:46.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:46.153 --rc genhtml_branch_coverage=1 00:29:46.153 --rc genhtml_function_coverage=1 00:29:46.153 --rc genhtml_legend=1 00:29:46.153 --rc geninfo_all_blocks=1 00:29:46.153 --rc geninfo_unexecuted_blocks=1 00:29:46.153 00:29:46.153 ' 00:29:46.153 09:29:51 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:46.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:46.153 --rc genhtml_branch_coverage=1 00:29:46.153 --rc genhtml_function_coverage=1 00:29:46.153 --rc genhtml_legend=1 00:29:46.153 --rc geninfo_all_blocks=1 00:29:46.153 --rc geninfo_unexecuted_blocks=1 00:29:46.153 00:29:46.153 ' 00:29:46.153 09:29:51 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:46.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:46.153 --rc genhtml_branch_coverage=1 00:29:46.153 --rc genhtml_function_coverage=1 00:29:46.153 --rc genhtml_legend=1 00:29:46.153 --rc geninfo_all_blocks=1 00:29:46.153 --rc geninfo_unexecuted_blocks=1 00:29:46.153 00:29:46.153 ' 00:29:46.153 09:29:51 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:46.413 09:29:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:29:46.413 09:29:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:46.413 09:29:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:46.413 09:29:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:46.413 09:29:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:46.413 
09:29:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:46.413 09:29:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:46.413 09:29:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:46.413 09:29:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:46.413 09:29:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:46.413 09:29:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:46.413 09:29:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:46.413 09:29:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:46.413 09:29:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:46.413 09:29:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:46.413 09:29:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:46.413 09:29:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:46.413 09:29:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:46.413 09:29:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:29:46.413 09:29:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:46.413 09:29:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:46.413 09:29:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:46.413 09:29:51 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:46.413 09:29:51 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:46.413 09:29:51 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:46.413 09:29:51 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:29:46.413 09:29:51 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:46.413 09:29:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:29:46.413 09:29:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:46.413 09:29:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:46.413 09:29:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:46.413 09:29:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:46.413 09:29:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:46.413 09:29:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:46.413 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:46.413 09:29:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:46.413 09:29:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:46.413 09:29:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:46.413 09:29:51 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:29:46.413 09:29:51 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:29:46.413 00:29:46.413 real 0m0.149s 00:29:46.413 user 0m0.099s 00:29:46.413 sys 0m0.058s 00:29:46.413 09:29:51 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:46.413 09:29:51 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:29:46.413 ************************************ 00:29:46.413 END TEST dma 00:29:46.413 ************************************ 00:29:46.413 09:29:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:29:46.413 09:29:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:46.413 09:29:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:46.413 09:29:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.413 ************************************ 00:29:46.413 START TEST nvmf_identify 00:29:46.413 
************************************ 00:29:46.413 09:29:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:29:46.413 * Looking for test storage... 00:29:46.413 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:46.413 09:29:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:46.414 09:29:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:29:46.414 09:29:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:46.414 09:29:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:46.414 09:29:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:46.414 09:29:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:46.414 09:29:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:46.414 09:29:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:29:46.414 09:29:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:29:46.414 09:29:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:29:46.414 09:29:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:29:46.414 09:29:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:29:46.414 09:29:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:29:46.414 09:29:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:29:46.414 09:29:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:46.414 09:29:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:29:46.414 09:29:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:29:46.414 09:29:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:46.414 09:29:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:46.414 09:29:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:29:46.414 09:29:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:29:46.414 09:29:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:46.414 09:29:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:29:46.414 09:29:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:29:46.414 09:29:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:29:46.414 09:29:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:29:46.414 09:29:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:46.414 09:29:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:29:46.414 09:29:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:29:46.414 09:29:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:46.414 09:29:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:46.414 09:29:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:29:46.414 09:29:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:46.414 09:29:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:46.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:46.414 --rc genhtml_branch_coverage=1 00:29:46.414 --rc genhtml_function_coverage=1 00:29:46.414 --rc genhtml_legend=1 00:29:46.414 --rc geninfo_all_blocks=1 00:29:46.414 --rc geninfo_unexecuted_blocks=1 00:29:46.414 00:29:46.414 ' 00:29:46.414 09:29:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:46.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:46.414 --rc genhtml_branch_coverage=1 00:29:46.414 --rc genhtml_function_coverage=1 00:29:46.414 --rc genhtml_legend=1 00:29:46.414 --rc geninfo_all_blocks=1 00:29:46.414 --rc geninfo_unexecuted_blocks=1 00:29:46.414 00:29:46.414 ' 00:29:46.414 09:29:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:46.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:46.414 --rc genhtml_branch_coverage=1 00:29:46.414 --rc genhtml_function_coverage=1 00:29:46.414 --rc genhtml_legend=1 00:29:46.414 --rc geninfo_all_blocks=1 00:29:46.414 --rc geninfo_unexecuted_blocks=1 00:29:46.414 00:29:46.414 ' 00:29:46.414 09:29:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:46.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:46.414 --rc genhtml_branch_coverage=1 00:29:46.414 --rc genhtml_function_coverage=1 00:29:46.414 --rc genhtml_legend=1 00:29:46.414 --rc geninfo_all_blocks=1 00:29:46.414 --rc geninfo_unexecuted_blocks=1 00:29:46.414 00:29:46.414 ' 00:29:46.414 09:29:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:46.414 09:29:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:29:46.414 09:29:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:46.414 09:29:51 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:46.414 09:29:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:46.414 09:29:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:46.414 09:29:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:46.414 09:29:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:46.414 09:29:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:46.414 09:29:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:46.414 09:29:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:46.414 09:29:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:46.414 09:29:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:46.414 09:29:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:46.414 09:29:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:46.414 09:29:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:46.414 09:29:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:46.414 09:29:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:46.414 09:29:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:46.414 09:29:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:29:46.414 09:29:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:46.414 09:29:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:46.414 09:29:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:46.414 09:29:51 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:46.414 09:29:51 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:46.414 09:29:51 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:46.414 09:29:51 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:29:46.414 09:29:51 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:46.414 09:29:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:29:46.414 09:29:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:46.414 09:29:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:46.414 09:29:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:46.414 09:29:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:46.414 09:29:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:46.414 09:29:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:46.414 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:46.414 09:29:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:46.414 09:29:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:46.414 09:29:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:46.414 09:29:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:46.414 09:29:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:46.414 09:29:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:29:46.414 09:29:51 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:46.414 09:29:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:46.414 09:29:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:46.414 09:29:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:46.414 09:29:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:46.414 09:29:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:46.415 09:29:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:46.415 09:29:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:46.415 09:29:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:46.415 09:29:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:46.415 09:29:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:29:46.415 09:29:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:48.947 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:48.947 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:29:48.947 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:48.947 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:48.947 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:48.947 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:48.947 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:48.947 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:29:48.947 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:48.947 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:29:48.947 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:29:48.947 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:29:48.948 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:29:48.948 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:29:48.948 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:29:48.948 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:48.948 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:48.948 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:48.948 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:48.948 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:48.948 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:48.948 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:48.948 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:48.948 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:48.948 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:48.948 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:48.948 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:48.948 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:48.948 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:48.948 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:48.948 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:48.948 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:48.948 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:48.948 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:48.948 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:48.948 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:48.948 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:48.948 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:48.948 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:48.948 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:48.948 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:48.948 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:48.948 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:48.948 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:48.948 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:48.948 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:48.948 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:48.948 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:48.948 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:48.948 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:48.948 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:48.948 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:48.948 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:48.948 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:48.948 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
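A minimal sketch (not the harness code in test/nvmf/common.sh) of the PCI-to-netdev mapping that the gather_supported_nvmf_pci_devs trace above performs; it matches only the 8086:0x159b devices reported in this log and resolves each to its kernel interface through the same /sys/bus/pci/devices/$pci/net/ path the trace uses:

# Sketch: find E810 NICs (vendor 0x8086, device 0x159b as logged above) and
# list the net interfaces sysfs exposes for each PCI function.
for pci in /sys/bus/pci/devices/*; do
    [[ $(cat "$pci/vendor") == 0x8086 && $(cat "$pci/device") == 0x159b ]] || continue
    net_devs=("$pci"/net/*)                     # e.g. .../0000:0a:00.0/net/cvl_0_0
    echo "Found ${pci##*/} -> ${net_devs[@]##*/}"
done

The two interfaces this yields here, cvl_0_0 and cvl_0_1, are the ones the namespace setup later in the trace moves and addresses (cvl_0_0 into cvl_0_0_ns_spdk as 10.0.0.2, cvl_0_1 left in the default namespace as 10.0.0.1).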
00:29:48.948 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:48.948 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:48.948 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:48.948 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:48.948 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:48.948 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:48.948 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:48.948 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:48.948 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:48.948 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:48.948 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:48.948 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:48.948 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:48.948 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:48.948 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:48.948 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:48.948 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:48.948 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:48.948 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:29:48.948 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:48.948 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:48.948 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:48.948 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:48.948 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:48.948 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:48.948 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:48.948 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:48.948 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:48.948 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:48.948 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:48.948 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:48.948 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:48.948 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:29:48.948 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:48.948 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:48.948 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:48.948 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:48.948 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:48.948 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:48.948 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:48.948 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:48.948 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:48.948 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:48.948 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:48.948 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:48.948 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:48.948 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:29:48.948 00:29:48.948 --- 10.0.0.2 ping statistics --- 00:29:48.948 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:48.948 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:29:48.948 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:48.948 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:48.948 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:29:48.948 00:29:48.948 --- 10.0.0.1 ping statistics --- 00:29:48.948 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:48.948 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:29:48.949 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:48.949 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:29:48.949 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:48.949 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:48.949 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:48.949 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:48.949 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:48.949 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:48.949 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:48.949 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:29:48.949 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:48.949 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:48.949 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3071431 00:29:48.949 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:48.949 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:48.949 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3071431 00:29:48.949 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 3071431 ']' 00:29:48.949 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:48.949 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:48.949 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:48.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:48.949 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:48.949 09:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:48.949 [2024-11-17 09:29:53.609406] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:29:48.949 [2024-11-17 09:29:53.609541] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:48.949 [2024-11-17 09:29:53.754278] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:48.949 [2024-11-17 09:29:53.877528] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:48.949 [2024-11-17 09:29:53.877602] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:48.949 [2024-11-17 09:29:53.877624] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:48.949 [2024-11-17 09:29:53.877644] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:48.949 [2024-11-17 09:29:53.877675] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:48.949 [2024-11-17 09:29:53.880172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:48.949 [2024-11-17 09:29:53.880238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:48.949 [2024-11-17 09:29:53.880284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:48.949 [2024-11-17 09:29:53.880290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:49.883 09:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:49.883 09:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:29:49.883 09:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:49.883 09:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.883 09:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:49.883 [2024-11-17 09:29:54.624023] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:49.883 09:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.883 09:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:29:49.883 09:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:49.883 09:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:49.883 09:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:49.883 09:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.883 09:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:49.883 Malloc0 00:29:49.883 09:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.883 09:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:49.883 09:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.883 09:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:49.883 09:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.883 09:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:29:49.883 09:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.883 09:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:49.883 09:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.883 09:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:49.883 09:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.883 09:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:49.883 [2024-11-17 09:29:54.766755] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:49.883 09:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.883 09:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:49.883 09:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.883 09:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:49.883 09:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.883 09:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:29:49.883 09:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.883 09:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:49.883 [ 00:29:49.883 { 00:29:49.883 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:49.883 "subtype": "Discovery", 00:29:49.883 "listen_addresses": [ 00:29:49.883 { 00:29:49.883 "trtype": "TCP", 00:29:49.883 "adrfam": "IPv4", 00:29:49.883 "traddr": "10.0.0.2", 00:29:49.883 "trsvcid": "4420" 00:29:49.883 } 00:29:49.883 ], 00:29:49.883 "allow_any_host": true, 00:29:49.883 "hosts": [] 00:29:49.883 }, 00:29:49.883 { 00:29:49.883 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:49.883 "subtype": "NVMe", 00:29:49.883 "listen_addresses": [ 00:29:49.883 { 00:29:49.883 "trtype": "TCP", 00:29:49.883 "adrfam": "IPv4", 00:29:49.883 "traddr": "10.0.0.2", 00:29:49.883 "trsvcid": "4420" 00:29:49.883 } 00:29:49.883 ], 00:29:49.883 "allow_any_host": true, 00:29:49.883 "hosts": [], 00:29:49.883 "serial_number": "SPDK00000000000001", 00:29:49.883 "model_number": "SPDK bdev Controller", 00:29:49.883 "max_namespaces": 32, 00:29:49.883 "min_cntlid": 1, 00:29:49.883 "max_cntlid": 65519, 00:29:49.883 "namespaces": [ 00:29:49.883 { 00:29:49.883 "nsid": 1, 00:29:49.883 "bdev_name": "Malloc0", 00:29:49.883 "name": "Malloc0", 00:29:49.883 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:29:49.883 "eui64": "ABCDEF0123456789", 00:29:49.883 "uuid": "a8a5f096-6e4e-4d6a-86ee-7e6fad94259d" 00:29:49.883 } 00:29:49.883 ] 00:29:49.883 } 00:29:49.883 ] 00:29:49.883 09:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.883 09:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:29:49.883 [2024-11-17 09:29:54.834634] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:29:49.883 [2024-11-17 09:29:54.834760] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3071586 ] 00:29:50.144 [2024-11-17 09:29:54.916097] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:29:50.144 [2024-11-17 09:29:54.916223] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:29:50.144 [2024-11-17 09:29:54.916245] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:29:50.144 [2024-11-17 09:29:54.916280] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:29:50.144 [2024-11-17 09:29:54.916304] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:29:50.144 [2024-11-17 09:29:54.917099] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:29:50.144 [2024-11-17 09:29:54.917185] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x615000015700 0 00:29:50.144 [2024-11-17 09:29:54.930396] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:29:50.144 [2024-11-17 09:29:54.930434] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:29:50.144 [2024-11-17 09:29:54.930452] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:29:50.144 [2024-11-17 09:29:54.930463] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:29:50.144 [2024-11-17 09:29:54.930535] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.144 [2024-11-17 09:29:54.930555] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.144 [2024-11-17 09:29:54.930569] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:29:50.144 [2024-11-17 09:29:54.930606] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:29:50.144 [2024-11-17 09:29:54.930645] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:50.144 [2024-11-17 09:29:54.937403] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.144 [2024-11-17 09:29:54.937431] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.144 [2024-11-17 09:29:54.937443] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.144 [2024-11-17 09:29:54.937457] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:29:50.144 [2024-11-17 09:29:54.937489] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:29:50.144 [2024-11-17 09:29:54.937514] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:29:50.144 [2024-11-17 09:29:54.937531] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:29:50.144 [2024-11-17 
09:29:54.937558] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.144 [2024-11-17 09:29:54.937574] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.144 [2024-11-17 09:29:54.937594] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:29:50.144 [2024-11-17 09:29:54.937617] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.144 [2024-11-17 09:29:54.937653] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:50.144 [2024-11-17 09:29:54.937848] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.144 [2024-11-17 09:29:54.937877] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.144 [2024-11-17 09:29:54.937892] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.144 [2024-11-17 09:29:54.937904] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:29:50.144 [2024-11-17 09:29:54.937926] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:29:50.144 [2024-11-17 09:29:54.937951] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:29:50.144 [2024-11-17 09:29:54.937993] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.144 [2024-11-17 09:29:54.938007] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.144 [2024-11-17 09:29:54.938019] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:29:50.144 [2024-11-17 09:29:54.938046] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.144 [2024-11-17 09:29:54.938080] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:50.144 [2024-11-17 09:29:54.938238] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.144 [2024-11-17 09:29:54.938266] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.144 [2024-11-17 09:29:54.938280] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.144 [2024-11-17 09:29:54.938291] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:29:50.144 [2024-11-17 09:29:54.938308] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:29:50.144 [2024-11-17 09:29:54.938333] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:29:50.144 [2024-11-17 09:29:54.938355] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.144 [2024-11-17 09:29:54.938387] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.144 [2024-11-17 09:29:54.938402] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:29:50.144 [2024-11-17 09:29:54.938427] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.144 [2024-11-17 09:29:54.938461] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:50.144 [2024-11-17 09:29:54.938564] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.144 [2024-11-17 09:29:54.938586] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.144 [2024-11-17 09:29:54.938603] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.144 [2024-11-17 09:29:54.938617] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:29:50.144 [2024-11-17 09:29:54.938633] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:29:50.144 [2024-11-17 09:29:54.938662] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.144 [2024-11-17 09:29:54.938679] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.144 [2024-11-17 09:29:54.938691] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:29:50.144 [2024-11-17 09:29:54.938711] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.144 [2024-11-17 09:29:54.938743] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:50.144 [2024-11-17 09:29:54.938884] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.144 [2024-11-17 09:29:54.938906] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.144 [2024-11-17 09:29:54.938918] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.144 [2024-11-17 09:29:54.938929] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:29:50.144 [2024-11-17 09:29:54.938946] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:29:50.144 [2024-11-17 09:29:54.938962] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:29:50.144 [2024-11-17 09:29:54.938991] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:29:50.144 [2024-11-17 09:29:54.939109] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:29:50.144 [2024-11-17 09:29:54.939123] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:29:50.144 [2024-11-17 09:29:54.939147] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.144 [2024-11-17 09:29:54.939180] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.144 [2024-11-17 09:29:54.939197] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:29:50.144 [2024-11-17 09:29:54.939217] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.145 [2024-11-17 09:29:54.939248] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:50.145 [2024-11-17 09:29:54.939403] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.145 [2024-11-17 09:29:54.939430] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.145 [2024-11-17 09:29:54.939443] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.145 [2024-11-17 09:29:54.939454] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:29:50.145 [2024-11-17 09:29:54.939470] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:29:50.145 [2024-11-17 09:29:54.939503] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.145 [2024-11-17 09:29:54.939520] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.145 [2024-11-17 09:29:54.939532] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:29:50.145 [2024-11-17 09:29:54.939552] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.145 [2024-11-17 09:29:54.939584] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:50.145 [2024-11-17 09:29:54.939719] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.145 [2024-11-17 09:29:54.939739] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.145 [2024-11-17 09:29:54.939766] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.145 [2024-11-17 09:29:54.939778] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:29:50.145 [2024-11-17 09:29:54.939792] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:29:50.145 [2024-11-17 09:29:54.939808] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:29:50.145 [2024-11-17 09:29:54.939831] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:29:50.145 [2024-11-17 09:29:54.939858] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:29:50.145 [2024-11-17 09:29:54.939909] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.145 [2024-11-17 09:29:54.939926] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:29:50.145 [2024-11-17 09:29:54.939952] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.145 [2024-11-17 09:29:54.939983] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:50.145 [2024-11-17 09:29:54.940202] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:50.145 [2024-11-17 09:29:54.940224] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:50.145 [2024-11-17 09:29:54.940236] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:50.145 [2024-11-17 09:29:54.940248] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info 
on tqpair(0x615000015700): datao=0, datal=4096, cccid=0 00:29:50.145 [2024-11-17 09:29:54.940262] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:29:50.145 [2024-11-17 09:29:54.940275] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.145 [2024-11-17 09:29:54.940306] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:50.145 [2024-11-17 09:29:54.940329] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:50.145 [2024-11-17 09:29:54.940360] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.145 [2024-11-17 09:29:54.940387] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.145 [2024-11-17 09:29:54.940400] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.145 [2024-11-17 09:29:54.940411] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:29:50.145 [2024-11-17 09:29:54.940436] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:29:50.145 [2024-11-17 09:29:54.940455] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:29:50.145 [2024-11-17 09:29:54.940468] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:29:50.145 [2024-11-17 09:29:54.940489] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:29:50.145 [2024-11-17 09:29:54.940503] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:29:50.145 [2024-11-17 09:29:54.940522] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:29:50.145 [2024-11-17 09:29:54.940567] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:29:50.145 [2024-11-17 09:29:54.940588] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.145 [2024-11-17 09:29:54.940606] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.145 [2024-11-17 09:29:54.940619] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:29:50.145 [2024-11-17 09:29:54.940643] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:50.145 [2024-11-17 09:29:54.940689] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:50.145 [2024-11-17 09:29:54.940838] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.145 [2024-11-17 09:29:54.940869] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.145 [2024-11-17 09:29:54.940882] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.145 [2024-11-17 09:29:54.940894] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:29:50.145 [2024-11-17 09:29:54.940914] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.145 [2024-11-17 09:29:54.940934] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.145 [2024-11-17 09:29:54.940945] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:29:50.145 [2024-11-17 09:29:54.940965] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:50.145 [2024-11-17 09:29:54.940983] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.145 [2024-11-17 09:29:54.940994] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.145 [2024-11-17 09:29:54.941005] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x615000015700) 00:29:50.145 [2024-11-17 09:29:54.941021] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:50.145 [2024-11-17 09:29:54.941057] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.145 [2024-11-17 09:29:54.941072] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.145 [2024-11-17 09:29:54.941082] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x615000015700) 00:29:50.145 [2024-11-17 09:29:54.941098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:50.145 [2024-11-17 09:29:54.941129] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.145 [2024-11-17 09:29:54.941149] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.145 [2024-11-17 09:29:54.941160] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:29:50.145 [2024-11-17 09:29:54.941176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:50.145 [2024-11-17 09:29:54.941190] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:29:50.145 [2024-11-17 09:29:54.941218] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:29:50.145 [2024-11-17 09:29:54.941238] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.145 [2024-11-17 09:29:54.941252] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:29:50.145 [2024-11-17 09:29:54.941274] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.145 [2024-11-17 09:29:54.941308] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:50.145 [2024-11-17 09:29:54.941340] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:29:50.145 [2024-11-17 09:29:54.941353] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:29:50.145 [2024-11-17 09:29:54.941365] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:29:50.145 [2024-11-17 09:29:54.945399] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:29:50.145 [2024-11-17 09:29:54.945420] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.145 [2024-11-17 09:29:54.945438] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.145 [2024-11-17 09:29:54.945449] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.145 [2024-11-17 09:29:54.945459] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:29:50.145 [2024-11-17 09:29:54.945475] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:29:50.145 [2024-11-17 09:29:54.945497] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:29:50.145 [2024-11-17 09:29:54.945549] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.145 [2024-11-17 09:29:54.945567] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:29:50.145 [2024-11-17 09:29:54.945587] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.145 [2024-11-17 09:29:54.945619] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:29:50.145 [2024-11-17 09:29:54.945780] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:50.145 [2024-11-17 09:29:54.945804] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:50.145 [2024-11-17 09:29:54.945817] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:50.145 [2024-11-17 09:29:54.945829] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=4 00:29:50.145 [2024-11-17 09:29:54.945848] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:29:50.145 [2024-11-17 09:29:54.945861] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.145 [2024-11-17 09:29:54.945895] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:50.145 [2024-11-17 09:29:54.945912] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:50.145 [2024-11-17 09:29:54.989400] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.145 [2024-11-17 09:29:54.989434] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.146 [2024-11-17 09:29:54.989448] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.146 [2024-11-17 09:29:54.989461] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:29:50.146 [2024-11-17 09:29:54.989500] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:29:50.146 [2024-11-17 09:29:54.989571] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.146 [2024-11-17 09:29:54.989589] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:29:50.146 [2024-11-17 09:29:54.989611] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.146 [2024-11-17 09:29:54.989639] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: 
enter 00:29:50.146 [2024-11-17 09:29:54.989653] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.146 [2024-11-17 09:29:54.989664] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:29:50.146 [2024-11-17 09:29:54.989682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:29:50.146 [2024-11-17 09:29:54.989732] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:29:50.146 [2024-11-17 09:29:54.989766] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:29:50.146 [2024-11-17 09:29:54.990027] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:50.146 [2024-11-17 09:29:54.990050] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:50.146 [2024-11-17 09:29:54.990079] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:50.146 [2024-11-17 09:29:54.990091] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=1024, cccid=4 00:29:50.146 [2024-11-17 09:29:54.990104] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=1024 00:29:50.146 [2024-11-17 09:29:54.990116] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.146 [2024-11-17 09:29:54.990145] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:50.146 [2024-11-17 09:29:54.990160] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:50.146 [2024-11-17 09:29:54.990176] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.146 [2024-11-17 09:29:54.990192] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.146 [2024-11-17 09:29:54.990203] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.146 [2024-11-17 09:29:54.990215] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:29:50.146 [2024-11-17 09:29:55.030476] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.146 [2024-11-17 09:29:55.030520] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.146 [2024-11-17 09:29:55.030535] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.146 [2024-11-17 09:29:55.030547] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:29:50.146 [2024-11-17 09:29:55.030593] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.146 [2024-11-17 09:29:55.030613] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:29:50.146 [2024-11-17 09:29:55.030636] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.146 [2024-11-17 09:29:55.030681] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:29:50.146 [2024-11-17 09:29:55.030860] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:50.146 [2024-11-17 09:29:55.030883] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:50.146 [2024-11-17 09:29:55.030895] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:50.146 
[2024-11-17 09:29:55.030911] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=3072, cccid=4 00:29:50.146 [2024-11-17 09:29:55.030925] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=3072 00:29:50.146 [2024-11-17 09:29:55.030936] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.146 [2024-11-17 09:29:55.030966] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:50.146 [2024-11-17 09:29:55.030982] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:50.146 [2024-11-17 09:29:55.031001] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.146 [2024-11-17 09:29:55.031018] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.146 [2024-11-17 09:29:55.031043] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.146 [2024-11-17 09:29:55.031056] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:29:50.146 [2024-11-17 09:29:55.031085] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.146 [2024-11-17 09:29:55.031103] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:29:50.146 [2024-11-17 09:29:55.031131] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.146 [2024-11-17 09:29:55.031174] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:29:50.146 [2024-11-17 09:29:55.031359] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:50.146 [2024-11-17 09:29:55.031391] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:50.146 [2024-11-17 09:29:55.031404] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:50.146 [2024-11-17 09:29:55.031415] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=8, cccid=4 00:29:50.146 [2024-11-17 09:29:55.031427] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=8 00:29:50.146 [2024-11-17 09:29:55.031438] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.146 [2024-11-17 09:29:55.031481] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:50.146 [2024-11-17 09:29:55.031496] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:50.146 [2024-11-17 09:29:55.075393] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.146 [2024-11-17 09:29:55.075438] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.146 [2024-11-17 09:29:55.075452] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.146 [2024-11-17 09:29:55.075464] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:29:50.146 ===================================================== 00:29:50.146 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:29:50.146 ===================================================== 00:29:50.146 Controller Capabilities/Features 00:29:50.146 ================================ 00:29:50.146 Vendor ID: 0000 00:29:50.146 Subsystem Vendor ID: 0000 
00:29:50.146 Serial Number: .................... 00:29:50.146 Model Number: ........................................ 00:29:50.146 Firmware Version: 25.01 00:29:50.146 Recommended Arb Burst: 0 00:29:50.146 IEEE OUI Identifier: 00 00 00 00:29:50.146 Multi-path I/O 00:29:50.146 May have multiple subsystem ports: No 00:29:50.146 May have multiple controllers: No 00:29:50.146 Associated with SR-IOV VF: No 00:29:50.146 Max Data Transfer Size: 131072 00:29:50.146 Max Number of Namespaces: 0 00:29:50.146 Max Number of I/O Queues: 1024 00:29:50.146 NVMe Specification Version (VS): 1.3 00:29:50.146 NVMe Specification Version (Identify): 1.3 00:29:50.146 Maximum Queue Entries: 128 00:29:50.146 Contiguous Queues Required: Yes 00:29:50.146 Arbitration Mechanisms Supported 00:29:50.146 Weighted Round Robin: Not Supported 00:29:50.146 Vendor Specific: Not Supported 00:29:50.146 Reset Timeout: 15000 ms 00:29:50.146 Doorbell Stride: 4 bytes 00:29:50.146 NVM Subsystem Reset: Not Supported 00:29:50.146 Command Sets Supported 00:29:50.146 NVM Command Set: Supported 00:29:50.146 Boot Partition: Not Supported 00:29:50.146 Memory Page Size Minimum: 4096 bytes 00:29:50.146 Memory Page Size Maximum: 4096 bytes 00:29:50.146 Persistent Memory Region: Not Supported 00:29:50.146 Optional Asynchronous Events Supported 00:29:50.146 Namespace Attribute Notices: Not Supported 00:29:50.146 Firmware Activation Notices: Not Supported 00:29:50.146 ANA Change Notices: Not Supported 00:29:50.146 PLE Aggregate Log Change Notices: Not Supported 00:29:50.146 LBA Status Info Alert Notices: Not Supported 00:29:50.146 EGE Aggregate Log Change Notices: Not Supported 00:29:50.146 Normal NVM Subsystem Shutdown event: Not Supported 00:29:50.146 Zone Descriptor Change Notices: Not Supported 00:29:50.146 Discovery Log Change Notices: Supported 00:29:50.146 Controller Attributes 00:29:50.146 128-bit Host Identifier: Not Supported 00:29:50.146 Non-Operational Permissive Mode: Not Supported 00:29:50.146 NVM Sets: Not Supported 00:29:50.146 Read Recovery Levels: Not Supported 00:29:50.146 Endurance Groups: Not Supported 00:29:50.146 Predictable Latency Mode: Not Supported 00:29:50.146 Traffic Based Keep ALive: Not Supported 00:29:50.146 Namespace Granularity: Not Supported 00:29:50.146 SQ Associations: Not Supported 00:29:50.146 UUID List: Not Supported 00:29:50.146 Multi-Domain Subsystem: Not Supported 00:29:50.146 Fixed Capacity Management: Not Supported 00:29:50.146 Variable Capacity Management: Not Supported 00:29:50.146 Delete Endurance Group: Not Supported 00:29:50.146 Delete NVM Set: Not Supported 00:29:50.146 Extended LBA Formats Supported: Not Supported 00:29:50.146 Flexible Data Placement Supported: Not Supported 00:29:50.146 00:29:50.146 Controller Memory Buffer Support 00:29:50.146 ================================ 00:29:50.146 Supported: No 00:29:50.146 00:29:50.146 Persistent Memory Region Support 00:29:50.146 ================================ 00:29:50.146 Supported: No 00:29:50.146 00:29:50.146 Admin Command Set Attributes 00:29:50.146 ============================ 00:29:50.146 Security Send/Receive: Not Supported 00:29:50.146 Format NVM: Not Supported 00:29:50.146 Firmware Activate/Download: Not Supported 00:29:50.146 Namespace Management: Not Supported 00:29:50.146 Device Self-Test: Not Supported 00:29:50.147 Directives: Not Supported 00:29:50.147 NVMe-MI: Not Supported 00:29:50.147 Virtualization Management: Not Supported 00:29:50.147 Doorbell Buffer Config: Not Supported 00:29:50.147 Get LBA Status Capability: Not Supported 
00:29:50.147 Command & Feature Lockdown Capability: Not Supported 00:29:50.147 Abort Command Limit: 1 00:29:50.147 Async Event Request Limit: 4 00:29:50.147 Number of Firmware Slots: N/A 00:29:50.147 Firmware Slot 1 Read-Only: N/A 00:29:50.147 Firmware Activation Without Reset: N/A 00:29:50.147 Multiple Update Detection Support: N/A 00:29:50.147 Firmware Update Granularity: No Information Provided 00:29:50.147 Per-Namespace SMART Log: No 00:29:50.147 Asymmetric Namespace Access Log Page: Not Supported 00:29:50.147 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:29:50.147 Command Effects Log Page: Not Supported 00:29:50.147 Get Log Page Extended Data: Supported 00:29:50.147 Telemetry Log Pages: Not Supported 00:29:50.147 Persistent Event Log Pages: Not Supported 00:29:50.147 Supported Log Pages Log Page: May Support 00:29:50.147 Commands Supported & Effects Log Page: Not Supported 00:29:50.147 Feature Identifiers & Effects Log Page:May Support 00:29:50.147 NVMe-MI Commands & Effects Log Page: May Support 00:29:50.147 Data Area 4 for Telemetry Log: Not Supported 00:29:50.147 Error Log Page Entries Supported: 128 00:29:50.147 Keep Alive: Not Supported 00:29:50.147 00:29:50.147 NVM Command Set Attributes 00:29:50.147 ========================== 00:29:50.147 Submission Queue Entry Size 00:29:50.147 Max: 1 00:29:50.147 Min: 1 00:29:50.147 Completion Queue Entry Size 00:29:50.147 Max: 1 00:29:50.147 Min: 1 00:29:50.147 Number of Namespaces: 0 00:29:50.147 Compare Command: Not Supported 00:29:50.147 Write Uncorrectable Command: Not Supported 00:29:50.147 Dataset Management Command: Not Supported 00:29:50.147 Write Zeroes Command: Not Supported 00:29:50.147 Set Features Save Field: Not Supported 00:29:50.147 Reservations: Not Supported 00:29:50.147 Timestamp: Not Supported 00:29:50.147 Copy: Not Supported 00:29:50.147 Volatile Write Cache: Not Present 00:29:50.147 Atomic Write Unit (Normal): 1 00:29:50.147 Atomic Write Unit (PFail): 1 00:29:50.147 Atomic Compare & Write Unit: 1 00:29:50.147 Fused Compare & Write: Supported 00:29:50.147 Scatter-Gather List 00:29:50.147 SGL Command Set: Supported 00:29:50.147 SGL Keyed: Supported 00:29:50.147 SGL Bit Bucket Descriptor: Not Supported 00:29:50.147 SGL Metadata Pointer: Not Supported 00:29:50.147 Oversized SGL: Not Supported 00:29:50.147 SGL Metadata Address: Not Supported 00:29:50.147 SGL Offset: Supported 00:29:50.147 Transport SGL Data Block: Not Supported 00:29:50.147 Replay Protected Memory Block: Not Supported 00:29:50.147 00:29:50.147 Firmware Slot Information 00:29:50.147 ========================= 00:29:50.147 Active slot: 0 00:29:50.147 00:29:50.147 00:29:50.147 Error Log 00:29:50.147 ========= 00:29:50.147 00:29:50.147 Active Namespaces 00:29:50.147 ================= 00:29:50.147 Discovery Log Page 00:29:50.147 ================== 00:29:50.147 Generation Counter: 2 00:29:50.147 Number of Records: 2 00:29:50.147 Record Format: 0 00:29:50.147 00:29:50.147 Discovery Log Entry 0 00:29:50.147 ---------------------- 00:29:50.147 Transport Type: 3 (TCP) 00:29:50.147 Address Family: 1 (IPv4) 00:29:50.147 Subsystem Type: 3 (Current Discovery Subsystem) 00:29:50.147 Entry Flags: 00:29:50.147 Duplicate Returned Information: 1 00:29:50.147 Explicit Persistent Connection Support for Discovery: 1 00:29:50.147 Transport Requirements: 00:29:50.147 Secure Channel: Not Required 00:29:50.147 Port ID: 0 (0x0000) 00:29:50.147 Controller ID: 65535 (0xffff) 00:29:50.147 Admin Max SQ Size: 128 00:29:50.147 Transport Service Identifier: 4420 00:29:50.147 NVM 
Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:29:50.147 Transport Address: 10.0.0.2 00:29:50.147 Discovery Log Entry 1 00:29:50.147 ---------------------- 00:29:50.147 Transport Type: 3 (TCP) 00:29:50.147 Address Family: 1 (IPv4) 00:29:50.147 Subsystem Type: 2 (NVM Subsystem) 00:29:50.147 Entry Flags: 00:29:50.147 Duplicate Returned Information: 0 00:29:50.147 Explicit Persistent Connection Support for Discovery: 0 00:29:50.147 Transport Requirements: 00:29:50.147 Secure Channel: Not Required 00:29:50.147 Port ID: 0 (0x0000) 00:29:50.147 Controller ID: 65535 (0xffff) 00:29:50.147 Admin Max SQ Size: 128 00:29:50.147 Transport Service Identifier: 4420 00:29:50.147 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:29:50.147 Transport Address: 10.0.0.2 [2024-11-17 09:29:55.075658] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:29:50.147 [2024-11-17 09:29:55.075691] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:29:50.147 [2024-11-17 09:29:55.075728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:50.147 [2024-11-17 09:29:55.075744] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x615000015700 00:29:50.147 [2024-11-17 09:29:55.075757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:50.147 [2024-11-17 09:29:55.075770] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x615000015700 00:29:50.147 [2024-11-17 09:29:55.075783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:50.147 [2024-11-17 09:29:55.075795] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:29:50.147 [2024-11-17 09:29:55.075808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:50.147 [2024-11-17 09:29:55.075833] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.147 [2024-11-17 09:29:55.075848] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.147 [2024-11-17 09:29:55.075859] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:29:50.147 [2024-11-17 09:29:55.075878] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.147 [2024-11-17 09:29:55.075929] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:29:50.147 [2024-11-17 09:29:55.076052] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.147 [2024-11-17 09:29:55.076074] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.147 [2024-11-17 09:29:55.076087] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.147 [2024-11-17 09:29:55.076099] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:29:50.147 [2024-11-17 09:29:55.076121] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.147 [2024-11-17 09:29:55.076135] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
enter 00:29:50.147 [2024-11-17 09:29:55.076147] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:29:50.147 [2024-11-17 09:29:55.076167] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.147 [2024-11-17 09:29:55.076208] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:29:50.147 [2024-11-17 09:29:55.076397] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.147 [2024-11-17 09:29:55.076421] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.147 [2024-11-17 09:29:55.076434] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.147 [2024-11-17 09:29:55.076445] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:29:50.147 [2024-11-17 09:29:55.076467] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:29:50.147 [2024-11-17 09:29:55.076486] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:29:50.147 [2024-11-17 09:29:55.076514] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.147 [2024-11-17 09:29:55.076530] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.147 [2024-11-17 09:29:55.076542] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:29:50.147 [2024-11-17 09:29:55.076562] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.147 [2024-11-17 09:29:55.076594] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:29:50.147 [2024-11-17 09:29:55.076715] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.147 [2024-11-17 09:29:55.076737] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.147 [2024-11-17 09:29:55.076763] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.147 [2024-11-17 09:29:55.076775] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:29:50.147 [2024-11-17 09:29:55.076804] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.147 [2024-11-17 09:29:55.076819] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.147 [2024-11-17 09:29:55.076831] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:29:50.147 [2024-11-17 09:29:55.076849] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.147 [2024-11-17 09:29:55.076880] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:29:50.147 [2024-11-17 09:29:55.076991] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.148 [2024-11-17 09:29:55.077012] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.148 [2024-11-17 09:29:55.077024] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.148 [2024-11-17 09:29:55.077035] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:29:50.148 [2024-11-17 
09:29:55.077062] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.148 [2024-11-17 09:29:55.077078] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.148 [2024-11-17 09:29:55.077089] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:29:50.148 [2024-11-17 09:29:55.077107] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.148 [2024-11-17 09:29:55.077138] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:29:50.148 [2024-11-17 09:29:55.077247] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.148 [2024-11-17 09:29:55.077269] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.148 [2024-11-17 09:29:55.077281] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.148 [2024-11-17 09:29:55.077292] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:29:50.148 [2024-11-17 09:29:55.077319] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.148 [2024-11-17 09:29:55.077335] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.148 [2024-11-17 09:29:55.077346] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:29:50.148 [2024-11-17 09:29:55.077364] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.148 [2024-11-17 09:29:55.077409] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:29:50.148 [2024-11-17 09:29:55.081399] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.148 [2024-11-17 09:29:55.081431] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.148 [2024-11-17 09:29:55.081445] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.148 [2024-11-17 09:29:55.081457] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:29:50.148 [2024-11-17 09:29:55.081484] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.148 [2024-11-17 09:29:55.081500] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.148 [2024-11-17 09:29:55.081510] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:29:50.148 [2024-11-17 09:29:55.081529] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.148 [2024-11-17 09:29:55.081560] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:29:50.148 [2024-11-17 09:29:55.081683] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.148 [2024-11-17 09:29:55.081703] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.148 [2024-11-17 09:29:55.081714] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.148 [2024-11-17 09:29:55.081725] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:29:50.148 [2024-11-17 09:29:55.081748] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 5 
milliseconds 00:29:50.148 00:29:50.148 09:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:29:50.409 [2024-11-17 09:29:55.188945] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:29:50.409 [2024-11-17 09:29:55.189039] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3071596 ] 00:29:50.409 [2024-11-17 09:29:55.267131] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:29:50.409 [2024-11-17 09:29:55.267253] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:29:50.409 [2024-11-17 09:29:55.267275] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:29:50.409 [2024-11-17 09:29:55.267310] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:29:50.409 [2024-11-17 09:29:55.267335] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:29:50.409 [2024-11-17 09:29:55.268090] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:29:50.409 [2024-11-17 09:29:55.268165] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x615000015700 0 00:29:50.409 [2024-11-17 09:29:55.282391] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:29:50.409 [2024-11-17 09:29:55.282428] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:29:50.409 [2024-11-17 09:29:55.282444] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:29:50.409 [2024-11-17 09:29:55.282455] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:29:50.409 [2024-11-17 09:29:55.282525] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.409 [2024-11-17 09:29:55.282545] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.409 [2024-11-17 09:29:55.282564] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:29:50.409 [2024-11-17 09:29:55.282594] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:29:50.409 [2024-11-17 09:29:55.282633] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:50.409 [2024-11-17 09:29:55.290395] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.409 [2024-11-17 09:29:55.290422] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.409 [2024-11-17 09:29:55.290435] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.409 [2024-11-17 09:29:55.290447] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:29:50.409 [2024-11-17 09:29:55.290478] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:29:50.409 [2024-11-17 09:29:55.290501] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:29:50.409 
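
The DEBUG trace above follows spdk_nvme_identify bringing up the admin queue for nqn.2016-06.io.spdk:cnode1 over TCP: socket connect, icreq/icresp exchange, FABRIC CONNECT, then the first FABRIC PROPERTY GET as the controller init state machine starts stepping through VS, CAP and CC. A minimal host-side sketch of that same connect step, using only public SPDK API calls, is shown next. The transport string mirrors the -r argument in the log; the application name is hypothetical and error handling is reduced to the essentials, so treat this as an illustrative sketch rather than the test's own code.

/* Illustrative sketch: connect to the NVMe-oF/TCP subsystem exercised above.
 * Only public SPDK API calls are used; the app name is a placeholder. */
#include "spdk/stdinc.h"
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
    struct spdk_env_opts env_opts;
    struct spdk_nvme_transport_id trid = {};
    struct spdk_nvme_ctrlr *ctrlr;
    const struct spdk_nvme_ctrlr_data *cdata;

    spdk_env_opts_init(&env_opts);
    env_opts.name = "identify_sketch";   /* hypothetical application name */
    if (spdk_env_init(&env_opts) < 0) {
        return 1;
    }

    /* Same target as the -r argument passed to spdk_nvme_identify above. */
    if (spdk_nvme_transport_id_parse(&trid,
            "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
            "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
        return 1;
    }

    /* spdk_nvme_connect() drives the whole init sequence the DEBUG lines
     * trace: icreq, FABRIC CONNECT, CC/CSTS handshake, IDENTIFY, AER setup,
     * keep-alive configuration, and so on, returning a ready controller. */
    ctrlr = spdk_nvme_connect(&trid, NULL, 0);
    if (ctrlr == NULL) {
        return 1;
    }

    cdata = spdk_nvme_ctrlr_get_data(ctrlr);
    printf("Connected, MN: %.40s SN: %.20s\n", cdata->mn, cdata->sn);

    spdk_nvme_detach(ctrlr);
    return 0;
}

Once spdk_nvme_connect() returns, the controller is in the "ready" state that the subsequent log entries report, and the identify data printed later in this log comes from the same spdk_nvme_ctrlr_get_data() style accessors.
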
[2024-11-17 09:29:55.290517] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:29:50.409 [2024-11-17 09:29:55.290547] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.409 [2024-11-17 09:29:55.290562] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.409 [2024-11-17 09:29:55.290581] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:29:50.409 [2024-11-17 09:29:55.290601] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.409 [2024-11-17 09:29:55.290635] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:50.409 [2024-11-17 09:29:55.290804] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.409 [2024-11-17 09:29:55.290826] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.409 [2024-11-17 09:29:55.290839] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.409 [2024-11-17 09:29:55.290852] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:29:50.409 [2024-11-17 09:29:55.290878] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:29:50.409 [2024-11-17 09:29:55.290904] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:29:50.409 [2024-11-17 09:29:55.290930] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.409 [2024-11-17 09:29:55.290960] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.409 [2024-11-17 09:29:55.290972] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:29:50.409 [2024-11-17 09:29:55.290995] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.409 [2024-11-17 09:29:55.291033] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:50.409 [2024-11-17 09:29:55.291199] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.409 [2024-11-17 09:29:55.291221] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.409 [2024-11-17 09:29:55.291237] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.409 [2024-11-17 09:29:55.291251] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:29:50.409 [2024-11-17 09:29:55.291270] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:29:50.409 [2024-11-17 09:29:55.291295] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:29:50.409 [2024-11-17 09:29:55.291316] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.409 [2024-11-17 09:29:55.291330] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.410 [2024-11-17 09:29:55.291357] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:29:50.410 [2024-11-17 09:29:55.291392] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.410 [2024-11-17 09:29:55.291425] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:50.410 [2024-11-17 09:29:55.291591] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.410 [2024-11-17 09:29:55.291613] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.410 [2024-11-17 09:29:55.291625] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.410 [2024-11-17 09:29:55.291636] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:29:50.410 [2024-11-17 09:29:55.291652] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:29:50.410 [2024-11-17 09:29:55.291680] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.410 [2024-11-17 09:29:55.291696] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.410 [2024-11-17 09:29:55.291713] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:29:50.410 [2024-11-17 09:29:55.291749] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.410 [2024-11-17 09:29:55.291782] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:50.410 [2024-11-17 09:29:55.291940] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.410 [2024-11-17 09:29:55.291963] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.410 [2024-11-17 09:29:55.291980] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.410 [2024-11-17 09:29:55.291993] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:29:50.410 [2024-11-17 09:29:55.292008] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:29:50.410 [2024-11-17 09:29:55.292027] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:29:50.410 [2024-11-17 09:29:55.292051] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:29:50.410 [2024-11-17 09:29:55.292175] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:29:50.410 [2024-11-17 09:29:55.292206] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:29:50.410 [2024-11-17 09:29:55.292229] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.410 [2024-11-17 09:29:55.292243] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.410 [2024-11-17 09:29:55.292271] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:29:50.410 [2024-11-17 09:29:55.292290] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.410 [2024-11-17 09:29:55.292321] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:50.410 [2024-11-17 09:29:55.292484] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.410 [2024-11-17 09:29:55.292507] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.410 [2024-11-17 09:29:55.292519] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.410 [2024-11-17 09:29:55.292530] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:29:50.410 [2024-11-17 09:29:55.292545] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:29:50.410 [2024-11-17 09:29:55.292583] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.410 [2024-11-17 09:29:55.292600] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.410 [2024-11-17 09:29:55.292617] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:29:50.410 [2024-11-17 09:29:55.292637] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.410 [2024-11-17 09:29:55.292669] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:50.410 [2024-11-17 09:29:55.292834] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.410 [2024-11-17 09:29:55.292856] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.410 [2024-11-17 09:29:55.292868] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.410 [2024-11-17 09:29:55.292880] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:29:50.410 [2024-11-17 09:29:55.292894] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:29:50.410 [2024-11-17 09:29:55.292908] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:29:50.410 [2024-11-17 09:29:55.292930] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:29:50.410 [2024-11-17 09:29:55.292972] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:29:50.410 [2024-11-17 09:29:55.293004] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.410 [2024-11-17 09:29:55.293019] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:29:50.410 [2024-11-17 09:29:55.293043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.410 [2024-11-17 09:29:55.293076] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:50.410 [2024-11-17 09:29:55.293318] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:50.410 [2024-11-17 09:29:55.293341] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:50.410 [2024-11-17 09:29:55.293354] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:50.410 [2024-11-17 
09:29:55.293383] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=0 00:29:50.410 [2024-11-17 09:29:55.293399] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:29:50.410 [2024-11-17 09:29:55.293413] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.410 [2024-11-17 09:29:55.293443] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:50.410 [2024-11-17 09:29:55.293460] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:50.410 [2024-11-17 09:29:55.336409] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.410 [2024-11-17 09:29:55.336438] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.410 [2024-11-17 09:29:55.336451] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.410 [2024-11-17 09:29:55.336463] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:29:50.410 [2024-11-17 09:29:55.336488] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:29:50.410 [2024-11-17 09:29:55.336505] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:29:50.410 [2024-11-17 09:29:55.336517] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:29:50.410 [2024-11-17 09:29:55.336529] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:29:50.410 [2024-11-17 09:29:55.336542] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:29:50.410 [2024-11-17 09:29:55.336564] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:29:50.410 [2024-11-17 09:29:55.336592] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:29:50.410 [2024-11-17 09:29:55.336622] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.410 [2024-11-17 09:29:55.336641] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.410 [2024-11-17 09:29:55.336661] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:29:50.410 [2024-11-17 09:29:55.336699] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:50.410 [2024-11-17 09:29:55.336757] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:50.410 [2024-11-17 09:29:55.336900] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.410 [2024-11-17 09:29:55.336922] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.410 [2024-11-17 09:29:55.336935] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.410 [2024-11-17 09:29:55.336947] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:29:50.410 [2024-11-17 09:29:55.336967] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.410 [2024-11-17 09:29:55.336982] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.410 [2024-11-17 09:29:55.336994] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:29:50.410 [2024-11-17 09:29:55.337023] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:50.410 [2024-11-17 09:29:55.337057] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.410 [2024-11-17 09:29:55.337071] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.410 [2024-11-17 09:29:55.337081] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x615000015700) 00:29:50.410 [2024-11-17 09:29:55.337102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:50.411 [2024-11-17 09:29:55.337119] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.411 [2024-11-17 09:29:55.337131] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.411 [2024-11-17 09:29:55.337142] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x615000015700) 00:29:50.411 [2024-11-17 09:29:55.337172] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:50.411 [2024-11-17 09:29:55.337188] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.411 [2024-11-17 09:29:55.337200] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.411 [2024-11-17 09:29:55.337210] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:29:50.411 [2024-11-17 09:29:55.337225] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:50.411 [2024-11-17 09:29:55.337240] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:29:50.411 [2024-11-17 09:29:55.337267] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:29:50.411 [2024-11-17 09:29:55.337287] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.411 [2024-11-17 09:29:55.337300] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:29:50.411 [2024-11-17 09:29:55.337319] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.411 [2024-11-17 09:29:55.337374] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:50.411 [2024-11-17 09:29:55.337394] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:29:50.411 [2024-11-17 09:29:55.337407] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:29:50.411 [2024-11-17 09:29:55.337434] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:29:50.411 [2024-11-17 09:29:55.337446] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:29:50.411 [2024-11-17 09:29:55.337605] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 5 00:29:50.411 [2024-11-17 09:29:55.337627] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.411 [2024-11-17 09:29:55.337640] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.411 [2024-11-17 09:29:55.337658] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:29:50.411 [2024-11-17 09:29:55.337674] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:29:50.411 [2024-11-17 09:29:55.337690] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:29:50.411 [2024-11-17 09:29:55.337728] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:29:50.411 [2024-11-17 09:29:55.337747] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:29:50.411 [2024-11-17 09:29:55.337771] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.411 [2024-11-17 09:29:55.337784] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.411 [2024-11-17 09:29:55.337796] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:29:50.411 [2024-11-17 09:29:55.337816] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:50.411 [2024-11-17 09:29:55.337852] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:29:50.411 [2024-11-17 09:29:55.338020] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.411 [2024-11-17 09:29:55.338041] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.411 [2024-11-17 09:29:55.338054] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.411 [2024-11-17 09:29:55.338065] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:29:50.411 [2024-11-17 09:29:55.338158] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:29:50.411 [2024-11-17 09:29:55.338200] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:29:50.411 [2024-11-17 09:29:55.338243] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.411 [2024-11-17 09:29:55.338259] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:29:50.411 [2024-11-17 09:29:55.338278] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.411 [2024-11-17 09:29:55.338331] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:29:50.411 [2024-11-17 09:29:55.338524] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:50.411 [2024-11-17 09:29:55.338548] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:50.411 [2024-11-17 09:29:55.338560] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
enter 00:29:50.411 [2024-11-17 09:29:55.338571] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=4 00:29:50.411 [2024-11-17 09:29:55.338584] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:29:50.411 [2024-11-17 09:29:55.338595] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.411 [2024-11-17 09:29:55.338619] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:50.411 [2024-11-17 09:29:55.338634] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:50.411 [2024-11-17 09:29:55.338671] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.411 [2024-11-17 09:29:55.338690] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.411 [2024-11-17 09:29:55.338702] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.411 [2024-11-17 09:29:55.338714] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:29:50.411 [2024-11-17 09:29:55.338767] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:29:50.411 [2024-11-17 09:29:55.338818] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:29:50.411 [2024-11-17 09:29:55.338873] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:29:50.411 [2024-11-17 09:29:55.338899] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.411 [2024-11-17 09:29:55.338914] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:29:50.411 [2024-11-17 09:29:55.338932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.411 [2024-11-17 09:29:55.338963] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:29:50.411 [2024-11-17 09:29:55.339177] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:50.411 [2024-11-17 09:29:55.339200] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:50.411 [2024-11-17 09:29:55.339213] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:50.411 [2024-11-17 09:29:55.339223] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=4 00:29:50.411 [2024-11-17 09:29:55.339241] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:29:50.411 [2024-11-17 09:29:55.339253] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.411 [2024-11-17 09:29:55.339271] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:50.411 [2024-11-17 09:29:55.339285] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:50.411 [2024-11-17 09:29:55.339320] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.411 [2024-11-17 09:29:55.339340] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.411 [2024-11-17 09:29:55.339362] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.411 
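
At this point the init sequence has reached the namespace phase: IDENTIFY for the active namespace list, then a per-namespace IDENTIFY, with the library reporting that Namespace 1 was added. A hedged sketch of how a host application walks that active-namespace list through SPDK's public accessors follows; ctrlr is assumed to be the handle from the connect sketch earlier and the helper name is illustrative, not part of the test suite.

/* Illustrative sketch: enumerate the active namespaces populated by the
 * IDENTIFY exchanges traced above. Assumes `ctrlr` came from
 * spdk_nvme_connect(); the function name is a placeholder. */
#include "spdk/stdinc.h"
#include "spdk/nvme.h"

static void
print_active_namespaces(struct spdk_nvme_ctrlr *ctrlr)
{
    uint32_t nsid;

    for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr);
         nsid != 0;
         nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
        struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);
        const struct spdk_nvme_ns_data *nsdata = spdk_nvme_ns_get_data(ns);

        /* nsze is the namespace size in logical blocks. */
        printf("nsid %u: %ju blocks of %u bytes\n",
               nsid,
               (uintmax_t)nsdata->nsze,
               spdk_nvme_ns_get_sector_size(ns));
    }
}

The remaining log entries in this run correspond to the tail of initialization (supported log pages, supported features, keep-alive and arbitration feature queries) before the full identify report for nqn.2016-06.io.spdk:cnode1 is printed below.
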
[2024-11-17 09:29:55.339384] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:29:50.411 [2024-11-17 09:29:55.339426] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:29:50.411 [2024-11-17 09:29:55.339458] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:29:50.412 [2024-11-17 09:29:55.339486] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.412 [2024-11-17 09:29:55.339506] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:29:50.412 [2024-11-17 09:29:55.339526] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.412 [2024-11-17 09:29:55.339559] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:29:50.412 [2024-11-17 09:29:55.339708] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:50.412 [2024-11-17 09:29:55.339732] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:50.412 [2024-11-17 09:29:55.339756] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:50.412 [2024-11-17 09:29:55.339767] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=4 00:29:50.412 [2024-11-17 09:29:55.339780] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:29:50.412 [2024-11-17 09:29:55.339791] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.412 [2024-11-17 09:29:55.339809] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:50.412 [2024-11-17 09:29:55.339823] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:50.412 [2024-11-17 09:29:55.339842] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.412 [2024-11-17 09:29:55.339876] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.412 [2024-11-17 09:29:55.339888] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.412 [2024-11-17 09:29:55.339899] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:29:50.412 [2024-11-17 09:29:55.339928] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:29:50.412 [2024-11-17 09:29:55.339953] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:29:50.412 [2024-11-17 09:29:55.339977] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:29:50.412 [2024-11-17 09:29:55.339994] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:29:50.412 [2024-11-17 09:29:55.340009] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:29:50.412 [2024-11-17 09:29:55.340039] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:29:50.412 [2024-11-17 09:29:55.340057] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:29:50.412 [2024-11-17 09:29:55.340070] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:29:50.412 [2024-11-17 09:29:55.340084] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:29:50.412 [2024-11-17 09:29:55.340131] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.412 [2024-11-17 09:29:55.340150] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:29:50.412 [2024-11-17 09:29:55.340169] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.412 [2024-11-17 09:29:55.340193] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.412 [2024-11-17 09:29:55.340207] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.412 [2024-11-17 09:29:55.340218] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:29:50.412 [2024-11-17 09:29:55.340234] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:29:50.412 [2024-11-17 09:29:55.340266] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:29:50.412 [2024-11-17 09:29:55.340300] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:29:50.412 [2024-11-17 09:29:55.344404] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.412 [2024-11-17 09:29:55.344430] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.412 [2024-11-17 09:29:55.344443] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.412 [2024-11-17 09:29:55.344455] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:29:50.412 [2024-11-17 09:29:55.344483] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.412 [2024-11-17 09:29:55.344500] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.412 [2024-11-17 09:29:55.344511] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.412 [2024-11-17 09:29:55.344522] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:29:50.412 [2024-11-17 09:29:55.344548] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.412 [2024-11-17 09:29:55.344564] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:29:50.412 [2024-11-17 09:29:55.344583] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.412 [2024-11-17 09:29:55.344615] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:29:50.412 [2024-11-17 09:29:55.344826] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.412 [2024-11-17 09:29:55.344849] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.412 [2024-11-17 09:29:55.344861] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.412 [2024-11-17 09:29:55.344872] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:29:50.412 [2024-11-17 09:29:55.344899] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.412 [2024-11-17 09:29:55.344930] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:29:50.412 [2024-11-17 09:29:55.344954] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.412 [2024-11-17 09:29:55.344986] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:29:50.412 [2024-11-17 09:29:55.345144] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.412 [2024-11-17 09:29:55.345170] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.412 [2024-11-17 09:29:55.345184] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.412 [2024-11-17 09:29:55.345195] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:29:50.412 [2024-11-17 09:29:55.345221] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.412 [2024-11-17 09:29:55.345237] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:29:50.412 [2024-11-17 09:29:55.345256] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.412 [2024-11-17 09:29:55.345287] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:29:50.412 [2024-11-17 09:29:55.345444] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.412 [2024-11-17 09:29:55.345465] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.412 [2024-11-17 09:29:55.345477] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.412 [2024-11-17 09:29:55.345489] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:29:50.412 [2024-11-17 09:29:55.345531] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.412 [2024-11-17 09:29:55.345550] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:29:50.412 [2024-11-17 09:29:55.345571] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.412 [2024-11-17 09:29:55.345593] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.412 [2024-11-17 09:29:55.345609] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:29:50.412 [2024-11-17 09:29:55.345627] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.412 [2024-11-17 09:29:55.345649] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.412 [2024-11-17 09:29:55.345669] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x615000015700) 00:29:50.412 [2024-11-17 09:29:55.345702] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.412 [2024-11-17 09:29:55.345732] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.412 [2024-11-17 09:29:55.345763] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x615000015700) 00:29:50.412 [2024-11-17 09:29:55.345786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.412 [2024-11-17 09:29:55.345818] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:29:50.412 [2024-11-17 09:29:55.345852] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:29:50.412 [2024-11-17 09:29:55.345865] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001ba00, cid 6, qid 0 00:29:50.412 [2024-11-17 09:29:55.345876] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:29:50.412 [2024-11-17 09:29:55.346169] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:50.413 [2024-11-17 09:29:55.346208] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:50.413 [2024-11-17 09:29:55.346222] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:50.413 [2024-11-17 09:29:55.346233] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=8192, cccid=5 00:29:50.413 [2024-11-17 09:29:55.346246] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b880) on tqpair(0x615000015700): expected_datao=0, payload_size=8192 00:29:50.413 [2024-11-17 09:29:55.346278] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.413 [2024-11-17 09:29:55.346313] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:50.413 [2024-11-17 09:29:55.346331] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:50.413 [2024-11-17 09:29:55.346374] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:50.413 [2024-11-17 09:29:55.346395] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:50.413 [2024-11-17 09:29:55.346407] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:50.413 [2024-11-17 09:29:55.346418] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=512, cccid=4 00:29:50.413 [2024-11-17 09:29:55.346431] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=512 00:29:50.413 [2024-11-17 09:29:55.346442] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.413 [2024-11-17 09:29:55.346469] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:50.413 [2024-11-17 09:29:55.346483] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:50.413 [2024-11-17 09:29:55.346498] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:50.413 [2024-11-17 09:29:55.346513] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:50.413 [2024-11-17 09:29:55.346525] 
nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:50.413 [2024-11-17 09:29:55.346536] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=512, cccid=6 00:29:50.413 [2024-11-17 09:29:55.346548] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001ba00) on tqpair(0x615000015700): expected_datao=0, payload_size=512 00:29:50.413 [2024-11-17 09:29:55.346559] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.413 [2024-11-17 09:29:55.346575] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:50.413 [2024-11-17 09:29:55.346588] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:50.413 [2024-11-17 09:29:55.346602] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:50.413 [2024-11-17 09:29:55.346618] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:50.413 [2024-11-17 09:29:55.346630] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:50.413 [2024-11-17 09:29:55.346641] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=7 00:29:50.413 [2024-11-17 09:29:55.346678] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001bb80) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:29:50.413 [2024-11-17 09:29:55.346689] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.413 [2024-11-17 09:29:55.346706] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:50.413 [2024-11-17 09:29:55.346718] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:50.413 [2024-11-17 09:29:55.346752] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.413 [2024-11-17 09:29:55.346768] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.413 [2024-11-17 09:29:55.346779] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.413 [2024-11-17 09:29:55.346796] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:29:50.413 [2024-11-17 09:29:55.346831] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.413 [2024-11-17 09:29:55.346849] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.413 [2024-11-17 09:29:55.346860] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.413 [2024-11-17 09:29:55.346871] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:29:50.413 [2024-11-17 09:29:55.346897] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.413 [2024-11-17 09:29:55.346914] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.413 [2024-11-17 09:29:55.346925] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.413 [2024-11-17 09:29:55.346936] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001ba00) on tqpair=0x615000015700 00:29:50.413 [2024-11-17 09:29:55.346962] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.413 [2024-11-17 09:29:55.346980] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.413 [2024-11-17 09:29:55.346991] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.413 [2024-11-17 09:29:55.347001] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x62600001bb80) on tqpair=0x615000015700 00:29:50.413 ===================================================== 00:29:50.413 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:50.413 ===================================================== 00:29:50.413 Controller Capabilities/Features 00:29:50.413 ================================ 00:29:50.413 Vendor ID: 8086 00:29:50.413 Subsystem Vendor ID: 8086 00:29:50.413 Serial Number: SPDK00000000000001 00:29:50.413 Model Number: SPDK bdev Controller 00:29:50.413 Firmware Version: 25.01 00:29:50.413 Recommended Arb Burst: 6 00:29:50.413 IEEE OUI Identifier: e4 d2 5c 00:29:50.413 Multi-path I/O 00:29:50.413 May have multiple subsystem ports: Yes 00:29:50.413 May have multiple controllers: Yes 00:29:50.413 Associated with SR-IOV VF: No 00:29:50.413 Max Data Transfer Size: 131072 00:29:50.413 Max Number of Namespaces: 32 00:29:50.413 Max Number of I/O Queues: 127 00:29:50.413 NVMe Specification Version (VS): 1.3 00:29:50.413 NVMe Specification Version (Identify): 1.3 00:29:50.413 Maximum Queue Entries: 128 00:29:50.413 Contiguous Queues Required: Yes 00:29:50.413 Arbitration Mechanisms Supported 00:29:50.413 Weighted Round Robin: Not Supported 00:29:50.413 Vendor Specific: Not Supported 00:29:50.413 Reset Timeout: 15000 ms 00:29:50.413 Doorbell Stride: 4 bytes 00:29:50.413 NVM Subsystem Reset: Not Supported 00:29:50.413 Command Sets Supported 00:29:50.413 NVM Command Set: Supported 00:29:50.413 Boot Partition: Not Supported 00:29:50.413 Memory Page Size Minimum: 4096 bytes 00:29:50.413 Memory Page Size Maximum: 4096 bytes 00:29:50.413 Persistent Memory Region: Not Supported 00:29:50.413 Optional Asynchronous Events Supported 00:29:50.413 Namespace Attribute Notices: Supported 00:29:50.413 Firmware Activation Notices: Not Supported 00:29:50.413 ANA Change Notices: Not Supported 00:29:50.413 PLE Aggregate Log Change Notices: Not Supported 00:29:50.413 LBA Status Info Alert Notices: Not Supported 00:29:50.413 EGE Aggregate Log Change Notices: Not Supported 00:29:50.413 Normal NVM Subsystem Shutdown event: Not Supported 00:29:50.413 Zone Descriptor Change Notices: Not Supported 00:29:50.413 Discovery Log Change Notices: Not Supported 00:29:50.413 Controller Attributes 00:29:50.413 128-bit Host Identifier: Supported 00:29:50.413 Non-Operational Permissive Mode: Not Supported 00:29:50.413 NVM Sets: Not Supported 00:29:50.413 Read Recovery Levels: Not Supported 00:29:50.413 Endurance Groups: Not Supported 00:29:50.413 Predictable Latency Mode: Not Supported 00:29:50.413 Traffic Based Keep ALive: Not Supported 00:29:50.413 Namespace Granularity: Not Supported 00:29:50.413 SQ Associations: Not Supported 00:29:50.413 UUID List: Not Supported 00:29:50.413 Multi-Domain Subsystem: Not Supported 00:29:50.413 Fixed Capacity Management: Not Supported 00:29:50.413 Variable Capacity Management: Not Supported 00:29:50.413 Delete Endurance Group: Not Supported 00:29:50.413 Delete NVM Set: Not Supported 00:29:50.413 Extended LBA Formats Supported: Not Supported 00:29:50.413 Flexible Data Placement Supported: Not Supported 00:29:50.413 00:29:50.413 Controller Memory Buffer Support 00:29:50.413 ================================ 00:29:50.413 Supported: No 00:29:50.413 00:29:50.413 Persistent Memory Region Support 00:29:50.413 ================================ 00:29:50.413 Supported: No 00:29:50.413 00:29:50.413 Admin Command Set Attributes 00:29:50.413 ============================ 00:29:50.413 Security Send/Receive: Not Supported 00:29:50.413 Format 
NVM: Not Supported 00:29:50.413 Firmware Activate/Download: Not Supported 00:29:50.414 Namespace Management: Not Supported 00:29:50.414 Device Self-Test: Not Supported 00:29:50.414 Directives: Not Supported 00:29:50.414 NVMe-MI: Not Supported 00:29:50.414 Virtualization Management: Not Supported 00:29:50.414 Doorbell Buffer Config: Not Supported 00:29:50.414 Get LBA Status Capability: Not Supported 00:29:50.414 Command & Feature Lockdown Capability: Not Supported 00:29:50.414 Abort Command Limit: 4 00:29:50.414 Async Event Request Limit: 4 00:29:50.414 Number of Firmware Slots: N/A 00:29:50.414 Firmware Slot 1 Read-Only: N/A 00:29:50.414 Firmware Activation Without Reset: N/A 00:29:50.414 Multiple Update Detection Support: N/A 00:29:50.414 Firmware Update Granularity: No Information Provided 00:29:50.414 Per-Namespace SMART Log: No 00:29:50.414 Asymmetric Namespace Access Log Page: Not Supported 00:29:50.414 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:29:50.414 Command Effects Log Page: Supported 00:29:50.414 Get Log Page Extended Data: Supported 00:29:50.414 Telemetry Log Pages: Not Supported 00:29:50.414 Persistent Event Log Pages: Not Supported 00:29:50.414 Supported Log Pages Log Page: May Support 00:29:50.414 Commands Supported & Effects Log Page: Not Supported 00:29:50.414 Feature Identifiers & Effects Log Page:May Support 00:29:50.414 NVMe-MI Commands & Effects Log Page: May Support 00:29:50.414 Data Area 4 for Telemetry Log: Not Supported 00:29:50.414 Error Log Page Entries Supported: 128 00:29:50.414 Keep Alive: Supported 00:29:50.414 Keep Alive Granularity: 10000 ms 00:29:50.414 00:29:50.414 NVM Command Set Attributes 00:29:50.414 ========================== 00:29:50.414 Submission Queue Entry Size 00:29:50.414 Max: 64 00:29:50.414 Min: 64 00:29:50.414 Completion Queue Entry Size 00:29:50.414 Max: 16 00:29:50.414 Min: 16 00:29:50.414 Number of Namespaces: 32 00:29:50.414 Compare Command: Supported 00:29:50.414 Write Uncorrectable Command: Not Supported 00:29:50.414 Dataset Management Command: Supported 00:29:50.414 Write Zeroes Command: Supported 00:29:50.414 Set Features Save Field: Not Supported 00:29:50.414 Reservations: Supported 00:29:50.414 Timestamp: Not Supported 00:29:50.414 Copy: Supported 00:29:50.414 Volatile Write Cache: Present 00:29:50.414 Atomic Write Unit (Normal): 1 00:29:50.414 Atomic Write Unit (PFail): 1 00:29:50.414 Atomic Compare & Write Unit: 1 00:29:50.414 Fused Compare & Write: Supported 00:29:50.414 Scatter-Gather List 00:29:50.414 SGL Command Set: Supported 00:29:50.414 SGL Keyed: Supported 00:29:50.414 SGL Bit Bucket Descriptor: Not Supported 00:29:50.414 SGL Metadata Pointer: Not Supported 00:29:50.414 Oversized SGL: Not Supported 00:29:50.414 SGL Metadata Address: Not Supported 00:29:50.414 SGL Offset: Supported 00:29:50.414 Transport SGL Data Block: Not Supported 00:29:50.414 Replay Protected Memory Block: Not Supported 00:29:50.414 00:29:50.414 Firmware Slot Information 00:29:50.414 ========================= 00:29:50.414 Active slot: 1 00:29:50.414 Slot 1 Firmware Revision: 25.01 00:29:50.414 00:29:50.414 00:29:50.414 Commands Supported and Effects 00:29:50.414 ============================== 00:29:50.414 Admin Commands 00:29:50.414 -------------- 00:29:50.414 Get Log Page (02h): Supported 00:29:50.414 Identify (06h): Supported 00:29:50.414 Abort (08h): Supported 00:29:50.414 Set Features (09h): Supported 00:29:50.414 Get Features (0Ah): Supported 00:29:50.414 Asynchronous Event Request (0Ch): Supported 00:29:50.414 Keep Alive (18h): Supported 
00:29:50.414 I/O Commands 00:29:50.414 ------------ 00:29:50.414 Flush (00h): Supported LBA-Change 00:29:50.414 Write (01h): Supported LBA-Change 00:29:50.414 Read (02h): Supported 00:29:50.414 Compare (05h): Supported 00:29:50.414 Write Zeroes (08h): Supported LBA-Change 00:29:50.414 Dataset Management (09h): Supported LBA-Change 00:29:50.414 Copy (19h): Supported LBA-Change 00:29:50.414 00:29:50.414 Error Log 00:29:50.414 ========= 00:29:50.414 00:29:50.414 Arbitration 00:29:50.414 =========== 00:29:50.414 Arbitration Burst: 1 00:29:50.414 00:29:50.414 Power Management 00:29:50.414 ================ 00:29:50.414 Number of Power States: 1 00:29:50.414 Current Power State: Power State #0 00:29:50.414 Power State #0: 00:29:50.414 Max Power: 0.00 W 00:29:50.414 Non-Operational State: Operational 00:29:50.414 Entry Latency: Not Reported 00:29:50.414 Exit Latency: Not Reported 00:29:50.414 Relative Read Throughput: 0 00:29:50.414 Relative Read Latency: 0 00:29:50.414 Relative Write Throughput: 0 00:29:50.414 Relative Write Latency: 0 00:29:50.414 Idle Power: Not Reported 00:29:50.414 Active Power: Not Reported 00:29:50.414 Non-Operational Permissive Mode: Not Supported 00:29:50.414 00:29:50.414 Health Information 00:29:50.414 ================== 00:29:50.414 Critical Warnings: 00:29:50.414 Available Spare Space: OK 00:29:50.414 Temperature: OK 00:29:50.414 Device Reliability: OK 00:29:50.414 Read Only: No 00:29:50.414 Volatile Memory Backup: OK 00:29:50.414 Current Temperature: 0 Kelvin (-273 Celsius) 00:29:50.414 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:29:50.414 Available Spare: 0% 00:29:50.414 Available Spare Threshold: 0% 00:29:50.414 Life Percentage Used:[2024-11-17 09:29:55.347195] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.414 [2024-11-17 09:29:55.347213] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x615000015700) 00:29:50.414 [2024-11-17 09:29:55.347233] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.414 [2024-11-17 09:29:55.347264] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:29:50.414 [2024-11-17 09:29:55.347481] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.414 [2024-11-17 09:29:55.347505] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.414 [2024-11-17 09:29:55.347518] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.414 [2024-11-17 09:29:55.347530] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x615000015700 00:29:50.414 [2024-11-17 09:29:55.347607] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:29:50.414 [2024-11-17 09:29:55.347637] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:29:50.414 [2024-11-17 09:29:55.347671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:50.414 [2024-11-17 09:29:55.347687] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x615000015700 00:29:50.414 [2024-11-17 09:29:55.347717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:50.415 [2024-11-17 
09:29:55.347730] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x615000015700 00:29:50.415 [2024-11-17 09:29:55.347744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:50.415 [2024-11-17 09:29:55.347756] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:29:50.415 [2024-11-17 09:29:55.347785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:50.415 [2024-11-17 09:29:55.347806] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.415 [2024-11-17 09:29:55.347820] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.415 [2024-11-17 09:29:55.347831] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:29:50.415 [2024-11-17 09:29:55.347849] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.415 [2024-11-17 09:29:55.347882] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:29:50.415 [2024-11-17 09:29:55.348040] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.415 [2024-11-17 09:29:55.348063] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.415 [2024-11-17 09:29:55.348076] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.415 [2024-11-17 09:29:55.348100] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:29:50.415 [2024-11-17 09:29:55.348127] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.415 [2024-11-17 09:29:55.348142] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.415 [2024-11-17 09:29:55.348154] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:29:50.415 [2024-11-17 09:29:55.348196] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.415 [2024-11-17 09:29:55.348237] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:29:50.415 [2024-11-17 09:29:55.352399] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.415 [2024-11-17 09:29:55.352424] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.415 [2024-11-17 09:29:55.352437] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.415 [2024-11-17 09:29:55.352448] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:29:50.415 [2024-11-17 09:29:55.352463] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:29:50.415 [2024-11-17 09:29:55.352476] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:29:50.415 [2024-11-17 09:29:55.352503] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:50.415 [2024-11-17 09:29:55.352519] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:50.415 [2024-11-17 09:29:55.352537] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 
00:29:50.415 [2024-11-17 09:29:55.352557] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.415 [2024-11-17 09:29:55.352590] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:29:50.415 [2024-11-17 09:29:55.352744] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:50.415 [2024-11-17 09:29:55.352782] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:50.415 [2024-11-17 09:29:55.352794] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:50.415 [2024-11-17 09:29:55.352806] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:29:50.415 [2024-11-17 09:29:55.352831] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 0 milliseconds 00:29:50.415 0% 00:29:50.415 Data Units Read: 0 00:29:50.415 Data Units Written: 0 00:29:50.415 Host Read Commands: 0 00:29:50.415 Host Write Commands: 0 00:29:50.415 Controller Busy Time: 0 minutes 00:29:50.415 Power Cycles: 0 00:29:50.415 Power On Hours: 0 hours 00:29:50.415 Unsafe Shutdowns: 0 00:29:50.415 Unrecoverable Media Errors: 0 00:29:50.415 Lifetime Error Log Entries: 0 00:29:50.415 Warning Temperature Time: 0 minutes 00:29:50.415 Critical Temperature Time: 0 minutes 00:29:50.415 00:29:50.415 Number of Queues 00:29:50.415 ================ 00:29:50.415 Number of I/O Submission Queues: 127 00:29:50.415 Number of I/O Completion Queues: 127 00:29:50.415 00:29:50.415 Active Namespaces 00:29:50.415 ================= 00:29:50.415 Namespace ID:1 00:29:50.415 Error Recovery Timeout: Unlimited 00:29:50.415 Command Set Identifier: NVM (00h) 00:29:50.415 Deallocate: Supported 00:29:50.415 Deallocated/Unwritten Error: Not Supported 00:29:50.415 Deallocated Read Value: Unknown 00:29:50.415 Deallocate in Write Zeroes: Not Supported 00:29:50.415 Deallocated Guard Field: 0xFFFF 00:29:50.415 Flush: Supported 00:29:50.415 Reservation: Supported 00:29:50.415 Namespace Sharing Capabilities: Multiple Controllers 00:29:50.415 Size (in LBAs): 131072 (0GiB) 00:29:50.415 Capacity (in LBAs): 131072 (0GiB) 00:29:50.415 Utilization (in LBAs): 131072 (0GiB) 00:29:50.415 NGUID: ABCDEF0123456789ABCDEF0123456789 00:29:50.415 EUI64: ABCDEF0123456789 00:29:50.415 UUID: a8a5f096-6e4e-4d6a-86ee-7e6fad94259d 00:29:50.415 Thin Provisioning: Not Supported 00:29:50.415 Per-NS Atomic Units: Yes 00:29:50.415 Atomic Boundary Size (Normal): 0 00:29:50.415 Atomic Boundary Size (PFail): 0 00:29:50.415 Atomic Boundary Offset: 0 00:29:50.415 Maximum Single Source Range Length: 65535 00:29:50.415 Maximum Copy Length: 65535 00:29:50.415 Maximum Source Range Count: 1 00:29:50.415 NGUID/EUI64 Never Reused: No 00:29:50.415 Namespace Write Protected: No 00:29:50.415 Number of LBA Formats: 1 00:29:50.415 Current LBA Format: LBA Format #00 00:29:50.415 LBA Format #00: Data Size: 512 Metadata Size: 0 00:29:50.415 00:29:50.415 09:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:29:50.674 09:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:50.674 09:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.674 09:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:50.674 09:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.674 09:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:29:50.674 09:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:29:50.674 09:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:50.674 09:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:29:50.674 09:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:50.674 09:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:29:50.674 09:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:50.674 09:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:50.674 rmmod nvme_tcp 00:29:50.674 rmmod nvme_fabrics 00:29:50.674 rmmod nvme_keyring 00:29:50.674 09:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:50.674 09:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:29:50.674 09:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:29:50.674 09:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 3071431 ']' 00:29:50.674 09:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 3071431 00:29:50.674 09:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 3071431 ']' 00:29:50.674 09:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 3071431 00:29:50.674 09:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:29:50.674 09:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:50.674 09:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3071431 00:29:50.674 09:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:50.674 09:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:50.674 09:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3071431' 00:29:50.674 killing process with pid 3071431 00:29:50.674 09:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 3071431 00:29:50.674 09:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 3071431 00:29:52.049 09:29:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:52.049 09:29:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:52.049 09:29:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:52.049 09:29:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:29:52.049 09:29:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:29:52.049 09:29:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:52.049 09:29:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:29:52.049 09:29:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:52.049 09:29:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:52.049 09:29:56 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:52.049 09:29:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:52.049 09:29:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:53.952 09:29:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:53.952 00:29:53.952 real 0m7.602s 00:29:53.952 user 0m11.460s 00:29:53.952 sys 0m2.159s 00:29:53.952 09:29:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:53.952 09:29:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:53.952 ************************************ 00:29:53.952 END TEST nvmf_identify 00:29:53.952 ************************************ 00:29:53.952 09:29:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:29:53.952 09:29:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:53.952 09:29:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:53.952 09:29:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:53.952 ************************************ 00:29:53.952 START TEST nvmf_perf 00:29:53.952 ************************************ 00:29:53.952 09:29:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:29:53.952 * Looking for test storage... 00:29:53.952 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:53.952 09:29:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:53.952 09:29:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:29:53.952 09:29:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:54.210 09:29:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:54.210 09:29:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:54.210 09:29:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:54.210 09:29:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:54.210 09:29:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:29:54.210 09:29:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:29:54.210 09:29:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:29:54.210 09:29:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:29:54.210 09:29:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:29:54.210 09:29:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:29:54.211 09:29:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:29:54.211 09:29:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:54.211 09:29:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:29:54.211 09:29:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:29:54.211 09:29:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:54.211 09:29:59 
nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:54.211 09:29:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:29:54.211 09:29:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:29:54.211 09:29:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:54.211 09:29:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:29:54.211 09:29:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:29:54.211 09:29:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:29:54.211 09:29:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:29:54.211 09:29:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:54.211 09:29:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:29:54.211 09:29:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:29:54.211 09:29:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:54.211 09:29:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:54.211 09:29:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:29:54.211 09:29:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:54.211 09:29:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:54.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:54.211 --rc genhtml_branch_coverage=1 00:29:54.211 --rc genhtml_function_coverage=1 00:29:54.211 --rc genhtml_legend=1 00:29:54.211 --rc geninfo_all_blocks=1 00:29:54.211 --rc geninfo_unexecuted_blocks=1 00:29:54.211 00:29:54.211 ' 00:29:54.211 09:29:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:54.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:54.211 --rc genhtml_branch_coverage=1 00:29:54.211 --rc genhtml_function_coverage=1 00:29:54.211 --rc genhtml_legend=1 00:29:54.211 --rc geninfo_all_blocks=1 00:29:54.211 --rc geninfo_unexecuted_blocks=1 00:29:54.211 00:29:54.211 ' 00:29:54.211 09:29:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:54.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:54.211 --rc genhtml_branch_coverage=1 00:29:54.211 --rc genhtml_function_coverage=1 00:29:54.211 --rc genhtml_legend=1 00:29:54.211 --rc geninfo_all_blocks=1 00:29:54.211 --rc geninfo_unexecuted_blocks=1 00:29:54.211 00:29:54.211 ' 00:29:54.211 09:29:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:54.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:54.211 --rc genhtml_branch_coverage=1 00:29:54.211 --rc genhtml_function_coverage=1 00:29:54.211 --rc genhtml_legend=1 00:29:54.211 --rc geninfo_all_blocks=1 00:29:54.211 --rc geninfo_unexecuted_blocks=1 00:29:54.211 00:29:54.211 ' 00:29:54.211 09:29:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:54.211 09:29:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:29:54.211 09:29:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:54.211 09:29:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 
-- # NVMF_PORT=4420 00:29:54.211 09:29:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:54.211 09:29:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:54.211 09:29:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:54.211 09:29:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:54.211 09:29:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:54.211 09:29:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:54.211 09:29:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:54.211 09:29:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:54.211 09:29:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:54.211 09:29:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:54.211 09:29:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:54.211 09:29:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:54.211 09:29:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:54.211 09:29:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:54.211 09:29:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:54.211 09:29:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:29:54.211 09:29:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:54.211 09:29:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:54.211 09:29:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:54.211 09:29:59 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:54.211 09:29:59 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:54.211 09:29:59 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:54.211 09:29:59 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:29:54.211 09:29:59 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:54.211 09:29:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:29:54.211 09:29:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:54.211 09:29:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:54.211 09:29:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:54.211 09:29:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:54.211 09:29:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:54.211 09:29:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:54.211 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:54.211 09:29:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:54.211 09:29:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:54.211 09:29:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:54.211 09:29:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:29:54.211 09:29:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:29:54.211 09:29:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:54.211 09:29:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:29:54.211 09:29:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:54.211 09:29:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:54.211 09:29:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:54.211 09:29:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:54.211 09:29:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:54.211 09:29:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:54.211 09:29:59 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:54.211 09:29:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:54.211 09:29:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:54.211 09:29:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:54.211 09:29:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:29:54.211 09:29:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:56.111 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:56.111 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:29:56.111 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:56.111 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:56.111 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:56.111 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:56.111 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:56.111 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:29:56.111 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:56.111 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:29:56.111 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:29:56.111 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:29:56.111 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:29:56.111 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:29:56.111 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:29:56.111 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:56.111 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:56.111 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:56.111 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:56.111 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:56.111 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:56.111 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:56.111 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:56.111 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:56.111 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:56.111 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:56.111 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:56.111 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:29:56.111 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:56.111 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:56.111 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:56.111 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:56.111 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:56.111 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:56.111 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:56.111 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:56.111 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:56.111 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:56.111 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:56.111 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:56.111 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:56.111 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:56.111 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:56.111 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:56.111 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:56.111 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:56.111 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:56.111 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:56.111 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:56.111 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:56.111 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:56.111 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:56.111 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:56.111 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:56.111 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:56.111 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:56.111 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:56.111 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:56.111 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:56.111 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:56.111 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:56.111 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:56.111 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:56.111 09:30:01 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:56.111 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:56.111 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:56.111 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:56.111 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:56.111 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:56.111 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:56.111 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:56.111 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:56.111 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:56.111 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:29:56.111 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:56.111 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:56.111 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:56.111 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:56.111 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:56.111 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:56.111 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:56.111 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:56.111 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:56.111 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:56.111 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:56.111 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:56.111 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:56.111 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:56.111 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:56.111 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:56.111 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:56.111 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:56.370 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:56.370 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:56.370 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:56.370 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:56.370 09:30:01 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:56.370 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:56.370 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:56.370 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:56.370 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:56.370 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:29:56.370 00:29:56.370 --- 10.0.0.2 ping statistics --- 00:29:56.370 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:56.370 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:29:56.370 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:56.370 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:56.370 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.054 ms 00:29:56.370 00:29:56.370 --- 10.0.0.1 ping statistics --- 00:29:56.370 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:56.370 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:29:56.370 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:56.370 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:29:56.370 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:56.370 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:56.370 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:56.370 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:56.370 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:56.370 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:56.370 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:56.370 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:29:56.370 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:56.370 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:56.370 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:56.370 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=3073784 00:29:56.370 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:56.370 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 3073784 00:29:56.370 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 3073784 ']' 00:29:56.371 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:56.371 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:56.371 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:29:56.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:56.371 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:56.371 09:30:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:56.371 [2024-11-17 09:30:01.321451] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:29:56.371 [2024-11-17 09:30:01.321597] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:56.629 [2024-11-17 09:30:01.481883] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:56.629 [2024-11-17 09:30:01.622076] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:56.629 [2024-11-17 09:30:01.622148] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:56.629 [2024-11-17 09:30:01.622174] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:56.629 [2024-11-17 09:30:01.622195] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:56.629 [2024-11-17 09:30:01.622213] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:56.629 [2024-11-17 09:30:01.624782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:56.629 [2024-11-17 09:30:01.624847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:56.629 [2024-11-17 09:30:01.624905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:56.629 [2024-11-17 09:30:01.624909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:57.562 09:30:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:57.562 09:30:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:29:57.562 09:30:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:57.562 09:30:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:57.562 09:30:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:57.562 09:30:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:57.562 09:30:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:57.562 09:30:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:30:00.844 09:30:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:30:00.844 09:30:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:30:00.844 09:30:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:30:00.844 09:30:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:01.409 09:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
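For reference, the target-side bring-up traced above reduces to roughly the following sequence (condensed from the xtrace output; the cvl_0_0/cvl_0_1 interface names, the 10.0.0.0/24 addresses and the nvmf_tgt flags are the values used on this host, while the relative ./scripts and ./build paths are shorthand for the full workspace paths):

  # move one E810 port into a private namespace and address both ends
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2      # the target address answers from inside the namespace

  # start the target inside the namespace, then create the Malloc bdev used alongside the local NVMe drive
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  ./scripts/rpc.py bdev_malloc_create 64 512

The subsystem itself is assembled next with rpc.py (nvmf_create_transport -t tcp -o, nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1, nvmf_subsystem_add_ns for Malloc0 and Nvme0n1, and nvmf_subsystem_add_listener on 10.0.0.2:4420), as traced below.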
00:30:01.409 09:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:30:01.409 09:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:30:01.410 09:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:30:01.410 09:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:30:01.410 [2024-11-17 09:30:06.387945] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:01.410 09:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:01.974 09:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:30:01.975 09:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:01.975 09:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:30:01.975 09:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:30:02.540 09:30:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:02.798 [2024-11-17 09:30:07.582470] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:02.798 09:30:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:03.056 09:30:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:30:03.056 09:30:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:30:03.056 09:30:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:30:03.056 09:30:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:30:04.429 Initializing NVMe Controllers 00:30:04.429 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:30:04.429 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:30:04.429 Initialization complete. Launching workers. 
00:30:04.429 ======================================================== 00:30:04.429 Latency(us) 00:30:04.429 Device Information : IOPS MiB/s Average min max 00:30:04.429 PCIE (0000:88:00.0) NSID 1 from core 0: 74183.87 289.78 430.81 47.73 6305.30 00:30:04.429 ======================================================== 00:30:04.429 Total : 74183.87 289.78 430.81 47.73 6305.30 00:30:04.429 00:30:04.429 09:30:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:05.802 Initializing NVMe Controllers 00:30:05.802 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:05.802 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:05.802 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:05.802 Initialization complete. Launching workers. 00:30:05.802 ======================================================== 00:30:05.802 Latency(us) 00:30:05.802 Device Information : IOPS MiB/s Average min max 00:30:05.802 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 81.87 0.32 12699.59 223.71 45042.44 00:30:05.802 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 35.94 0.14 28927.54 7945.49 47920.80 00:30:05.802 ======================================================== 00:30:05.802 Total : 117.82 0.46 17650.49 223.71 47920.80 00:30:05.802 00:30:06.060 09:30:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:07.434 Initializing NVMe Controllers 00:30:07.434 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:07.434 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:07.434 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:07.434 Initialization complete. Launching workers. 00:30:07.434 ======================================================== 00:30:07.434 Latency(us) 00:30:07.434 Device Information : IOPS MiB/s Average min max 00:30:07.434 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5609.94 21.91 5706.45 1141.73 12056.04 00:30:07.434 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3807.03 14.87 8429.51 4668.50 16021.68 00:30:07.434 ======================================================== 00:30:07.434 Total : 9416.96 36.79 6807.31 1141.73 16021.68 00:30:07.434 00:30:07.434 09:30:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:30:07.434 09:30:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:30:07.434 09:30:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:10.716 Initializing NVMe Controllers 00:30:10.716 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:10.716 Controller IO queue size 128, less than required. 00:30:10.716 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:30:10.716 Controller IO queue size 128, less than required. 00:30:10.716 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:10.716 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:10.716 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:10.716 Initialization complete. Launching workers. 00:30:10.716 ======================================================== 00:30:10.716 Latency(us) 00:30:10.716 Device Information : IOPS MiB/s Average min max 00:30:10.716 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1302.61 325.65 101912.21 50563.56 289182.46 00:30:10.716 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 534.11 133.53 258912.00 128676.35 550192.65 00:30:10.716 ======================================================== 00:30:10.716 Total : 1836.72 459.18 147567.05 50563.56 550192.65 00:30:10.716 00:30:10.716 09:30:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:30:10.975 No valid NVMe controllers or AIO or URING devices found 00:30:10.975 Initializing NVMe Controllers 00:30:10.975 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:10.975 Controller IO queue size 128, less than required. 00:30:10.975 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:10.975 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:30:10.975 Controller IO queue size 128, less than required. 00:30:10.975 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:10.975 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:30:10.975 WARNING: Some requested NVMe devices were skipped 00:30:10.975 09:30:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:30:14.262 Initializing NVMe Controllers 00:30:14.262 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:14.262 Controller IO queue size 128, less than required. 00:30:14.262 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:14.262 Controller IO queue size 128, less than required. 00:30:14.262 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:14.262 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:14.262 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:14.262 Initialization complete. Launching workers. 
00:30:14.262 00:30:14.262 ==================== 00:30:14.262 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:30:14.262 TCP transport: 00:30:14.262 polls: 6051 00:30:14.262 idle_polls: 3481 00:30:14.262 sock_completions: 2570 00:30:14.262 nvme_completions: 4955 00:30:14.262 submitted_requests: 7438 00:30:14.262 queued_requests: 1 00:30:14.262 00:30:14.262 ==================== 00:30:14.262 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:30:14.262 TCP transport: 00:30:14.262 polls: 6661 00:30:14.262 idle_polls: 4020 00:30:14.262 sock_completions: 2641 00:30:14.262 nvme_completions: 5045 00:30:14.262 submitted_requests: 7526 00:30:14.262 queued_requests: 1 00:30:14.262 ======================================================== 00:30:14.262 Latency(us) 00:30:14.262 Device Information : IOPS MiB/s Average min max 00:30:14.262 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1238.29 309.57 111527.82 65638.57 414704.62 00:30:14.262 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1260.79 315.20 102668.73 49225.70 328416.08 00:30:14.262 ======================================================== 00:30:14.262 Total : 2499.08 624.77 107058.40 49225.70 414704.62 00:30:14.262 00:30:14.262 09:30:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:30:14.262 09:30:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:14.544 09:30:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:30:14.544 09:30:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:88:00.0 ']' 00:30:14.544 09:30:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:30:17.866 09:30:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=3cf6ac57-baf6-41bc-b9f9-3d97282723a5 00:30:17.866 09:30:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 3cf6ac57-baf6-41bc-b9f9-3d97282723a5 00:30:17.866 09:30:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=3cf6ac57-baf6-41bc-b9f9-3d97282723a5 00:30:17.866 09:30:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:30:17.866 09:30:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:30:17.866 09:30:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:30:17.866 09:30:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:18.124 09:30:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:30:18.124 { 00:30:18.124 "uuid": "3cf6ac57-baf6-41bc-b9f9-3d97282723a5", 00:30:18.124 "name": "lvs_0", 00:30:18.124 "base_bdev": "Nvme0n1", 00:30:18.124 "total_data_clusters": 238234, 00:30:18.124 "free_clusters": 238234, 00:30:18.124 "block_size": 512, 00:30:18.124 "cluster_size": 4194304 00:30:18.124 } 00:30:18.124 ]' 00:30:18.124 09:30:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="3cf6ac57-baf6-41bc-b9f9-3d97282723a5") .free_clusters' 00:30:18.124 09:30:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=238234 00:30:18.124 09:30:22 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="3cf6ac57-baf6-41bc-b9f9-3d97282723a5") .cluster_size' 00:30:18.124 09:30:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:30:18.124 09:30:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=952936 00:30:18.124 09:30:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 952936 00:30:18.124 952936 00:30:18.124 09:30:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:30:18.124 09:30:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:30:18.124 09:30:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3cf6ac57-baf6-41bc-b9f9-3d97282723a5 lbd_0 20480 00:30:18.691 09:30:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=93c80f02-2d06-421b-a500-9853f35b6338 00:30:18.691 09:30:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 93c80f02-2d06-421b-a500-9853f35b6338 lvs_n_0 00:30:19.625 09:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=2bff0dca-176d-491d-abe8-316fdc83d766 00:30:19.625 09:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 2bff0dca-176d-491d-abe8-316fdc83d766 00:30:19.625 09:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=2bff0dca-176d-491d-abe8-316fdc83d766 00:30:19.625 09:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:30:19.625 09:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:30:19.625 09:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:30:19.625 09:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:19.883 09:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:30:19.883 { 00:30:19.883 "uuid": "3cf6ac57-baf6-41bc-b9f9-3d97282723a5", 00:30:19.883 "name": "lvs_0", 00:30:19.883 "base_bdev": "Nvme0n1", 00:30:19.883 "total_data_clusters": 238234, 00:30:19.883 "free_clusters": 233114, 00:30:19.883 "block_size": 512, 00:30:19.883 "cluster_size": 4194304 00:30:19.883 }, 00:30:19.883 { 00:30:19.883 "uuid": "2bff0dca-176d-491d-abe8-316fdc83d766", 00:30:19.883 "name": "lvs_n_0", 00:30:19.883 "base_bdev": "93c80f02-2d06-421b-a500-9853f35b6338", 00:30:19.883 "total_data_clusters": 5114, 00:30:19.883 "free_clusters": 5114, 00:30:19.883 "block_size": 512, 00:30:19.883 "cluster_size": 4194304 00:30:19.883 } 00:30:19.883 ]' 00:30:19.883 09:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="2bff0dca-176d-491d-abe8-316fdc83d766") .free_clusters' 00:30:19.883 09:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=5114 00:30:19.883 09:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="2bff0dca-176d-491d-abe8-316fdc83d766") .cluster_size' 00:30:20.141 09:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:30:20.141 09:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=20456 00:30:20.141 09:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@1378 -- # echo 20456 00:30:20.141 20456 00:30:20.142 09:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:30:20.142 09:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 2bff0dca-176d-491d-abe8-316fdc83d766 lbd_nest_0 20456 00:30:20.399 09:30:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=91d2b9b7-3d53-4c58-b2d7-82fc1c97aaf0 00:30:20.399 09:30:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:20.656 09:30:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:30:20.656 09:30:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 91d2b9b7-3d53-4c58-b2d7-82fc1c97aaf0 00:30:20.914 09:30:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:21.172 09:30:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:30:21.172 09:30:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:30:21.172 09:30:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:21.172 09:30:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:21.172 09:30:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:33.375 Initializing NVMe Controllers 00:30:33.375 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:33.375 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:33.375 Initialization complete. Launching workers. 00:30:33.375 ======================================================== 00:30:33.375 Latency(us) 00:30:33.375 Device Information : IOPS MiB/s Average min max 00:30:33.375 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 50.60 0.02 19767.13 242.37 46282.42 00:30:33.376 ======================================================== 00:30:33.376 Total : 50.60 0.02 19767.13 242.37 46282.42 00:30:33.376 00:30:33.376 09:30:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:33.376 09:30:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:43.343 Initializing NVMe Controllers 00:30:43.343 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:43.343 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:43.343 Initialization complete. Launching workers. 
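The 952936 and 20456 figures echoed by get_lvs_free_mb above are simply free_clusters multiplied by the cluster size, converted to MiB, after which perf.sh caps the volume it creates at 20480 MiB. Worked out with the values from the bdev_lvol_get_lvstores output above:

lvs_0   : 238234 free clusters x 4 MiB/cluster (cluster_size 4194304 bytes) = 952936 MiB free -> capped, lbd_0 created at 20480 MiB
lvs_n_0 :   5114 free clusters x 4 MiB/cluster                              =  20456 MiB free -> lbd_nest_0 created at 20456 MiB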
00:30:43.343 ======================================================== 00:30:43.343 Latency(us) 00:30:43.343 Device Information : IOPS MiB/s Average min max 00:30:43.343 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 58.90 7.36 16986.20 7010.24 55854.96 00:30:43.343 ======================================================== 00:30:43.343 Total : 58.90 7.36 16986.20 7010.24 55854.96 00:30:43.343 00:30:43.343 09:30:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:43.343 09:30:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:43.343 09:30:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:53.327 Initializing NVMe Controllers 00:30:53.327 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:53.327 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:53.327 Initialization complete. Launching workers. 00:30:53.327 ======================================================== 00:30:53.327 Latency(us) 00:30:53.327 Device Information : IOPS MiB/s Average min max 00:30:53.327 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4712.40 2.30 6793.80 481.37 12963.73 00:30:53.327 ======================================================== 00:30:53.327 Total : 4712.40 2.30 6793.80 481.37 12963.73 00:30:53.327 00:30:53.327 09:30:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:53.327 09:30:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:03.300 Initializing NVMe Controllers 00:31:03.300 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:03.300 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:03.300 Initialization complete. Launching workers. 00:31:03.300 ======================================================== 00:31:03.300 Latency(us) 00:31:03.300 Device Information : IOPS MiB/s Average min max 00:31:03.300 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3360.09 420.01 9529.00 918.72 23089.68 00:31:03.300 ======================================================== 00:31:03.300 Total : 3360.09 420.01 9529.00 918.72 23089.68 00:31:03.300 00:31:03.300 09:31:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:31:03.300 09:31:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:03.300 09:31:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:15.495 Initializing NVMe Controllers 00:31:15.495 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:15.495 Controller IO queue size 128, less than required. 00:31:15.495 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
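The perf runs in this part of the trace come from a small cross product defined at host/perf.sh@95-96: each queue depth in (1, 32, 128) is paired with each I/O size in (512, 131072) bytes against the same TCP target. In shell terms the sweep is roughly the following (a sketch only; the perf binary path is abbreviated relative to the SPDK build directory this job uses):

qd_depth=(1 32 128)
io_size=(512 131072)
for qd in "${qd_depth[@]}"; do
  for o in "${io_size[@]}"; do
    ./build/bin/spdk_nvme_perf -q "$qd" -o "$o" -w randrw -M 50 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
  done
done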
00:31:15.495 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:15.495 Initialization complete. Launching workers. 00:31:15.495 ======================================================== 00:31:15.495 Latency(us) 00:31:15.495 Device Information : IOPS MiB/s Average min max 00:31:15.495 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8462.74 4.13 15137.22 1826.56 33173.56 00:31:15.495 ======================================================== 00:31:15.495 Total : 8462.74 4.13 15137.22 1826.56 33173.56 00:31:15.495 00:31:15.495 09:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:15.495 09:31:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:25.464 Initializing NVMe Controllers 00:31:25.464 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:25.464 Controller IO queue size 128, less than required. 00:31:25.464 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:25.464 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:25.464 Initialization complete. Launching workers. 00:31:25.464 ======================================================== 00:31:25.464 Latency(us) 00:31:25.464 Device Information : IOPS MiB/s Average min max 00:31:25.464 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1139.80 142.47 112334.58 15683.15 262010.00 00:31:25.464 ======================================================== 00:31:25.464 Total : 1139.80 142.47 112334.58 15683.15 262010.00 00:31:25.464 00:31:25.464 09:31:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:25.464 09:31:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 91d2b9b7-3d53-4c58-b2d7-82fc1c97aaf0 00:31:25.721 09:31:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:31:25.979 09:31:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 93c80f02-2d06-421b-a500-9853f35b6338 00:31:26.236 09:31:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:31:26.494 09:31:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:31:26.494 09:31:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:31:26.494 09:31:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:26.494 09:31:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:31:26.494 09:31:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:26.494 09:31:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:31:26.494 09:31:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:26.494 09:31:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:26.494 rmmod nvme_tcp 
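Teardown then unwinds the stack in the reverse order it was built: the subsystem first, then the nested volume and its lvstore, then the base volume and its lvstore, before nvmftestfini unloads the nvme-tcp and nvme-fabrics modules. Condensed from host/perf.sh@104-108 as traced above (the UUIDs are the ones this run created):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
$rpc bdev_lvol_delete 91d2b9b7-3d53-4c58-b2d7-82fc1c97aaf0      # lbd_nest_0
$rpc bdev_lvol_delete_lvstore -l lvs_n_0
$rpc bdev_lvol_delete 93c80f02-2d06-421b-a500-9853f35b6338      # lbd_0
$rpc bdev_lvol_delete_lvstore -l lvs_0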
00:31:26.494 rmmod nvme_fabrics 00:31:26.494 rmmod nvme_keyring 00:31:26.494 09:31:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:26.494 09:31:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:31:26.494 09:31:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:31:26.494 09:31:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 3073784 ']' 00:31:26.494 09:31:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 3073784 00:31:26.494 09:31:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 3073784 ']' 00:31:26.494 09:31:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 3073784 00:31:26.494 09:31:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:31:26.494 09:31:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:26.494 09:31:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3073784 00:31:26.494 09:31:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:26.494 09:31:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:26.494 09:31:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3073784' 00:31:26.494 killing process with pid 3073784 00:31:26.494 09:31:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 3073784 00:31:26.494 09:31:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 3073784 00:31:29.020 09:31:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:29.020 09:31:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:29.020 09:31:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:29.020 09:31:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:31:29.020 09:31:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:31:29.020 09:31:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:29.020 09:31:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:31:29.020 09:31:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:29.020 09:31:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:29.020 09:31:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:29.020 09:31:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:29.020 09:31:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:31.007 09:31:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:31.007 00:31:31.007 real 1m37.006s 00:31:31.007 user 5m57.685s 00:31:31.007 sys 0m16.008s 00:31:31.007 09:31:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:31.007 09:31:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:31.007 ************************************ 00:31:31.007 END TEST nvmf_perf 00:31:31.007 ************************************ 00:31:31.007 09:31:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:31:31.007 09:31:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:31.007 09:31:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:31.007 09:31:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.007 ************************************ 00:31:31.007 START TEST nvmf_fio_host 00:31:31.007 ************************************ 00:31:31.007 09:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:31:31.007 * Looking for test storage... 00:31:31.007 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:31.007 09:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:31.007 09:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:31:31.007 09:31:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:31.266 09:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:31.267 09:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:31.267 09:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:31.267 09:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:31.267 09:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:31:31.267 09:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:31:31.267 09:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:31:31.267 09:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:31:31.267 09:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:31:31.267 09:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:31:31.267 09:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:31:31.267 09:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:31.267 09:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:31:31.267 09:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:31:31.267 09:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:31.267 09:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:31.267 09:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:31:31.267 09:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:31:31.267 09:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:31.267 09:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:31:31.267 09:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:31:31.267 09:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:31:31.267 09:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:31:31.267 09:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:31.267 09:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:31:31.267 09:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:31:31.267 09:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:31.267 09:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:31.267 09:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:31:31.267 09:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:31.267 09:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:31.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:31.267 --rc genhtml_branch_coverage=1 00:31:31.267 --rc genhtml_function_coverage=1 00:31:31.267 --rc genhtml_legend=1 00:31:31.267 --rc geninfo_all_blocks=1 00:31:31.267 --rc geninfo_unexecuted_blocks=1 00:31:31.267 00:31:31.267 ' 00:31:31.267 09:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:31.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:31.267 --rc genhtml_branch_coverage=1 00:31:31.267 --rc genhtml_function_coverage=1 00:31:31.267 --rc genhtml_legend=1 00:31:31.267 --rc geninfo_all_blocks=1 00:31:31.267 --rc geninfo_unexecuted_blocks=1 00:31:31.267 00:31:31.267 ' 00:31:31.267 09:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:31.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:31.267 --rc genhtml_branch_coverage=1 00:31:31.267 --rc genhtml_function_coverage=1 00:31:31.267 --rc genhtml_legend=1 00:31:31.267 --rc geninfo_all_blocks=1 00:31:31.267 --rc geninfo_unexecuted_blocks=1 00:31:31.267 00:31:31.267 ' 00:31:31.267 09:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:31.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:31.267 --rc genhtml_branch_coverage=1 00:31:31.267 --rc genhtml_function_coverage=1 00:31:31.267 --rc genhtml_legend=1 00:31:31.267 --rc geninfo_all_blocks=1 00:31:31.267 --rc geninfo_unexecuted_blocks=1 00:31:31.267 00:31:31.267 ' 00:31:31.267 09:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:31.267 09:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:31:31.267 09:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:31.267 09:31:36 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:31.267 09:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:31.267 09:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:31.267 09:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:31.267 09:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:31.267 09:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:31:31.267 09:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:31.267 09:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:31.267 09:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:31:31.267 09:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:31.267 09:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:31.267 09:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:31:31.267 09:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:31.267 09:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:31.267 09:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:31.267 09:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:31.267 09:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:31.267 09:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:31.267 09:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:31.267 09:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:31.267 09:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:31.267 09:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:31.267 09:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:31.267 09:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:31.267 09:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:31.267 09:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:31.267 09:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:31:31.267 09:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:31.267 09:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:31.267 09:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:31.268 09:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:31.268 09:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:31.268 09:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:31.268 09:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:31:31.268 09:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:31.268 09:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:31:31.268 09:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:31.268 09:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:31.268 09:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:31.268 09:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:31.268 09:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:31.268 09:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:31.268 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:31.268 09:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:31.268 09:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:31.268 09:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:31.268 09:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:31.268 
09:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:31:31.268 09:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:31.268 09:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:31.268 09:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:31.268 09:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:31.268 09:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:31.268 09:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:31.268 09:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:31.268 09:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:31.268 09:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:31.268 09:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:31.268 09:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:31:31.268 09:31:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.171 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:33.171 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:31:33.171 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:33.171 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:33.171 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:33.171 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:33.171 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:33.171 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:31:33.171 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:33.171 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:31:33.171 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:31:33.171 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:31:33.171 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:31:33.171 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:31:33.171 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:31:33.171 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:33.171 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:33.171 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:33.171 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:33.171 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:33.171 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:33.171 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:33.171 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:33.171 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:33.171 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:33.171 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:33.171 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:33.171 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:33.171 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:33.171 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:33.171 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:33.171 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:33.171 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:33.171 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:33.171 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:33.171 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:33.171 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:33.171 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:33.171 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:33.171 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:33.171 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:33.171 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:33.171 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:33.171 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:33.171 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:33.171 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:33.171 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:33.172 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:33.172 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:33.172 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:33.172 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:33.172 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:33.172 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:33.172 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:33.172 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:33.172 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:33.172 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:33.172 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:33.172 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:33.172 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:33.172 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:33.172 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:33.172 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:33.172 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:33.172 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:33.172 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:33.172 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:33.172 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:33.172 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:33.172 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:33.172 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:33.172 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:33.172 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:33.172 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:31:33.172 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:33.172 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:33.172 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:33.172 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:33.172 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:33.172 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:33.172 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:33.172 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:33.172 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:33.172 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:33.172 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:33.172 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:33.172 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:33.172 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:33.172 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:33.172 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:33.172 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:33.172 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:33.172 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:33.172 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:33.172 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:33.172 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:33.430 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:33.430 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:33.430 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:33.430 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:33.430 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:33.430 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.279 ms 00:31:33.430 00:31:33.430 --- 10.0.0.2 ping statistics --- 00:31:33.430 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:33.430 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:31:33.430 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:33.430 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:33.430 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:31:33.430 00:31:33.430 --- 10.0.0.1 ping statistics --- 00:31:33.430 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:33.430 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:31:33.430 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:33.430 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:31:33.430 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:33.430 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:33.430 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:33.430 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:33.430 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:33.431 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:33.431 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:33.431 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:31:33.431 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:31:33.431 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:33.431 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.431 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3087035 00:31:33.431 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:33.431 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:33.431 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 3087035 00:31:33.431 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 3087035 ']' 00:31:33.431 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:33.431 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:33.431 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:33.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:33.431 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:33.431 09:31:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.431 [2024-11-17 09:31:38.386479] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
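Because NET_TYPE=phy, the fio host test gives the target its own network namespace: port cvl_0_0 is moved into cvl_0_0_ns_spdk and addressed as 10.0.0.2, cvl_0_1 stays in the root namespace as 10.0.0.1, and nvmf_tgt is then started inside the namespace so initiator and target traffic use the two detected e810 ports rather than loopback. Condensed from the nvmf_tcp_init trace above (interface names, addresses and nvmf_tgt arguments are the ones this job detected):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # let the host side reach port 4420
ping -c 1 10.0.0.2                                              # reachability check before starting the target
# Target started in the background inside the namespace; the trace records its pid as nvmfpid.
ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &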
00:31:33.431 [2024-11-17 09:31:38.386618] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:33.689 [2024-11-17 09:31:38.532502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:33.689 [2024-11-17 09:31:38.655690] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:33.689 [2024-11-17 09:31:38.655767] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:33.689 [2024-11-17 09:31:38.655789] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:33.689 [2024-11-17 09:31:38.655808] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:33.689 [2024-11-17 09:31:38.655824] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:33.689 [2024-11-17 09:31:38.658499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:33.689 [2024-11-17 09:31:38.658540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:33.689 [2024-11-17 09:31:38.658582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:33.689 [2024-11-17 09:31:38.658587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:34.622 09:31:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:34.622 09:31:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:31:34.622 09:31:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:34.878 [2024-11-17 09:31:39.678456] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:34.878 09:31:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:31:34.878 09:31:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:34.878 09:31:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:34.878 09:31:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:31:35.136 Malloc1 00:31:35.136 09:31:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:35.393 09:31:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:31:35.956 09:31:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:35.956 [2024-11-17 09:31:40.927414] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:35.956 09:31:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:36.520 09:31:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:31:36.520 09:31:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:36.520 09:31:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:36.520 09:31:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:36.520 09:31:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:36.520 09:31:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:36.520 09:31:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:36.520 09:31:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:31:36.520 09:31:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:36.520 09:31:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:36.520 09:31:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:36.520 09:31:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:36.520 09:31:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:31:36.520 09:31:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:31:36.520 09:31:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:31:36.520 09:31:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:31:36.520 09:31:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:36.520 09:31:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:36.520 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:36.520 fio-3.35 00:31:36.520 Starting 1 thread 00:31:39.047 00:31:39.047 test: (groupid=0, jobs=1): err= 0: pid=3087517: Sun Nov 17 09:31:44 2024 00:31:39.047 read: IOPS=6362, BW=24.9MiB/s (26.1MB/s)(49.9MiB/2009msec) 00:31:39.047 slat (usec): min=3, max=148, avg= 3.82, stdev= 2.32 00:31:39.047 clat (usec): min=3824, max=18631, avg=10897.90, stdev=956.15 00:31:39.047 lat (usec): min=3870, max=18634, avg=10901.71, stdev=956.01 00:31:39.047 clat percentiles (usec): 00:31:39.047 | 1.00th=[ 8848], 5.00th=[ 9372], 10.00th=[ 9765], 20.00th=[10159], 00:31:39.047 | 30.00th=[10421], 40.00th=[10683], 50.00th=[10945], 60.00th=[11076], 00:31:39.047 | 70.00th=[11338], 
80.00th=[11600], 90.00th=[11994], 95.00th=[12387], 00:31:39.047 | 99.00th=[13042], 99.50th=[13435], 99.90th=[16188], 99.95th=[17433], 00:31:39.047 | 99.99th=[18482] 00:31:39.047 bw ( KiB/s): min=24296, max=25952, per=99.98%, avg=25444.00, stdev=772.94, samples=4 00:31:39.047 iops : min= 6074, max= 6488, avg=6361.00, stdev=193.24, samples=4 00:31:39.047 write: IOPS=6363, BW=24.9MiB/s (26.1MB/s)(49.9MiB/2009msec); 0 zone resets 00:31:39.047 slat (usec): min=3, max=113, avg= 3.92, stdev= 1.65 00:31:39.047 clat (usec): min=1629, max=17625, avg=9118.43, stdev=828.91 00:31:39.047 lat (usec): min=1649, max=17629, avg=9122.35, stdev=828.81 00:31:39.047 clat percentiles (usec): 00:31:39.047 | 1.00th=[ 7308], 5.00th=[ 7963], 10.00th=[ 8225], 20.00th=[ 8586], 00:31:39.047 | 30.00th=[ 8717], 40.00th=[ 8979], 50.00th=[ 9110], 60.00th=[ 9241], 00:31:39.047 | 70.00th=[ 9503], 80.00th=[ 9765], 90.00th=[10028], 95.00th=[10290], 00:31:39.047 | 99.00th=[10814], 99.50th=[11076], 99.90th=[16909], 99.95th=[17171], 00:31:39.047 | 99.99th=[17695] 00:31:39.047 bw ( KiB/s): min=25240, max=25552, per=99.92%, avg=25434.00, stdev=134.72, samples=4 00:31:39.047 iops : min= 6310, max= 6388, avg=6358.50, stdev=33.68, samples=4 00:31:39.047 lat (msec) : 2=0.01%, 4=0.08%, 10=52.39%, 20=47.52% 00:31:39.047 cpu : usr=69.32%, sys=29.13%, ctx=55, majf=0, minf=1547 00:31:39.047 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:31:39.047 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:39.047 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:39.047 issued rwts: total=12782,12784,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:39.047 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:39.047 00:31:39.047 Run status group 0 (all jobs): 00:31:39.047 READ: bw=24.9MiB/s (26.1MB/s), 24.9MiB/s-24.9MiB/s (26.1MB/s-26.1MB/s), io=49.9MiB (52.4MB), run=2009-2009msec 00:31:39.047 WRITE: bw=24.9MiB/s (26.1MB/s), 24.9MiB/s-24.9MiB/s (26.1MB/s-26.1MB/s), io=49.9MiB (52.4MB), run=2009-2009msec 00:31:39.612 ----------------------------------------------------- 00:31:39.612 Suppressions used: 00:31:39.612 count bytes template 00:31:39.612 1 57 /usr/src/fio/parse.c 00:31:39.612 1 8 libtcmalloc_minimal.so 00:31:39.612 ----------------------------------------------------- 00:31:39.612 00:31:39.612 09:31:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:39.612 09:31:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:39.612 09:31:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:39.612 09:31:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:39.612 09:31:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:39.612 09:31:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:39.612 09:31:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # shift 00:31:39.612 09:31:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:39.612 09:31:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:39.612 09:31:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:39.612 09:31:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:31:39.612 09:31:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:39.612 09:31:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:31:39.612 09:31:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:31:39.612 09:31:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:31:39.612 09:31:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:39.612 09:31:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:39.870 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:31:39.870 fio-3.35 00:31:39.870 Starting 1 thread 00:31:42.393 00:31:42.393 test: (groupid=0, jobs=1): err= 0: pid=3087968: Sun Nov 17 09:31:47 2024 00:31:42.393 read: IOPS=6074, BW=94.9MiB/s (99.5MB/s)(191MiB/2012msec) 00:31:42.393 slat (usec): min=3, max=128, avg= 5.33, stdev= 2.42 00:31:42.393 clat (usec): min=4608, max=58492, avg=12263.64, stdev=4782.99 00:31:42.393 lat (usec): min=4613, max=58498, avg=12268.96, stdev=4783.04 00:31:42.394 clat percentiles (usec): 00:31:42.394 | 1.00th=[ 6587], 5.00th=[ 8094], 10.00th=[ 8979], 20.00th=[ 9765], 00:31:42.394 | 30.00th=[10421], 40.00th=[11076], 50.00th=[11731], 60.00th=[12387], 00:31:42.394 | 70.00th=[13042], 80.00th=[13698], 90.00th=[15139], 95.00th=[16450], 00:31:42.394 | 99.00th=[46400], 99.50th=[52167], 99.90th=[57934], 99.95th=[57934], 00:31:42.394 | 99.99th=[58459] 00:31:42.394 bw ( KiB/s): min=32960, max=58912, per=49.04%, avg=47664.00, stdev=11736.30, samples=4 00:31:42.394 iops : min= 2060, max= 3682, avg=2979.00, stdev=733.52, samples=4 00:31:42.394 write: IOPS=3598, BW=56.2MiB/s (59.0MB/s)(97.9MiB/1742msec); 0 zone resets 00:31:42.394 slat (usec): min=33, max=272, avg=37.35, stdev= 7.49 00:31:42.394 clat (usec): min=6948, max=29799, avg=15912.42, stdev=2437.95 00:31:42.394 lat (usec): min=6982, max=29835, avg=15949.77, stdev=2438.17 00:31:42.394 clat percentiles (usec): 00:31:42.394 | 1.00th=[10421], 5.00th=[11994], 10.00th=[12780], 20.00th=[13960], 00:31:42.394 | 30.00th=[14746], 40.00th=[15401], 50.00th=[15926], 60.00th=[16450], 00:31:42.394 | 70.00th=[16909], 80.00th=[17695], 90.00th=[19006], 95.00th=[20055], 00:31:42.394 | 99.00th=[21627], 99.50th=[22152], 99.90th=[29230], 99.95th=[29492], 00:31:42.394 | 99.99th=[29754] 00:31:42.394 bw ( KiB/s): min=33984, max=63584, per=86.18%, avg=49616.00, stdev=13035.97, samples=4 00:31:42.394 iops : min= 2124, max= 3974, avg=3101.00, stdev=814.75, samples=4 00:31:42.394 lat (msec) : 10=15.04%, 
20=82.33%, 50=2.15%, 100=0.48% 00:31:42.394 cpu : usr=78.33%, sys=20.43%, ctx=36, majf=0, minf=2104 00:31:42.394 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:31:42.394 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:42.394 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:42.394 issued rwts: total=12222,6268,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:42.394 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:42.394 00:31:42.394 Run status group 0 (all jobs): 00:31:42.394 READ: bw=94.9MiB/s (99.5MB/s), 94.9MiB/s-94.9MiB/s (99.5MB/s-99.5MB/s), io=191MiB (200MB), run=2012-2012msec 00:31:42.394 WRITE: bw=56.2MiB/s (59.0MB/s), 56.2MiB/s-56.2MiB/s (59.0MB/s-59.0MB/s), io=97.9MiB (103MB), run=1742-1742msec 00:31:42.394 ----------------------------------------------------- 00:31:42.394 Suppressions used: 00:31:42.394 count bytes template 00:31:42.394 1 57 /usr/src/fio/parse.c 00:31:42.394 66 6336 /usr/src/fio/iolog.c 00:31:42.394 1 8 libtcmalloc_minimal.so 00:31:42.394 ----------------------------------------------------- 00:31:42.394 00:31:42.394 09:31:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:42.651 09:31:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:31:42.651 09:31:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:31:42.651 09:31:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:31:42.651 09:31:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # bdfs=() 00:31:42.651 09:31:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # local bdfs 00:31:42.651 09:31:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:42.651 09:31:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:42.651 09:31:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:31:42.908 09:31:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:31:42.908 09:31:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:31:42.908 09:31:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 -i 10.0.0.2 00:31:46.185 Nvme0n1 00:31:46.185 09:31:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:31:49.474 09:31:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=22c66e22-6add-4b1c-b49d-f0f6216f307b 00:31:49.474 09:31:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 22c66e22-6add-4b1c-b49d-f0f6216f307b 00:31:49.474 09:31:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=22c66e22-6add-4b1c-b49d-f0f6216f307b 00:31:49.474 09:31:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:31:49.474 09:31:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # 
local fc 00:31:49.474 09:31:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:31:49.474 09:31:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:49.474 09:31:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:31:49.474 { 00:31:49.474 "uuid": "22c66e22-6add-4b1c-b49d-f0f6216f307b", 00:31:49.474 "name": "lvs_0", 00:31:49.474 "base_bdev": "Nvme0n1", 00:31:49.474 "total_data_clusters": 930, 00:31:49.474 "free_clusters": 930, 00:31:49.474 "block_size": 512, 00:31:49.474 "cluster_size": 1073741824 00:31:49.474 } 00:31:49.474 ]' 00:31:49.474 09:31:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="22c66e22-6add-4b1c-b49d-f0f6216f307b") .free_clusters' 00:31:49.475 09:31:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=930 00:31:49.475 09:31:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="22c66e22-6add-4b1c-b49d-f0f6216f307b") .cluster_size' 00:31:49.475 09:31:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=1073741824 00:31:49.475 09:31:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=952320 00:31:49.475 09:31:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 952320 00:31:49.475 952320 00:31:49.475 09:31:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:31:49.732 c2fe9f36-60fd-42b2-b147-fa5bf0f6e145 00:31:49.732 09:31:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:31:49.990 09:31:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:31:50.247 09:31:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:50.505 09:31:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:50.505 09:31:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:50.505 09:31:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:50.505 09:31:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:50.505 09:31:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:50.505 09:31:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:50.505 09:31:55 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:31:50.505 09:31:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:50.505 09:31:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:50.505 09:31:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:50.505 09:31:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:31:50.505 09:31:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:50.505 09:31:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:31:50.505 09:31:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:31:50.505 09:31:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:31:50.505 09:31:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:50.505 09:31:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:50.762 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:50.762 fio-3.35 00:31:50.762 Starting 1 thread 00:31:53.286 00:31:53.286 test: (groupid=0, jobs=1): err= 0: pid=3089345: Sun Nov 17 09:31:58 2024 00:31:53.286 read: IOPS=4315, BW=16.9MiB/s (17.7MB/s)(33.9MiB/2008msec) 00:31:53.286 slat (usec): min=3, max=168, avg= 3.80, stdev= 2.65 00:31:53.286 clat (usec): min=1130, max=172943, avg=16038.94, stdev=13320.18 00:31:53.286 lat (usec): min=1134, max=173000, avg=16042.74, stdev=13320.57 00:31:53.286 clat percentiles (msec): 00:31:53.286 | 1.00th=[ 12], 5.00th=[ 13], 10.00th=[ 14], 20.00th=[ 14], 00:31:53.286 | 30.00th=[ 15], 40.00th=[ 15], 50.00th=[ 15], 60.00th=[ 16], 00:31:53.286 | 70.00th=[ 16], 80.00th=[ 17], 90.00th=[ 17], 95.00th=[ 18], 00:31:53.286 | 99.00th=[ 23], 99.50th=[ 157], 99.90th=[ 174], 99.95th=[ 174], 00:31:53.286 | 99.99th=[ 174] 00:31:53.286 bw ( KiB/s): min=12168, max=19024, per=99.43%, avg=17164.00, stdev=3338.00, samples=4 00:31:53.286 iops : min= 3042, max= 4756, avg=4291.00, stdev=834.50, samples=4 00:31:53.286 write: IOPS=4310, BW=16.8MiB/s (17.7MB/s)(33.8MiB/2008msec); 0 zone resets 00:31:53.286 slat (usec): min=3, max=127, avg= 3.93, stdev= 2.08 00:31:53.286 clat (usec): min=421, max=170367, avg=13428.09, stdev=12519.91 00:31:53.286 lat (usec): min=426, max=170374, avg=13432.03, stdev=12520.31 00:31:53.286 clat percentiles (msec): 00:31:53.286 | 1.00th=[ 10], 5.00th=[ 11], 10.00th=[ 12], 20.00th=[ 12], 00:31:53.286 | 30.00th=[ 12], 40.00th=[ 13], 50.00th=[ 13], 60.00th=[ 13], 00:31:53.286 | 70.00th=[ 14], 80.00th=[ 14], 90.00th=[ 14], 95.00th=[ 15], 00:31:53.286 | 99.00th=[ 16], 99.50th=[ 159], 99.90th=[ 171], 99.95th=[ 171], 00:31:53.286 | 99.99th=[ 171] 00:31:53.286 bw ( KiB/s): min=12904, max=18816, per=99.90%, avg=17226.00, stdev=2886.23, samples=4 00:31:53.286 iops : min= 3226, max= 4704, avg=4306.50, stdev=721.56, samples=4 00:31:53.286 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 
00:31:53.286 lat (msec) : 2=0.04%, 4=0.08%, 10=1.28%, 20=97.70%, 50=0.15% 00:31:53.286 lat (msec) : 250=0.74% 00:31:53.286 cpu : usr=68.71%, sys=29.95%, ctx=97, majf=0, minf=1544 00:31:53.286 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:31:53.286 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.286 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:53.286 issued rwts: total=8666,8656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:53.286 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:53.286 00:31:53.286 Run status group 0 (all jobs): 00:31:53.286 READ: bw=16.9MiB/s (17.7MB/s), 16.9MiB/s-16.9MiB/s (17.7MB/s-17.7MB/s), io=33.9MiB (35.5MB), run=2008-2008msec 00:31:53.286 WRITE: bw=16.8MiB/s (17.7MB/s), 16.8MiB/s-16.8MiB/s (17.7MB/s-17.7MB/s), io=33.8MiB (35.5MB), run=2008-2008msec 00:31:53.286 ----------------------------------------------------- 00:31:53.286 Suppressions used: 00:31:53.286 count bytes template 00:31:53.287 1 58 /usr/src/fio/parse.c 00:31:53.287 1 8 libtcmalloc_minimal.so 00:31:53.287 ----------------------------------------------------- 00:31:53.287 00:31:53.287 09:31:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:53.852 09:31:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:31:54.783 09:31:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=3e57332d-c403-4688-9c23-c77c39c13fcd 00:31:54.783 09:31:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 3e57332d-c403-4688-9c23-c77c39c13fcd 00:31:54.783 09:31:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=3e57332d-c403-4688-9c23-c77c39c13fcd 00:31:54.783 09:31:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:31:54.783 09:31:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:31:54.783 09:31:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:31:54.783 09:31:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:55.042 09:32:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:31:55.042 { 00:31:55.042 "uuid": "22c66e22-6add-4b1c-b49d-f0f6216f307b", 00:31:55.042 "name": "lvs_0", 00:31:55.042 "base_bdev": "Nvme0n1", 00:31:55.042 "total_data_clusters": 930, 00:31:55.042 "free_clusters": 0, 00:31:55.042 "block_size": 512, 00:31:55.042 "cluster_size": 1073741824 00:31:55.042 }, 00:31:55.042 { 00:31:55.042 "uuid": "3e57332d-c403-4688-9c23-c77c39c13fcd", 00:31:55.042 "name": "lvs_n_0", 00:31:55.042 "base_bdev": "c2fe9f36-60fd-42b2-b147-fa5bf0f6e145", 00:31:55.042 "total_data_clusters": 237847, 00:31:55.042 "free_clusters": 237847, 00:31:55.042 "block_size": 512, 00:31:55.042 "cluster_size": 4194304 00:31:55.042 } 00:31:55.042 ]' 00:31:55.042 09:32:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="3e57332d-c403-4688-9c23-c77c39c13fcd") .free_clusters' 00:31:55.300 09:32:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=237847 00:31:55.300 09:32:00 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="3e57332d-c403-4688-9c23-c77c39c13fcd") .cluster_size' 00:31:55.300 09:32:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=4194304 00:31:55.300 09:32:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=951388 00:31:55.300 09:32:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 951388 00:31:55.300 951388 00:31:55.300 09:32:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:31:56.233 b71984e4-3423-4fb5-b0a6-4290a8b13391 00:31:56.233 09:32:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:31:56.798 09:32:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:31:56.798 09:32:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:31:57.363 09:32:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:57.364 09:32:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:57.364 09:32:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:57.364 09:32:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:57.364 09:32:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:57.364 09:32:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:57.364 09:32:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:31:57.364 09:32:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:57.364 09:32:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:57.364 09:32:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:57.364 09:32:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:31:57.364 09:32:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:57.364 09:32:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:31:57.364 09:32:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:31:57.364 09:32:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1351 -- # break 00:31:57.364 09:32:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:57.364 09:32:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:57.364 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:57.364 fio-3.35 00:31:57.364 Starting 1 thread 00:31:59.892 00:31:59.892 test: (groupid=0, jobs=1): err= 0: pid=3090148: Sun Nov 17 09:32:04 2024 00:31:59.892 read: IOPS=4399, BW=17.2MiB/s (18.0MB/s)(34.6MiB/2012msec) 00:31:59.892 slat (usec): min=2, max=164, avg= 3.65, stdev= 2.47 00:31:59.892 clat (usec): min=5993, max=26354, avg=15853.46, stdev=1498.55 00:31:59.892 lat (usec): min=6006, max=26357, avg=15857.11, stdev=1498.41 00:31:59.892 clat percentiles (usec): 00:31:59.892 | 1.00th=[12649], 5.00th=[13566], 10.00th=[14091], 20.00th=[14615], 00:31:59.892 | 30.00th=[15139], 40.00th=[15533], 50.00th=[15926], 60.00th=[16188], 00:31:59.892 | 70.00th=[16712], 80.00th=[17171], 90.00th=[17695], 95.00th=[18220], 00:31:59.892 | 99.00th=[19268], 99.50th=[19792], 99.90th=[24511], 99.95th=[24773], 00:31:59.892 | 99.99th=[26346] 00:31:59.892 bw ( KiB/s): min=16520, max=17984, per=99.92%, avg=17584.00, stdev=710.12, samples=4 00:31:59.892 iops : min= 4130, max= 4496, avg=4396.00, stdev=177.53, samples=4 00:31:59.892 write: IOPS=4405, BW=17.2MiB/s (18.0MB/s)(34.6MiB/2012msec); 0 zone resets 00:31:59.892 slat (usec): min=3, max=104, avg= 3.75, stdev= 1.64 00:31:59.892 clat (usec): min=2856, max=24547, avg=13074.71, stdev=1276.77 00:31:59.892 lat (usec): min=2864, max=24551, avg=13078.46, stdev=1276.73 00:31:59.892 clat percentiles (usec): 00:31:59.892 | 1.00th=[10290], 5.00th=[11207], 10.00th=[11600], 20.00th=[12125], 00:31:59.892 | 30.00th=[12518], 40.00th=[12780], 50.00th=[13042], 60.00th=[13304], 00:31:59.892 | 70.00th=[13698], 80.00th=[13960], 90.00th=[14484], 95.00th=[14877], 00:31:59.892 | 99.00th=[15795], 99.50th=[16909], 99.90th=[22676], 99.95th=[22938], 00:31:59.892 | 99.99th=[24511] 00:31:59.892 bw ( KiB/s): min=17448, max=17728, per=99.84%, avg=17592.00, stdev=118.12, samples=4 00:31:59.892 iops : min= 4362, max= 4432, avg=4398.00, stdev=29.53, samples=4 00:31:59.892 lat (msec) : 4=0.02%, 10=0.45%, 20=99.24%, 50=0.30% 00:31:59.892 cpu : usr=68.22%, sys=30.53%, ctx=97, majf=0, minf=1544 00:31:59.892 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:31:59.892 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.892 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:59.892 issued rwts: total=8852,8863,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:59.892 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:59.892 00:31:59.892 Run status group 0 (all jobs): 00:31:59.892 READ: bw=17.2MiB/s (18.0MB/s), 17.2MiB/s-17.2MiB/s (18.0MB/s-18.0MB/s), io=34.6MiB (36.3MB), run=2012-2012msec 00:31:59.892 WRITE: bw=17.2MiB/s (18.0MB/s), 17.2MiB/s-17.2MiB/s (18.0MB/s-18.0MB/s), io=34.6MiB (36.3MB), run=2012-2012msec 00:32:00.149 ----------------------------------------------------- 00:32:00.149 Suppressions used: 00:32:00.150 count bytes template 00:32:00.150 1 58 /usr/src/fio/parse.c 
00:32:00.150 1 8 libtcmalloc_minimal.so 00:32:00.150 ----------------------------------------------------- 00:32:00.150 00:32:00.150 09:32:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:32:00.407 09:32:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:32:00.407 09:32:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:32:05.667 09:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:32:05.667 09:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:32:08.194 09:32:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:32:08.195 09:32:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:32:10.093 09:32:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:32:10.093 09:32:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:32:10.093 09:32:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:32:10.093 09:32:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:10.093 09:32:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:32:10.093 09:32:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:10.093 09:32:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:32:10.093 09:32:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:10.093 09:32:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:10.093 rmmod nvme_tcp 00:32:10.352 rmmod nvme_fabrics 00:32:10.352 rmmod nvme_keyring 00:32:10.352 09:32:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:10.352 09:32:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:32:10.352 09:32:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:32:10.352 09:32:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 3087035 ']' 00:32:10.352 09:32:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 3087035 00:32:10.352 09:32:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 3087035 ']' 00:32:10.352 09:32:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 3087035 00:32:10.352 09:32:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:32:10.352 09:32:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:10.352 09:32:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3087035 00:32:10.352 09:32:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:10.352 09:32:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 
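The trace above exercises the whole host/fio.sh malloc path end to end: create the TCP transport, back a subsystem with Malloc1, listen on 10.0.0.2:4420, drive it with the SPDK fio plugin (finding libasan with ldd and preloading it together with the plugin so the sanitizer is loaded ahead of the ASAN-instrumented ioengine), and finally delete the subsystem again. The following is only a condensed sketch of that sequence, not the script itself: it reuses the exact commands and names that appear in the trace, while the RPC and FIO_PLUGIN variables are shorthand introduced here for the full workspace paths shown above, and it assumes the nvmf_tgt application is already running as at the top of this log.

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
FIO_PLUGIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme

# transport + malloc-backed subsystem (host/fio.sh@29..@36 in the trace)
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc1
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# run fio through the external SPDK ioengine; locate the sanitizer library and
# preload it next to the plugin, exactly as the fio_plugin helper does above
ASAN_LIB=$(ldd "$FIO_PLUGIN" | grep libasan | awk '{print $3}')
LD_PRELOAD="$ASAN_LIB $FIO_PLUGIN" /usr/src/fio/fio \
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio \
  '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096

# drop the subsystem once the run is done (host/fio.sh@47)
$RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

The later lvol-backed runs follow the same pattern against cnode2 and cnode3, only the namespace changes. The 952320 handed to bdev_lvol_create is simply free_clusters x cluster_size from bdev_lvol_get_lvstores (930 clusters x 1 GiB = 952320 MiB), and the nested store's 951388 is 237847 clusters x 4 MiB.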
00:32:10.352 09:32:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3087035' 00:32:10.352 killing process with pid 3087035 00:32:10.352 09:32:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 3087035 00:32:10.352 09:32:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 3087035 00:32:11.839 09:32:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:11.839 09:32:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:11.839 09:32:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:11.839 09:32:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:32:11.839 09:32:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:32:11.839 09:32:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:11.839 09:32:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:32:11.839 09:32:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:11.839 09:32:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:11.839 09:32:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:11.839 09:32:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:11.839 09:32:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:13.740 09:32:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:13.740 00:32:13.740 real 0m42.586s 00:32:13.740 user 2m42.568s 00:32:13.740 sys 0m8.292s 00:32:13.740 09:32:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:13.740 09:32:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.740 ************************************ 00:32:13.740 END TEST nvmf_fio_host 00:32:13.740 ************************************ 00:32:13.740 09:32:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:32:13.741 09:32:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:13.741 09:32:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:13.741 09:32:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.741 ************************************ 00:32:13.741 START TEST nvmf_failover 00:32:13.741 ************************************ 00:32:13.741 09:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:32:13.741 * Looking for test storage... 
00:32:13.741 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:13.741 09:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:13.741 09:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:32:13.741 09:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:13.741 09:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:13.741 09:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:13.741 09:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:13.741 09:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:13.741 09:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:32:13.741 09:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:32:13.741 09:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:32:13.741 09:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:32:13.741 09:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:32:13.741 09:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:32:13.741 09:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:32:13.741 09:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:13.741 09:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:32:13.741 09:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:32:13.741 09:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:13.741 09:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:13.741 09:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:32:13.741 09:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:32:13.741 09:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:13.741 09:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:32:13.741 09:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:32:13.741 09:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:32:13.741 09:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:32:13.741 09:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:13.741 09:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:32:13.741 09:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:32:13.741 09:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:13.741 09:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:13.741 09:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:32:13.741 09:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:13.741 09:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:13.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:13.741 --rc genhtml_branch_coverage=1 00:32:13.741 --rc genhtml_function_coverage=1 00:32:13.741 --rc genhtml_legend=1 00:32:13.741 --rc geninfo_all_blocks=1 00:32:13.741 --rc geninfo_unexecuted_blocks=1 00:32:13.741 00:32:13.741 ' 00:32:13.741 09:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:13.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:13.741 --rc genhtml_branch_coverage=1 00:32:13.741 --rc genhtml_function_coverage=1 00:32:13.741 --rc genhtml_legend=1 00:32:13.741 --rc geninfo_all_blocks=1 00:32:13.741 --rc geninfo_unexecuted_blocks=1 00:32:13.741 00:32:13.741 ' 00:32:13.741 09:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:13.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:13.741 --rc genhtml_branch_coverage=1 00:32:13.741 --rc genhtml_function_coverage=1 00:32:13.741 --rc genhtml_legend=1 00:32:13.741 --rc geninfo_all_blocks=1 00:32:13.741 --rc geninfo_unexecuted_blocks=1 00:32:13.741 00:32:13.741 ' 00:32:13.741 09:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:13.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:13.741 --rc genhtml_branch_coverage=1 00:32:13.741 --rc genhtml_function_coverage=1 00:32:13.741 --rc genhtml_legend=1 00:32:13.741 --rc geninfo_all_blocks=1 00:32:13.741 --rc geninfo_unexecuted_blocks=1 00:32:13.741 00:32:13.741 ' 00:32:13.741 09:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:13.741 09:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:32:13.741 09:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:13.741 09:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:13.741 09:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:13.741 09:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:13.741 09:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:13.741 09:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:13.741 09:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:13.741 09:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:13.741 09:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:13.741 09:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:13.741 09:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:13.741 09:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:13.741 09:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:13.741 09:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:13.741 09:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:13.741 09:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:13.741 09:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:13.741 09:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:32:13.741 09:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:13.741 09:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:13.741 09:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:13.741 09:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:13.741 09:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:13.741 09:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:13.741 09:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:32:13.741 09:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:13.741 09:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:32:13.741 09:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:13.741 09:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:13.741 09:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:13.741 09:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:13.741 09:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:13.741 09:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:13.742 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:13.742 09:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:13.742 09:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:13.742 09:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:13.742 09:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:13.742 09:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:13.742 09:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
00:32:13.742 09:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:13.742 09:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:32:13.742 09:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:13.742 09:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:13.742 09:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:13.742 09:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:13.742 09:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:13.742 09:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:13.742 09:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:13.742 09:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:13.742 09:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:13.742 09:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:13.742 09:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:32:13.742 09:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:16.274 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:16.274 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:32:16.274 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:16.274 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:16.274 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:16.274 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:16.274 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:16.274 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:32:16.274 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:16.274 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:32:16.274 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:32:16.274 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:32:16.274 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:32:16.274 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:32:16.274 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:32:16.274 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:16.274 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:16.274 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:16.274 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:16.274 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:16.274 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:16.274 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:16.274 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:16.274 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:16.274 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:16.274 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:16.274 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:16.274 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:16.274 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:16.274 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:16.274 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:16.274 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:16.274 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:16.274 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:16.274 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:16.274 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:16.274 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:16.274 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:16.274 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:16.274 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:16.274 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:16.274 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:16.274 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:16.274 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:16.274 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:16.274 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:16.274 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:16.274 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:16.274 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:16.274 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:16.274 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:16.274 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:16.274 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci 
in "${pci_devs[@]}" 00:32:16.274 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:16.274 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:16.274 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:16.274 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:16.274 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:16.274 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:16.274 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:16.274 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:16.274 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:16.274 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:16.274 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:16.274 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:16.275 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:16.275 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:16.275 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:16.275 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:16.275 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:16.275 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:16.275 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:16.275 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:16.275 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:32:16.275 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:16.275 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:16.275 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:16.275 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:16.275 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:16.275 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:16.275 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:16.275 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:16.275 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:16.275 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:16.275 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:16.275 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:32:16.275 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:16.275 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:16.275 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:16.275 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:16.275 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:16.275 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:16.275 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:16.275 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:16.275 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:16.275 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:16.275 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:16.275 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:16.275 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:16.275 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:16.275 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:16.275 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms 00:32:16.275 00:32:16.275 --- 10.0.0.2 ping statistics --- 00:32:16.275 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:16.275 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:32:16.275 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:16.275 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:16.275 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:32:16.275 00:32:16.275 --- 10.0.0.1 ping statistics --- 00:32:16.275 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:16.275 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:32:16.275 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:16.275 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:32:16.275 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:16.275 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:16.275 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:16.275 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:16.275 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:16.275 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:16.275 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:16.275 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:32:16.275 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:16.275 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:16.275 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:16.275 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=3093722 00:32:16.275 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:32:16.275 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 3093722 00:32:16.275 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3093722 ']' 00:32:16.275 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:16.275 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:16.275 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:16.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:16.275 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:16.275 09:32:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:16.275 [2024-11-17 09:32:21.022669] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:32:16.275 [2024-11-17 09:32:21.022826] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:16.275 [2024-11-17 09:32:21.169328] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:16.533 [2024-11-17 09:32:21.306403] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:32:16.533 [2024-11-17 09:32:21.306469] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:16.533 [2024-11-17 09:32:21.306496] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:16.533 [2024-11-17 09:32:21.306521] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:16.533 [2024-11-17 09:32:21.306541] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:16.533 [2024-11-17 09:32:21.309156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:16.533 [2024-11-17 09:32:21.309248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:16.533 [2024-11-17 09:32:21.309253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:17.100 09:32:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:17.100 09:32:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:32:17.100 09:32:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:17.100 09:32:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:17.100 09:32:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:17.100 09:32:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:17.100 09:32:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:17.357 [2024-11-17 09:32:22.242614] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:17.357 09:32:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:32:17.615 Malloc0 00:32:17.615 09:32:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:17.872 09:32:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:18.438 09:32:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:18.438 [2024-11-17 09:32:23.414120] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:18.438 09:32:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:18.696 [2024-11-17 09:32:23.675015] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:18.696 09:32:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:32:18.955 [2024-11-17 09:32:23.935887] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 *** 00:32:18.955 09:32:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3094027 00:32:18.955 09:32:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:32:18.955 09:32:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:18.955 09:32:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3094027 /var/tmp/bdevperf.sock 00:32:18.955 09:32:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3094027 ']' 00:32:18.955 09:32:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:18.955 09:32:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:18.955 09:32:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:18.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:18.955 09:32:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:18.955 09:32:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:20.329 09:32:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:20.329 09:32:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:32:20.329 09:32:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:20.587 NVMe0n1 00:32:20.587 09:32:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:21.153 00:32:21.153 09:32:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3094293 00:32:21.153 09:32:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:21.153 09:32:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:32:22.086 09:32:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:22.344 [2024-11-17 09:32:27.252237] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:32:22.344 [2024-11-17 09:32:27.252326] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:32:22.344 [2024-11-17 09:32:27.252374] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 
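Before the qpair-state errors that follow, the trace has set up the whole failover scenario: the target (running inside the cvl_0_0_ns_spdk namespace created earlier) gets a TCP transport, a Malloc0-backed subsystem nqn.2016-06.io.spdk:cnode1 and listeners on 10.0.0.2 ports 4420/4421/4422; bdevperf is started in RPC mode; the subsystem is attached twice under the bdev name NVMe0 with the failover multipath policy; perform_tests is launched; and then the 4420 listener is removed to knock out the active path. A condensed sketch of that sequence, built only from the rpc.py calls that appear in the trace (the shortened paths and the port loop are my own condensation, not the script's literal layout):

  #!/usr/bin/env bash
  rpc="scripts/rpc.py"                            # target RPC socket /var/tmp/spdk.sock
  brpc="scripts/rpc.py -s /var/tmp/bdevperf.sock" # bdevperf RPC socket
  nqn=nqn.2016-06.io.spdk:cnode1

  # Target configuration (the nvmf_tgt process itself runs inside the
  # cvl_0_0_ns_spdk namespace; rpc.py reaches it through the host-side socket).
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem $nqn -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns $nqn Malloc0
  for port in 4420 4421 4422; do
      $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s $port
  done

  # Initiator side: bdevperf was started with
  #   bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f
  # and the same subsystem is attached over two ports with -x failover.
  $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $nqn -x failover
  $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n $nqn -x failover

  # With I/O running (perform_tests), drop the listener behind the active path
  # to force the failover that produces the tcp.c qpair-state errors below.
  $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4420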
00:32:22.344 [2024-11-17 09:32:27.252398] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:32:22.344 [2024-11-17 09:32:27.252417] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:32:22.344 [2024-11-17 09:32:27.252437] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:32:22.344 [2024-11-17 09:32:27.252455] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:32:22.344 [2024-11-17 09:32:27.252474] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:32:22.344 [2024-11-17 09:32:27.252492] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:32:22.344 [2024-11-17 09:32:27.252510] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:32:22.344 [2024-11-17 09:32:27.252529] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:32:22.344 [2024-11-17 09:32:27.252563] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:32:22.344 [2024-11-17 09:32:27.252581] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:32:22.344 [2024-11-17 09:32:27.252598] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:32:22.344 [2024-11-17 09:32:27.252616] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:32:22.344 [2024-11-17 09:32:27.252634] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:32:22.344 [2024-11-17 09:32:27.252651] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:32:22.344 [2024-11-17 09:32:27.252684] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:32:22.344 [2024-11-17 09:32:27.252702] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:32:22.344 [2024-11-17 09:32:27.252719] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:32:22.344 [2024-11-17 09:32:27.252736] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:32:22.344 [2024-11-17 09:32:27.252753] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:32:22.344 [2024-11-17 09:32:27.252771] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:32:22.344 [2024-11-17 09:32:27.252788] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 
00:32:22.345 [2024-11-17 09:32:27.252806] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:32:22.345 [2024-11-17 09:32:27.252824] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:32:22.345 [2024-11-17 09:32:27.252842] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:32:22.345 [2024-11-17 09:32:27.252859] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:32:22.345 [2024-11-17 09:32:27.252881] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:32:22.345 [2024-11-17 09:32:27.252899] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:32:22.345 [2024-11-17 09:32:27.252916] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:32:22.345 [2024-11-17 09:32:27.252934] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:32:22.345 [2024-11-17 09:32:27.252952] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:32:22.345 [2024-11-17 09:32:27.252968] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:32:22.345 [2024-11-17 09:32:27.252986] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:32:22.345 [2024-11-17 09:32:27.253004] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:32:22.345 [2024-11-17 09:32:27.253021] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:32:22.345 [2024-11-17 09:32:27.253039] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:32:22.345 [2024-11-17 09:32:27.253057] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:32:22.345 [2024-11-17 09:32:27.253074] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:32:22.345 [2024-11-17 09:32:27.253091] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:32:22.345 [2024-11-17 09:32:27.253109] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:32:22.345 [2024-11-17 09:32:27.253127] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:32:22.345 [2024-11-17 09:32:27.253145] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:32:22.345 [2024-11-17 09:32:27.253162] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 
00:32:22.345 [2024-11-17 09:32:27.253179] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:32:22.345 [2024-11-17 09:32:27.253196] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:32:22.345 [2024-11-17 09:32:27.253214] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:32:22.345 [2024-11-17 09:32:27.253232] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:32:22.345 [2024-11-17 09:32:27.253249] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:32:22.345 [2024-11-17 09:32:27.253266] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:32:22.345 [2024-11-17 09:32:27.253284] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:32:22.345 [2024-11-17 09:32:27.253302] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:32:22.345 [2024-11-17 09:32:27.253319] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:32:22.345 09:32:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:32:25.628 09:32:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:25.886 00:32:25.886 09:32:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:26.144 [2024-11-17 09:32:30.942273] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:32:26.144 [2024-11-17 09:32:30.942361] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:32:26.144 [2024-11-17 09:32:30.942395] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:32:26.145 [2024-11-17 09:32:30.942426] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:32:26.145 [2024-11-17 09:32:30.942445] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:32:26.145 [2024-11-17 09:32:30.942463] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:32:26.145 [2024-11-17 09:32:30.942482] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:32:26.145 [2024-11-17 09:32:30.942500] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:32:26.145 [2024-11-17 09:32:30.942518] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:32:26.145 [2024-11-17 09:32:30.942552] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:32:26.145 [2024-11-17 09:32:30.942571] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:32:26.145 [2024-11-17 09:32:30.942588] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:32:26.145 [2024-11-17 09:32:30.942607] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:32:26.145 [2024-11-17 09:32:30.942626] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:32:26.145 [2024-11-17 09:32:30.942644] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:32:26.145 [2024-11-17 09:32:30.942662] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:32:26.145 [2024-11-17 09:32:30.942680] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:32:26.145 [2024-11-17 09:32:30.942700] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:32:26.145 [2024-11-17 09:32:30.942718] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:32:26.145 [2024-11-17 09:32:30.942735] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:32:26.145 [2024-11-17 09:32:30.942754] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:32:26.145 [2024-11-17 09:32:30.942773] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:32:26.145 [2024-11-17 09:32:30.942790] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:32:26.145 [2024-11-17 09:32:30.942817] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:32:26.145 [2024-11-17 09:32:30.942836] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:32:26.145 [2024-11-17 09:32:30.942853] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:32:26.145 [2024-11-17 09:32:30.942870] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:32:26.145 [2024-11-17 09:32:30.942887] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:32:26.145 [2024-11-17 09:32:30.942905] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:32:26.145 [2024-11-17 09:32:30.942923] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:32:26.145 [2024-11-17 09:32:30.942940] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:32:26.145 [2024-11-17 09:32:30.942957] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:32:26.145 [2024-11-17 09:32:30.942974] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:32:26.145 [2024-11-17 09:32:30.942992] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:32:26.145 [2024-11-17 09:32:30.943010] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:32:26.145 [2024-11-17 09:32:30.943027] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:32:26.145 [2024-11-17 09:32:30.943044] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:32:26.145 [2024-11-17 09:32:30.943062] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:32:26.145 [2024-11-17 09:32:30.943079] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:32:26.145 [2024-11-17 09:32:30.943097] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:32:26.145 [2024-11-17 09:32:30.943114] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:32:26.145 [2024-11-17 09:32:30.943131] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:32:26.145 [2024-11-17 09:32:30.943148] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:32:26.145 09:32:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:32:29.428 09:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:29.428 [2024-11-17 09:32:34.214065] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:29.428 09:32:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:32:30.363 09:32:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:32:30.622 [2024-11-17 09:32:35.546637] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:32:30.622 [2024-11-17 09:32:35.546737] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:32:30.622 [2024-11-17 09:32:35.546762] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x618000004480 is same with the state(6) to be set 00:32:30.622 [2024-11-17 09:32:35.546782] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:32:30.622 [2024-11-17 09:32:35.546801] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:32:30.622 [2024-11-17 09:32:35.546820] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:32:30.622 [2024-11-17 09:32:35.546840] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:32:30.622 [2024-11-17 09:32:35.546858] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:32:30.622 [2024-11-17 09:32:35.546877] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:32:30.622 [2024-11-17 09:32:35.546895] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:32:30.622 [2024-11-17 09:32:35.546913] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:32:30.622 [2024-11-17 09:32:35.546932] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:32:30.622 [2024-11-17 09:32:35.546950] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:32:30.622 [2024-11-17 09:32:35.546969] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:32:30.622 [2024-11-17 09:32:35.546987] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:32:30.622 [2024-11-17 09:32:35.547005] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:32:30.622 [2024-11-17 09:32:35.547024] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:32:30.622 [2024-11-17 09:32:35.547042] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:32:30.622 [2024-11-17 09:32:35.547060] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:32:30.622 [2024-11-17 09:32:35.547093] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:32:30.622 [2024-11-17 09:32:35.547112] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:32:30.622 [2024-11-17 09:32:35.547130] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:32:30.622 [2024-11-17 09:32:35.547148] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:32:30.622 [2024-11-17 09:32:35.547166] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x618000004480 is same with the state(6) to be set 00:32:30.622 [2024-11-17 09:32:35.547183] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:32:30.622 [2024-11-17 09:32:35.547201] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:32:30.622 [2024-11-17 09:32:35.547219] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:32:30.622 [2024-11-17 09:32:35.547236] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:32:30.622 [2024-11-17 09:32:35.547259] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:32:30.622 [2024-11-17 09:32:35.547277] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:32:30.622 [2024-11-17 09:32:35.547294] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:32:30.622 [2024-11-17 09:32:35.547312] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:32:30.622 [2024-11-17 09:32:35.547330] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:32:30.622 [2024-11-17 09:32:35.547348] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:32:30.622 09:32:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 3094293 00:32:37.193 { 00:32:37.193 "results": [ 00:32:37.193 { 00:32:37.193 "job": "NVMe0n1", 00:32:37.193 "core_mask": "0x1", 00:32:37.193 "workload": "verify", 00:32:37.193 "status": "finished", 00:32:37.193 "verify_range": { 00:32:37.193 "start": 0, 00:32:37.193 "length": 16384 00:32:37.193 }, 00:32:37.193 "queue_depth": 128, 00:32:37.193 "io_size": 4096, 00:32:37.193 "runtime": 15.005814, 00:32:37.193 "iops": 6099.302576987826, 00:32:37.193 "mibps": 23.825400691358695, 00:32:37.193 "io_failed": 10357, 00:32:37.193 "io_timeout": 0, 00:32:37.193 "avg_latency_us": 18817.153401255047, 00:32:37.193 "min_latency_us": 743.3481481481482, 00:32:37.193 "max_latency_us": 22816.237037037037 00:32:37.193 } 00:32:37.193 ], 00:32:37.193 "core_count": 1 00:32:37.193 } 00:32:37.193 09:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 3094027 00:32:37.193 09:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3094027 ']' 00:32:37.193 09:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3094027 00:32:37.193 09:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:32:37.193 09:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:37.193 09:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3094027 00:32:37.193 09:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:37.193 09:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:37.193 09:32:41 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3094027' 00:32:37.193 killing process with pid 3094027 00:32:37.193 09:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3094027 00:32:37.193 09:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3094027 00:32:37.193 09:32:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:37.193 [2024-11-17 09:32:24.039918] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:32:37.193 [2024-11-17 09:32:24.040079] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3094027 ] 00:32:37.193 [2024-11-17 09:32:24.177950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:37.193 [2024-11-17 09:32:24.304703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:37.193 Running I/O for 15 seconds... 00:32:37.193 6125.00 IOPS, 23.93 MiB/s [2024-11-17T08:32:42.206Z] [2024-11-17 09:32:27.254606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:56576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.193 [2024-11-17 09:32:27.254669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.193 [2024-11-17 09:32:27.254737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:56584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.193 [2024-11-17 09:32:27.254761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.193 [2024-11-17 09:32:27.254785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:56592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.193 [2024-11-17 09:32:27.254806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.193 [2024-11-17 09:32:27.254830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:56600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.193 [2024-11-17 09:32:27.254851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.193 [2024-11-17 09:32:27.254873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:56608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.193 [2024-11-17 09:32:27.254894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.193 [2024-11-17 09:32:27.254918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:56688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.193 [2024-11-17 09:32:27.254940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.193 [2024-11-17 09:32:27.254963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:56696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.193 
[2024-11-17 09:32:27.254983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.194 [2024-11-17 09:32:27.255005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:56704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.194 [2024-11-17 09:32:27.255026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.194 [2024-11-17 09:32:27.255049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:56712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.194 [2024-11-17 09:32:27.255070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.194 [2024-11-17 09:32:27.255092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:56720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.194 [2024-11-17 09:32:27.255113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.194 [2024-11-17 09:32:27.255136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:56728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.194 [2024-11-17 09:32:27.255158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.194 [2024-11-17 09:32:27.255190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:56736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.194 [2024-11-17 09:32:27.255214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.194 [2024-11-17 09:32:27.255260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:56744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.194 [2024-11-17 09:32:27.255284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.194 [2024-11-17 09:32:27.255309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:56752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.194 [2024-11-17 09:32:27.255330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.194 [2024-11-17 09:32:27.255379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:56760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.194 [2024-11-17 09:32:27.255412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.194 [2024-11-17 09:32:27.255441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:56768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.194 [2024-11-17 09:32:27.255464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.194 [2024-11-17 09:32:27.255487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:56776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.194 [2024-11-17 09:32:27.255510] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.194 [2024-11-17 09:32:27.255534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:56784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.194 [2024-11-17 09:32:27.255556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.194 [2024-11-17 09:32:27.255579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:56792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.194 [2024-11-17 09:32:27.255600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.194 [2024-11-17 09:32:27.255624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:56800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.194 [2024-11-17 09:32:27.255645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.194 [2024-11-17 09:32:27.255683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:56808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.194 [2024-11-17 09:32:27.255705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.194 [2024-11-17 09:32:27.255728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:56816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.194 [2024-11-17 09:32:27.255748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.194 [2024-11-17 09:32:27.255771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:56824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.194 [2024-11-17 09:32:27.255791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.194 [2024-11-17 09:32:27.255814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:56832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.194 [2024-11-17 09:32:27.255840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.194 [2024-11-17 09:32:27.255863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:56840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.194 [2024-11-17 09:32:27.255886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.194 [2024-11-17 09:32:27.255908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:56848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.194 [2024-11-17 09:32:27.255929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.194 [2024-11-17 09:32:27.255951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:56856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.194 [2024-11-17 09:32:27.255971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.194 [2024-11-17 09:32:27.255993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:56864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.194 [2024-11-17 09:32:27.256014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.194 [2024-11-17 09:32:27.256037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:56872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.194 [2024-11-17 09:32:27.256057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.194 [2024-11-17 09:32:27.256079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:56880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.194 [2024-11-17 09:32:27.256099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.194 [2024-11-17 09:32:27.256122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:56888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.194 [2024-11-17 09:32:27.256143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.194 [2024-11-17 09:32:27.256165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:56896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.194 [2024-11-17 09:32:27.256185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.194 [2024-11-17 09:32:27.256207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:56904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.194 [2024-11-17 09:32:27.256228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.194 [2024-11-17 09:32:27.256250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:56912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.194 [2024-11-17 09:32:27.256271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.194 [2024-11-17 09:32:27.256293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:56920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.194 [2024-11-17 09:32:27.256314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.194 [2024-11-17 09:32:27.256336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:56928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.194 [2024-11-17 09:32:27.256381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.194 [2024-11-17 09:32:27.256413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:56936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.194 [2024-11-17 09:32:27.256436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:32:37.194 [2024-11-17 09:32:27.256459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:56944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.195 [2024-11-17 09:32:27.256480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.195 [2024-11-17 09:32:27.256504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:56952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.195 [2024-11-17 09:32:27.256525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.195 [2024-11-17 09:32:27.256548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:56960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.195 [2024-11-17 09:32:27.256569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.195 [2024-11-17 09:32:27.256592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:56968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.195 [2024-11-17 09:32:27.256613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.195 [2024-11-17 09:32:27.256636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:56976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.195 [2024-11-17 09:32:27.256672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.195 [2024-11-17 09:32:27.256696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:56984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.195 [2024-11-17 09:32:27.256716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.195 [2024-11-17 09:32:27.256738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:56992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.195 [2024-11-17 09:32:27.256759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.195 [2024-11-17 09:32:27.256781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:57000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.195 [2024-11-17 09:32:27.256801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.195 [2024-11-17 09:32:27.256823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:57008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.195 [2024-11-17 09:32:27.256843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.195 [2024-11-17 09:32:27.256866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:57016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.195 [2024-11-17 09:32:27.256885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.195 [2024-11-17 09:32:27.256908] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:57024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.195 [2024-11-17 09:32:27.256929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.195 [2024-11-17 09:32:27.256950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:57032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.195 [2024-11-17 09:32:27.256970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.195 [2024-11-17 09:32:27.256997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:57040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.195 [2024-11-17 09:32:27.257019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.195 [2024-11-17 09:32:27.257041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:57048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.195 [2024-11-17 09:32:27.257062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.195 [2024-11-17 09:32:27.257084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:57056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.195 [2024-11-17 09:32:27.257104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.195 [2024-11-17 09:32:27.257127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:56616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.195 [2024-11-17 09:32:27.257147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.195 [2024-11-17 09:32:27.257170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:56624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.195 [2024-11-17 09:32:27.257190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.195 [2024-11-17 09:32:27.257212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:56632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.195 [2024-11-17 09:32:27.257232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.195 [2024-11-17 09:32:27.257255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:56640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.195 [2024-11-17 09:32:27.257275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.195 [2024-11-17 09:32:27.257298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:56648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.195 [2024-11-17 09:32:27.257318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.195 [2024-11-17 09:32:27.257340] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:11 nsid:1 lba:56656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.195 [2024-11-17 09:32:27.257385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.195 [2024-11-17 09:32:27.257411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:56664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.195 [2024-11-17 09:32:27.257432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.195 [2024-11-17 09:32:27.257455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:57064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.195 [2024-11-17 09:32:27.257476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.195 [2024-11-17 09:32:27.257500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:57072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.195 [2024-11-17 09:32:27.257521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.195 [2024-11-17 09:32:27.257544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:57080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.195 [2024-11-17 09:32:27.257569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.195 [2024-11-17 09:32:27.257592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:57088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.195 [2024-11-17 09:32:27.257613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.195 [2024-11-17 09:32:27.257645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:57096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.195 [2024-11-17 09:32:27.257667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.195 [2024-11-17 09:32:27.257690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:57104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.195 [2024-11-17 09:32:27.257711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.196 [2024-11-17 09:32:27.257735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:57112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.196 [2024-11-17 09:32:27.257757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.196 [2024-11-17 09:32:27.257780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:57120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.196 [2024-11-17 09:32:27.257801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.196 [2024-11-17 09:32:27.257824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:57128 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.196 [2024-11-17 09:32:27.257844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.196 [2024-11-17 09:32:27.257868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:57136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.196 [2024-11-17 09:32:27.257889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.196 [2024-11-17 09:32:27.257912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:57144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.196 [2024-11-17 09:32:27.257933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.196 [2024-11-17 09:32:27.257970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:57152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.196 [2024-11-17 09:32:27.257992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.196 [2024-11-17 09:32:27.258016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:57160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.196 [2024-11-17 09:32:27.258036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.196 [2024-11-17 09:32:27.258058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:57168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.196 [2024-11-17 09:32:27.258079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.196 [2024-11-17 09:32:27.258102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:57176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.196 [2024-11-17 09:32:27.258121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.196 [2024-11-17 09:32:27.258148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:57184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.196 [2024-11-17 09:32:27.258170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.196 [2024-11-17 09:32:27.258192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:56672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.196 [2024-11-17 09:32:27.258212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.196 [2024-11-17 09:32:27.258249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:56680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.196 [2024-11-17 09:32:27.258271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.196 [2024-11-17 09:32:27.258293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:57192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.196 
[2024-11-17 09:32:27.258314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.196 [2024-11-17 09:32:27.258336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:57200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.196 [2024-11-17 09:32:27.258378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.196 [2024-11-17 09:32:27.258410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:57208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.196 [2024-11-17 09:32:27.258433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.196 [2024-11-17 09:32:27.258457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:57216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.196 [2024-11-17 09:32:27.258478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.196 [2024-11-17 09:32:27.258502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:57224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.196 [2024-11-17 09:32:27.258524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.196 [2024-11-17 09:32:27.258547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:57232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.196 [2024-11-17 09:32:27.258568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.196 [2024-11-17 09:32:27.258591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:57240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.196 [2024-11-17 09:32:27.258612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.196 [2024-11-17 09:32:27.258636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.196 [2024-11-17 09:32:27.258671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.196 [2024-11-17 09:32:27.258695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:57256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.196 [2024-11-17 09:32:27.258715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.196 [2024-11-17 09:32:27.258755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:57264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.196 [2024-11-17 09:32:27.258780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.196 [2024-11-17 09:32:27.258804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:57272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.196 [2024-11-17 09:32:27.258825] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.196 [2024-11-17 09:32:27.258848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:57280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.196 [2024-11-17 09:32:27.258869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.196 [2024-11-17 09:32:27.258893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:57288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.196 [2024-11-17 09:32:27.258914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.196 [2024-11-17 09:32:27.258936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:57296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.196 [2024-11-17 09:32:27.258958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.196 [2024-11-17 09:32:27.258981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:57304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.196 [2024-11-17 09:32:27.259002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.196 [2024-11-17 09:32:27.259025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:57312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.196 [2024-11-17 09:32:27.259047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.196 [2024-11-17 09:32:27.259070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:57320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.196 [2024-11-17 09:32:27.259090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.196 [2024-11-17 09:32:27.259113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:57328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.196 [2024-11-17 09:32:27.259135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.196 [2024-11-17 09:32:27.259160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:57336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.196 [2024-11-17 09:32:27.259182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.197 [2024-11-17 09:32:27.259204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:57344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.197 [2024-11-17 09:32:27.259226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.197 [2024-11-17 09:32:27.259256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:57352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.197 [2024-11-17 09:32:27.259277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.197 [2024-11-17 09:32:27.259301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:57360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.197 [2024-11-17 09:32:27.259322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.197 [2024-11-17 09:32:27.259345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:57368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.197 [2024-11-17 09:32:27.259395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.197 [2024-11-17 09:32:27.259431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:57376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.197 [2024-11-17 09:32:27.259455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.197 [2024-11-17 09:32:27.259479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:57384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.197 [2024-11-17 09:32:27.259501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.197 [2024-11-17 09:32:27.259525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:57392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.197 [2024-11-17 09:32:27.259548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.197 [2024-11-17 09:32:27.259572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:57400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.197 [2024-11-17 09:32:27.259594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.197 [2024-11-17 09:32:27.259617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:57408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.197 [2024-11-17 09:32:27.259639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.197 [2024-11-17 09:32:27.259663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:57416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.197 [2024-11-17 09:32:27.259700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.197 [2024-11-17 09:32:27.259725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:57424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.197 [2024-11-17 09:32:27.259746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.197 [2024-11-17 09:32:27.259769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:57432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.197 [2024-11-17 09:32:27.259791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:32:37.197 [2024-11-17 09:32:27.259814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:57440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.197 [2024-11-17 09:32:27.259835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.197 [2024-11-17 09:32:27.259881] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:37.197 [2024-11-17 09:32:27.259907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57448 len:8 PRP1 0x0 PRP2 0x0 00:32:37.197 [2024-11-17 09:32:27.259929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.197 [2024-11-17 09:32:27.259956] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:37.197 [2024-11-17 09:32:27.259975] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:37.197 [2024-11-17 09:32:27.259995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57456 len:8 PRP1 0x0 PRP2 0x0 00:32:37.197 [2024-11-17 09:32:27.260016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.197 [2024-11-17 09:32:27.260048] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:37.197 [2024-11-17 09:32:27.260072] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:37.197 [2024-11-17 09:32:27.260090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57464 len:8 PRP1 0x0 PRP2 0x0 00:32:37.197 [2024-11-17 09:32:27.260109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.197 [2024-11-17 09:32:27.260129] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:37.197 [2024-11-17 09:32:27.260146] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:37.197 [2024-11-17 09:32:27.260163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57472 len:8 PRP1 0x0 PRP2 0x0 00:32:37.197 [2024-11-17 09:32:27.260182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.197 [2024-11-17 09:32:27.260201] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:37.197 [2024-11-17 09:32:27.260218] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:37.197 [2024-11-17 09:32:27.260236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57480 len:8 PRP1 0x0 PRP2 0x0 00:32:37.197 [2024-11-17 09:32:27.260254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.197 [2024-11-17 09:32:27.260273] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:37.197 [2024-11-17 09:32:27.260295] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:37.197 [2024-11-17 09:32:27.260313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57488 len:8 PRP1 
0x0 PRP2 0x0 00:32:37.197 [2024-11-17 09:32:27.260332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.197 [2024-11-17 09:32:27.260352] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:37.197 [2024-11-17 09:32:27.260393] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:37.197 [2024-11-17 09:32:27.260415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57496 len:8 PRP1 0x0 PRP2 0x0 00:32:37.197 [2024-11-17 09:32:27.260435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.197 [2024-11-17 09:32:27.260456] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:37.197 [2024-11-17 09:32:27.260474] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:37.197 [2024-11-17 09:32:27.260492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57504 len:8 PRP1 0x0 PRP2 0x0 00:32:37.197 [2024-11-17 09:32:27.260511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.197 [2024-11-17 09:32:27.260531] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:37.197 [2024-11-17 09:32:27.260548] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:37.197 [2024-11-17 09:32:27.260566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57512 len:8 PRP1 0x0 PRP2 0x0 00:32:37.197 [2024-11-17 09:32:27.260585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.198 [2024-11-17 09:32:27.260605] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:37.198 [2024-11-17 09:32:27.260622] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:37.198 [2024-11-17 09:32:27.260639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57520 len:8 PRP1 0x0 PRP2 0x0 00:32:37.198 [2024-11-17 09:32:27.260663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.198 [2024-11-17 09:32:27.260700] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:37.198 [2024-11-17 09:32:27.260723] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:37.198 [2024-11-17 09:32:27.260741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57528 len:8 PRP1 0x0 PRP2 0x0 00:32:37.198 [2024-11-17 09:32:27.260760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.198 [2024-11-17 09:32:27.260780] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:37.198 [2024-11-17 09:32:27.260796] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:37.198 [2024-11-17 09:32:27.260814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57536 len:8 PRP1 0x0 PRP2 0x0 00:32:37.198 [2024-11-17 09:32:27.260832] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.198 [2024-11-17 09:32:27.260852] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:37.198 [2024-11-17 09:32:27.260868] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:37.198 [2024-11-17 09:32:27.260885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57544 len:8 PRP1 0x0 PRP2 0x0 00:32:37.198 [2024-11-17 09:32:27.260905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.198 [2024-11-17 09:32:27.260924] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:37.198 [2024-11-17 09:32:27.260946] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:37.198 [2024-11-17 09:32:27.260964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57552 len:8 PRP1 0x0 PRP2 0x0 00:32:37.198 [2024-11-17 09:32:27.260983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.198 [2024-11-17 09:32:27.261003] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:37.198 [2024-11-17 09:32:27.261020] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:37.198 [2024-11-17 09:32:27.261037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57560 len:8 PRP1 0x0 PRP2 0x0 00:32:37.198 [2024-11-17 09:32:27.261056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.198 [2024-11-17 09:32:27.261075] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:37.198 [2024-11-17 09:32:27.261092] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:37.198 [2024-11-17 09:32:27.261109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57568 len:8 PRP1 0x0 PRP2 0x0 00:32:37.198 [2024-11-17 09:32:27.261141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.198 [2024-11-17 09:32:27.261162] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:37.198 [2024-11-17 09:32:27.261178] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:37.198 [2024-11-17 09:32:27.261196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57576 len:8 PRP1 0x0 PRP2 0x0 00:32:37.198 [2024-11-17 09:32:27.261215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.198 [2024-11-17 09:32:27.261234] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:37.198 [2024-11-17 09:32:27.261251] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:37.198 [2024-11-17 09:32:27.261272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57584 len:8 PRP1 0x0 PRP2 0x0 00:32:37.198 [2024-11-17 09:32:27.261292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.198 [2024-11-17 09:32:27.261311] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:37.198 [2024-11-17 09:32:27.261334] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:37.198 [2024-11-17 09:32:27.261352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57592 len:8 PRP1 0x0 PRP2 0x0 00:32:37.198 [2024-11-17 09:32:27.261393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.198 [2024-11-17 09:32:27.261705] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:32:37.198 [2024-11-17 09:32:27.261780] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:37.198 [2024-11-17 09:32:27.261809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.198 [2024-11-17 09:32:27.261834] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:37.198 [2024-11-17 09:32:27.261855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.198 [2024-11-17 09:32:27.261877] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:37.198 [2024-11-17 09:32:27.261897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.198 [2024-11-17 09:32:27.261919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:37.198 [2024-11-17 09:32:27.261939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.198 [2024-11-17 09:32:27.261966] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:32:37.198 [2024-11-17 09:32:27.262067] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2000 (9): Bad file descriptor 00:32:37.198 [2024-11-17 09:32:27.265857] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:32:37.198 [2024-11-17 09:32:27.420648] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
00:32:37.198 5684.50 IOPS, 22.21 MiB/s [2024-11-17T08:32:42.211Z] 5898.67 IOPS, 23.04 MiB/s [2024-11-17T08:32:42.211Z] 5997.50 IOPS, 23.43 MiB/s [2024-11-17T08:32:42.211Z] [2024-11-17 09:32:30.943697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:12776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.198 [2024-11-17 09:32:30.943754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.198 [2024-11-17 09:32:30.943796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.198 [2024-11-17 09:32:30.943819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.199 [2024-11-17 09:32:30.943846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:12792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.199 [2024-11-17 09:32:30.943868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.199 [2024-11-17 09:32:30.943891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:12800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.199 [2024-11-17 09:32:30.943918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.199 [2024-11-17 09:32:30.943941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:12808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.199 [2024-11-17 09:32:30.943964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.199 [2024-11-17 09:32:30.943987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:12816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.199 [2024-11-17 09:32:30.944007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.199 [2024-11-17 09:32:30.944030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:12824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.199 [2024-11-17 09:32:30.944050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.199 [2024-11-17 09:32:30.944074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.199 [2024-11-17 09:32:30.944095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.199 [2024-11-17 09:32:30.944118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:12840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.199 [2024-11-17 09:32:30.944139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.199 [2024-11-17 09:32:30.944163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:12848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.199 [2024-11-17 09:32:30.944184] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.199 [2024-11-17 09:32:30.944206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:12856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.199 [2024-11-17 09:32:30.944227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.199 [2024-11-17 09:32:30.944249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:12864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.199 [2024-11-17 09:32:30.944270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.199 [2024-11-17 09:32:30.944293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:12872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.199 [2024-11-17 09:32:30.944313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.199 [2024-11-17 09:32:30.944336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:12880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.199 [2024-11-17 09:32:30.944356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.199 [2024-11-17 09:32:30.944404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:12888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.199 [2024-11-17 09:32:30.944428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.199 [2024-11-17 09:32:30.944451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:12896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.199 [2024-11-17 09:32:30.944472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.199 [2024-11-17 09:32:30.944499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:12904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.199 [2024-11-17 09:32:30.944522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.199 [2024-11-17 09:32:30.944546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:12912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.199 [2024-11-17 09:32:30.944567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.199 [2024-11-17 09:32:30.944590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.199 [2024-11-17 09:32:30.944612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.199 [2024-11-17 09:32:30.944635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:12928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.199 [2024-11-17 09:32:30.944656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.199 [2024-11-17 09:32:30.944680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:12936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.199 [2024-11-17 09:32:30.944718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.199 [2024-11-17 09:32:30.944742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.199 [2024-11-17 09:32:30.944762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.199 [2024-11-17 09:32:30.944785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:12952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.199 [2024-11-17 09:32:30.944805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.199 [2024-11-17 09:32:30.944828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:12960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.199 [2024-11-17 09:32:30.944849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.199 [2024-11-17 09:32:30.944872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:12968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.199 [2024-11-17 09:32:30.944893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.199 [2024-11-17 09:32:30.944916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:12976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.199 [2024-11-17 09:32:30.944938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.199 [2024-11-17 09:32:30.944962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:12984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.199 [2024-11-17 09:32:30.944983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.199 [2024-11-17 09:32:30.945006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.199 [2024-11-17 09:32:30.945026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.200 [2024-11-17 09:32:30.945049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:13000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.200 [2024-11-17 09:32:30.945074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.200 [2024-11-17 09:32:30.945098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:13008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.200 [2024-11-17 09:32:30.945119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.200 [2024-11-17 09:32:30.945142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:13016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.200 [2024-11-17 09:32:30.945163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.200 [2024-11-17 09:32:30.945187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:13024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.200 [2024-11-17 09:32:30.945207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.200 [2024-11-17 09:32:30.945230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:13032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.200 [2024-11-17 09:32:30.945257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.200 [2024-11-17 09:32:30.945281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:13040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.200 [2024-11-17 09:32:30.945302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.200 [2024-11-17 09:32:30.945326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.200 [2024-11-17 09:32:30.945347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.200 [2024-11-17 09:32:30.945392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.200 [2024-11-17 09:32:30.945429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.200 [2024-11-17 09:32:30.945459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:13064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.200 [2024-11-17 09:32:30.945482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.200 [2024-11-17 09:32:30.945506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:13072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.200 [2024-11-17 09:32:30.945527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.200 [2024-11-17 09:32:30.945551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:13080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.200 [2024-11-17 09:32:30.945573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.200 [2024-11-17 09:32:30.945597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.200 [2024-11-17 09:32:30.945617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:32:37.200 [2024-11-17 09:32:30.945641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:13096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.200 [2024-11-17 09:32:30.945662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.200 [2024-11-17 09:32:30.945699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:13104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.200 [2024-11-17 09:32:30.945730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.200 [2024-11-17 09:32:30.945754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.200 [2024-11-17 09:32:30.945774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.200 [2024-11-17 09:32:30.945796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:13120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.200 [2024-11-17 09:32:30.945817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.200 [2024-11-17 09:32:30.945840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.200 [2024-11-17 09:32:30.945861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.200 [2024-11-17 09:32:30.945884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:13144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.200 [2024-11-17 09:32:30.945904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.200 [2024-11-17 09:32:30.945926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:13152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.200 [2024-11-17 09:32:30.945946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.200 [2024-11-17 09:32:30.945969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:13160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.200 [2024-11-17 09:32:30.945989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.200 [2024-11-17 09:32:30.946030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:13168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.200 [2024-11-17 09:32:30.946056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.200 [2024-11-17 09:32:30.946098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:13176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.200 [2024-11-17 09:32:30.946119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.200 [2024-11-17 09:32:30.946142] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:13184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.200 [2024-11-17 09:32:30.946164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.200 [2024-11-17 09:32:30.946187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:13192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.200 [2024-11-17 09:32:30.946208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.200 [2024-11-17 09:32:30.946232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:13200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.200 [2024-11-17 09:32:30.946253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.200 [2024-11-17 09:32:30.946291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:13208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.200 [2024-11-17 09:32:30.946313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.200 [2024-11-17 09:32:30.946341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:13216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.200 [2024-11-17 09:32:30.946363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.200 [2024-11-17 09:32:30.946417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:13224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.200 [2024-11-17 09:32:30.946440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.200 [2024-11-17 09:32:30.946464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:13232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.200 [2024-11-17 09:32:30.946485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.200 [2024-11-17 09:32:30.946509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:13240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.200 [2024-11-17 09:32:30.946531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.200 [2024-11-17 09:32:30.946555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.200 [2024-11-17 09:32:30.946577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.200 [2024-11-17 09:32:30.946601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:13256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.200 [2024-11-17 09:32:30.946623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.201 [2024-11-17 09:32:30.946646] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:13264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.201 [2024-11-17 09:32:30.946668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.201 [2024-11-17 09:32:30.946708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:13272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.201 [2024-11-17 09:32:30.946730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.201 [2024-11-17 09:32:30.946754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:13280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.201 [2024-11-17 09:32:30.946775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.201 [2024-11-17 09:32:30.946797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:13288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.201 [2024-11-17 09:32:30.946819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.201 [2024-11-17 09:32:30.946842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:13296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.201 [2024-11-17 09:32:30.946865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.201 [2024-11-17 09:32:30.946889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:13304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.201 [2024-11-17 09:32:30.946910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.201 [2024-11-17 09:32:30.946933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:13312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.201 [2024-11-17 09:32:30.946959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.201 [2024-11-17 09:32:30.946983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:13320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.201 [2024-11-17 09:32:30.947005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.201 [2024-11-17 09:32:30.947028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:13328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.201 [2024-11-17 09:32:30.947059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.201 [2024-11-17 09:32:30.947083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:13336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.201 [2024-11-17 09:32:30.947105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.201 [2024-11-17 09:32:30.947128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:13344 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.201 [2024-11-17 09:32:30.947150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.201 [2024-11-17 09:32:30.947173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:13352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.201 [2024-11-17 09:32:30.947194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.201 [2024-11-17 09:32:30.947218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:13360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.201 [2024-11-17 09:32:30.947240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.201 [2024-11-17 09:32:30.947263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:13368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.201 [2024-11-17 09:32:30.947285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.201 [2024-11-17 09:32:30.947308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:13376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.201 [2024-11-17 09:32:30.947329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.201 [2024-11-17 09:32:30.947353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:13384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.201 [2024-11-17 09:32:30.947412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.201 [2024-11-17 09:32:30.947439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:13392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.201 [2024-11-17 09:32:30.947462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.201 [2024-11-17 09:32:30.947486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:13400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.201 [2024-11-17 09:32:30.947508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.201 [2024-11-17 09:32:30.947532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.201 [2024-11-17 09:32:30.947554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.201 [2024-11-17 09:32:30.947583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:13416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.201 [2024-11-17 09:32:30.947606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.201 [2024-11-17 09:32:30.947630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:13424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.201 
[2024-11-17 09:32:30.947653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.201 [2024-11-17 09:32:30.947677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:13432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.201 [2024-11-17 09:32:30.947715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.201 [2024-11-17 09:32:30.947739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:13440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.201 [2024-11-17 09:32:30.947760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.201 [2024-11-17 09:32:30.947783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:13128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.201 [2024-11-17 09:32:30.947804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.201 [2024-11-17 09:32:30.947828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:13448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.201 [2024-11-17 09:32:30.947849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.201 [2024-11-17 09:32:30.947872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:13456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.201 [2024-11-17 09:32:30.947894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.201 [2024-11-17 09:32:30.947918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:13464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.201 [2024-11-17 09:32:30.947939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.201 [2024-11-17 09:32:30.947962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:13472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.201 [2024-11-17 09:32:30.947983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.201 [2024-11-17 09:32:30.948007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:13480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.201 [2024-11-17 09:32:30.948028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.201 [2024-11-17 09:32:30.948051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:13488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.202 [2024-11-17 09:32:30.948072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.202 [2024-11-17 09:32:30.948095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:13496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.202 [2024-11-17 09:32:30.948116] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.202 [2024-11-17 09:32:30.948139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.202 [2024-11-17 09:32:30.948160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.202 [2024-11-17 09:32:30.948188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:13512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.202 [2024-11-17 09:32:30.948210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.202 [2024-11-17 09:32:30.948233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:13520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.202 [2024-11-17 09:32:30.948254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.202 [2024-11-17 09:32:30.948277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:13528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.202 [2024-11-17 09:32:30.948298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.202 [2024-11-17 09:32:30.948321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:13536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.202 [2024-11-17 09:32:30.948342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.202 [2024-11-17 09:32:30.948392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:13544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.202 [2024-11-17 09:32:30.948416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.202 [2024-11-17 09:32:30.948441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:13552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.202 [2024-11-17 09:32:30.948463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.202 [2024-11-17 09:32:30.948487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:13560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.202 [2024-11-17 09:32:30.948509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.202 [2024-11-17 09:32:30.948533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:13568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.202 [2024-11-17 09:32:30.948555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.202 [2024-11-17 09:32:30.948579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:13576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.202 [2024-11-17 09:32:30.948601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.202 [2024-11-17 09:32:30.948625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:13584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.202 [2024-11-17 09:32:30.948647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.202 [2024-11-17 09:32:30.948685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:13592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.202 [2024-11-17 09:32:30.948713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.202 [2024-11-17 09:32:30.948738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:13600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.202 [2024-11-17 09:32:30.948759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.202 [2024-11-17 09:32:30.948783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:13608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.202 [2024-11-17 09:32:30.948808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.202 [2024-11-17 09:32:30.948833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:13616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.202 [2024-11-17 09:32:30.948854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.202 [2024-11-17 09:32:30.948877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:13624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.202 [2024-11-17 09:32:30.948898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.202 [2024-11-17 09:32:30.948921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.202 [2024-11-17 09:32:30.948942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.202 [2024-11-17 09:32:30.948966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:13640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.202 [2024-11-17 09:32:30.948986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.202 [2024-11-17 09:32:30.949010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:13648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.202 [2024-11-17 09:32:30.949031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.202 [2024-11-17 09:32:30.949054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.202 [2024-11-17 09:32:30.949075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:32:37.202 [2024-11-17 09:32:30.949098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:13664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.202 [2024-11-17 09:32:30.949119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.202 [2024-11-17 09:32:30.949156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:13672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.202 [2024-11-17 09:32:30.949179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.202 [2024-11-17 09:32:30.949202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:13680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.202 [2024-11-17 09:32:30.949223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.202 [2024-11-17 09:32:30.949247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:13688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.202 [2024-11-17 09:32:30.949268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.202 [2024-11-17 09:32:30.949291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:13696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.202 [2024-11-17 09:32:30.949313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.202 [2024-11-17 09:32:30.949361] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:37.202 [2024-11-17 09:32:30.949418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13704 len:8 PRP1 0x0 PRP2 0x0 00:32:37.202 [2024-11-17 09:32:30.949442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.202 [2024-11-17 09:32:30.949474] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:37.202 [2024-11-17 09:32:30.949494] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:37.202 [2024-11-17 09:32:30.949521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13712 len:8 PRP1 0x0 PRP2 0x0 00:32:37.202 [2024-11-17 09:32:30.949544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.203 [2024-11-17 09:32:30.949565] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:37.203 [2024-11-17 09:32:30.949583] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:37.203 [2024-11-17 09:32:30.949600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13720 len:8 PRP1 0x0 PRP2 0x0 00:32:37.203 [2024-11-17 09:32:30.949620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.203 [2024-11-17 09:32:30.949640] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:37.203 [2024-11-17 09:32:30.949657] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:37.203 [2024-11-17 09:32:30.949690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13728 len:8 PRP1 0x0 PRP2 0x0 00:32:37.203 [2024-11-17 09:32:30.949709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.203 [2024-11-17 09:32:30.949729] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:37.203 [2024-11-17 09:32:30.949746] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:37.203 [2024-11-17 09:32:30.949763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13736 len:8 PRP1 0x0 PRP2 0x0 00:32:37.203 [2024-11-17 09:32:30.949782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.203 [2024-11-17 09:32:30.949802] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:37.203 [2024-11-17 09:32:30.949818] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:37.203 [2024-11-17 09:32:30.949835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13744 len:8 PRP1 0x0 PRP2 0x0 00:32:37.203 [2024-11-17 09:32:30.949854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.203 [2024-11-17 09:32:30.949874] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:37.203 [2024-11-17 09:32:30.949897] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:37.203 [2024-11-17 09:32:30.949914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13752 len:8 PRP1 0x0 PRP2 0x0 00:32:37.203 [2024-11-17 09:32:30.949933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.203 [2024-11-17 09:32:30.949952] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:37.203 [2024-11-17 09:32:30.949969] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:37.203 [2024-11-17 09:32:30.949986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13760 len:8 PRP1 0x0 PRP2 0x0 00:32:37.203 [2024-11-17 09:32:30.950004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.203 [2024-11-17 09:32:30.950023] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:37.203 [2024-11-17 09:32:30.950040] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:37.203 [2024-11-17 09:32:30.950057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13768 len:8 PRP1 0x0 PRP2 0x0 00:32:37.203 [2024-11-17 09:32:30.950080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.203 [2024-11-17 09:32:30.950100] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:37.203 [2024-11-17 09:32:30.950116] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:32:37.203 [2024-11-17 09:32:30.950138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13776 len:8 PRP1 0x0 PRP2 0x0 00:32:37.203 [2024-11-17 09:32:30.950158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.203 [2024-11-17 09:32:30.950177] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:37.203 [2024-11-17 09:32:30.950194] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:37.203 [2024-11-17 09:32:30.950210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13784 len:8 PRP1 0x0 PRP2 0x0 00:32:37.203 [2024-11-17 09:32:30.950229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.203 [2024-11-17 09:32:30.950248] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:37.203 [2024-11-17 09:32:30.950264] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:37.203 [2024-11-17 09:32:30.950281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13792 len:8 PRP1 0x0 PRP2 0x0 00:32:37.203 [2024-11-17 09:32:30.950300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.203 [2024-11-17 09:32:30.950600] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:32:37.203 [2024-11-17 09:32:30.950659] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:37.203 [2024-11-17 09:32:30.950686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.203 [2024-11-17 09:32:30.950710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:37.203 [2024-11-17 09:32:30.950731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.203 [2024-11-17 09:32:30.950752] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:37.203 [2024-11-17 09:32:30.950773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.203 [2024-11-17 09:32:30.950795] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:37.203 [2024-11-17 09:32:30.950815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.203 [2024-11-17 09:32:30.950841] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 
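[editor's note] The block above is the expected abort storm: the target-side failover from 10.0.0.2:4421 to 10.0.0.2:4422 deletes the submission queue, so every queued WRITE/READ is completed with "ABORTED - SQ DELETION" before the controller is marked failed and reset. A minimal sketch (hypothetical helper, not part of the SPDK test suite; names like summarize() are made up for illustration) for condensing this kind of log region into counts instead of reading it line by line:

# Hypothetical post-processing helper for the nvme_qpair.c messages above.
import re
from collections import Counter

CMD_RE = re.compile(r"nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE) sqid:(\d+) cid:(\d+)")
ABORT_RE = re.compile(r"ABORTED - SQ DELETION")
FAILOVER_RE = re.compile(r"Start failover from (\S+) to (\S+)")

def summarize(log_text: str) -> None:
    # Count printed I/O commands by opcode and the aborted completions paired with them.
    opcodes = Counter(m.group(1) for m in CMD_RE.finditer(log_text))
    aborts = len(ABORT_RE.findall(log_text))
    for m in FAILOVER_RE.finditer(log_text):
        print(f"failover: {m.group(1)} -> {m.group(2)}")
    print(f"printed commands: {dict(opcodes)}, aborted completions: {aborts}")

# usage: summarize(open("build.log").read())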
00:32:37.203 [2024-11-17 09:32:30.950925] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2000 (9): Bad file descriptor 00:32:37.203 [2024-11-17 09:32:30.954763] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:32:37.203 [2024-11-17 09:32:31.070222] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 00:32:37.203 5877.80 IOPS, 22.96 MiB/s [2024-11-17T08:32:42.216Z] 5952.50 IOPS, 23.25 MiB/s [2024-11-17T08:32:42.216Z] 6005.00 IOPS, 23.46 MiB/s [2024-11-17T08:32:42.216Z] 6040.50 IOPS, 23.60 MiB/s [2024-11-17T08:32:42.216Z] 6076.44 IOPS, 23.74 MiB/s [2024-11-17T08:32:42.216Z] [2024-11-17 09:32:35.548356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:9288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.203 [2024-11-17 09:32:35.548460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.203 [2024-11-17 09:32:35.548514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:9296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.203 [2024-11-17 09:32:35.548540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.203 [2024-11-17 09:32:35.548567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:9304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.203 [2024-11-17 09:32:35.548590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.203 [2024-11-17 09:32:35.548614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:9312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.203 [2024-11-17 09:32:35.548636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.203 [2024-11-17 09:32:35.548660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:9320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.203 [2024-11-17 09:32:35.548681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.203 [2024-11-17 09:32:35.548720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:9328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.203 [2024-11-17 09:32:35.548741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.203 [2024-11-17 09:32:35.548764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:9336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.203 [2024-11-17 09:32:35.548785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.203 [2024-11-17 09:32:35.548808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:9344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.203 [2024-11-17 09:32:35.548829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.204 
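[editor's note] The interleaved throughput samples above (5877.80 IOPS / 22.96 MiB/s and following) are consistent with the 4 KiB I/O size used throughout this run (len:8 x 512 B sectors, i.e. the 0x1000 payloads in the commands). A quick arithmetic check, assuming that I/O size:

IO_SIZE = 8 * 512  # bytes per I/O, matching "len:8" in the commands above
for iops, logged_mibs in [(5877.80, 22.96), (5952.50, 23.25), (6005.00, 23.46),
                          (6040.50, 23.60), (6076.44, 23.74)]:
    derived = iops * IO_SIZE / 2**20  # IOPS * 4096 B expressed in MiB/s
    print(f"{iops:8.2f} IOPS -> {derived:5.2f} MiB/s (logged {logged_mibs})")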
[2024-11-17 09:32:35.548852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:9352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.204 [2024-11-17 09:32:35.548874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.204 [2024-11-17 09:32:35.548897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:9360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.204 [2024-11-17 09:32:35.548918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.204 [2024-11-17 09:32:35.548941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:9368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.204 [2024-11-17 09:32:35.548962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.204 [2024-11-17 09:32:35.548984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:9376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.204 [2024-11-17 09:32:35.549005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.204 [2024-11-17 09:32:35.549028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:9384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.204 [2024-11-17 09:32:35.549049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.204 [2024-11-17 09:32:35.549077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.204 [2024-11-17 09:32:35.549099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.204 [2024-11-17 09:32:35.549122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:9400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.204 [2024-11-17 09:32:35.549143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.204 [2024-11-17 09:32:35.549166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:9408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.204 [2024-11-17 09:32:35.549187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.204 [2024-11-17 09:32:35.549209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:9416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.204 [2024-11-17 09:32:35.549230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.204 [2024-11-17 09:32:35.549253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:9424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.204 [2024-11-17 09:32:35.549274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.204 [2024-11-17 09:32:35.549297] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:9432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.204 [2024-11-17 09:32:35.549317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.204 [2024-11-17 09:32:35.549339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:9440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.204 [2024-11-17 09:32:35.549360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.204 [2024-11-17 09:32:35.549418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:9448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.204 [2024-11-17 09:32:35.549444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.204 [2024-11-17 09:32:35.549468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:9456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.204 [2024-11-17 09:32:35.549490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.204 [2024-11-17 09:32:35.549513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:9464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.204 [2024-11-17 09:32:35.549535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.204 [2024-11-17 09:32:35.549559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:9472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.204 [2024-11-17 09:32:35.549581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.204 [2024-11-17 09:32:35.549604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:9480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.204 [2024-11-17 09:32:35.549626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.204 [2024-11-17 09:32:35.549649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:9488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.204 [2024-11-17 09:32:35.549676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.204 [2024-11-17 09:32:35.549700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:9496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.204 [2024-11-17 09:32:35.549722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.204 [2024-11-17 09:32:35.549745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:9504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.204 [2024-11-17 09:32:35.549767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.204 [2024-11-17 09:32:35.549790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:9512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.204 [2024-11-17 09:32:35.549812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.204 [2024-11-17 09:32:35.549835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:9520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.204 [2024-11-17 09:32:35.549857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.204 [2024-11-17 09:32:35.549899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:9528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.204 [2024-11-17 09:32:35.549922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.204 [2024-11-17 09:32:35.549946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:9536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.204 [2024-11-17 09:32:35.549968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.204 [2024-11-17 09:32:35.549991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:9544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.204 [2024-11-17 09:32:35.550013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.204 [2024-11-17 09:32:35.550036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:9552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.204 [2024-11-17 09:32:35.550058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.204 [2024-11-17 09:32:35.550081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:9560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.204 [2024-11-17 09:32:35.550103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.204 [2024-11-17 09:32:35.550126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:9568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.204 [2024-11-17 09:32:35.550147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.204 [2024-11-17 09:32:35.550171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:9576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.204 [2024-11-17 09:32:35.550193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.205 [2024-11-17 09:32:35.550216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.205 [2024-11-17 09:32:35.550238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.205 [2024-11-17 09:32:35.550261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:9592 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:32:37.205 [2024-11-17 09:32:35.550297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.205 [2024-11-17 09:32:35.550323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:9600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.205 [2024-11-17 09:32:35.550345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.205 [2024-11-17 09:32:35.550376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:9608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.205 [2024-11-17 09:32:35.550400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.205 [2024-11-17 09:32:35.550424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:9616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.205 [2024-11-17 09:32:35.550446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.205 [2024-11-17 09:32:35.550469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:9624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.205 [2024-11-17 09:32:35.550490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.205 [2024-11-17 09:32:35.550521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.205 [2024-11-17 09:32:35.550543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.205 [2024-11-17 09:32:35.550567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:9640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.205 [2024-11-17 09:32:35.550588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.205 [2024-11-17 09:32:35.550611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:9648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.205 [2024-11-17 09:32:35.550633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.205 [2024-11-17 09:32:35.550656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.205 [2024-11-17 09:32:35.550678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.205 [2024-11-17 09:32:35.550701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:9664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.205 [2024-11-17 09:32:35.550723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.205 [2024-11-17 09:32:35.550746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:9672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.205 [2024-11-17 09:32:35.550767] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.205 [2024-11-17 09:32:35.550790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.205 [2024-11-17 09:32:35.550811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.205 [2024-11-17 09:32:35.550835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:9688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.205 [2024-11-17 09:32:35.550856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.205 [2024-11-17 09:32:35.550885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:9240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.205 [2024-11-17 09:32:35.550907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.205 [2024-11-17 09:32:35.550931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:9696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.205 [2024-11-17 09:32:35.550953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.205 [2024-11-17 09:32:35.550976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:9704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.205 [2024-11-17 09:32:35.550998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.205 [2024-11-17 09:32:35.551021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:9712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.205 [2024-11-17 09:32:35.551044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.205 [2024-11-17 09:32:35.551068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:9720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.205 [2024-11-17 09:32:35.551089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.205 [2024-11-17 09:32:35.551113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:9728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.205 [2024-11-17 09:32:35.551134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.205 [2024-11-17 09:32:35.551158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:9736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.205 [2024-11-17 09:32:35.551180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.205 [2024-11-17 09:32:35.551203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:9744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.205 [2024-11-17 09:32:35.551225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.205 [2024-11-17 09:32:35.551248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:9752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.205 [2024-11-17 09:32:35.551270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.205 [2024-11-17 09:32:35.551293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:9760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.206 [2024-11-17 09:32:35.551314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.206 [2024-11-17 09:32:35.551337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:9768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.206 [2024-11-17 09:32:35.551359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.206 [2024-11-17 09:32:35.551394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:9776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.206 [2024-11-17 09:32:35.551417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.206 [2024-11-17 09:32:35.551441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:9784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.206 [2024-11-17 09:32:35.551468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.206 [2024-11-17 09:32:35.551494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:9792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.206 [2024-11-17 09:32:35.551516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.206 [2024-11-17 09:32:35.551539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:9800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.206 [2024-11-17 09:32:35.551561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.206 [2024-11-17 09:32:35.551585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:9808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.206 [2024-11-17 09:32:35.551607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.206 [2024-11-17 09:32:35.551631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:9816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.206 [2024-11-17 09:32:35.551652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.206 [2024-11-17 09:32:35.551676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:9824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.206 [2024-11-17 09:32:35.551698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:32:37.206 [2024-11-17 09:32:35.551722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:9832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.206 [2024-11-17 09:32:35.551744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.206 [2024-11-17 09:32:35.551768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:9840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.206 [2024-11-17 09:32:35.551790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.206 [2024-11-17 09:32:35.551813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:9848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.206 [2024-11-17 09:32:35.551835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.206 [2024-11-17 09:32:35.551859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:9856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.206 [2024-11-17 09:32:35.551881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.206 [2024-11-17 09:32:35.551904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:9864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.206 [2024-11-17 09:32:35.551926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.206 [2024-11-17 09:32:35.551950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:9872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.206 [2024-11-17 09:32:35.551972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.206 [2024-11-17 09:32:35.551996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:9880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.206 [2024-11-17 09:32:35.552017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.206 [2024-11-17 09:32:35.552041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:9888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.206 [2024-11-17 09:32:35.552066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.206 [2024-11-17 09:32:35.552091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:9896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.206 [2024-11-17 09:32:35.552113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.206 [2024-11-17 09:32:35.552137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:9904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.206 [2024-11-17 09:32:35.552158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.206 [2024-11-17 09:32:35.552183] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:9912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.206 [2024-11-17 09:32:35.552205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.206 [2024-11-17 09:32:35.552230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:9920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.206 [2024-11-17 09:32:35.552251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.206 [2024-11-17 09:32:35.552275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:9928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.206 [2024-11-17 09:32:35.552297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.206 [2024-11-17 09:32:35.552321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:9936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.206 [2024-11-17 09:32:35.552342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.206 [2024-11-17 09:32:35.552374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:9944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.206 [2024-11-17 09:32:35.552398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.206 [2024-11-17 09:32:35.552422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:9952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.206 [2024-11-17 09:32:35.552445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.206 [2024-11-17 09:32:35.552468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.206 [2024-11-17 09:32:35.552490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.206 [2024-11-17 09:32:35.552515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:9968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.206 [2024-11-17 09:32:35.552536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.206 [2024-11-17 09:32:35.552560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:9976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.206 [2024-11-17 09:32:35.552581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.206 [2024-11-17 09:32:35.552605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:9248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.206 [2024-11-17 09:32:35.552626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.207 [2024-11-17 09:32:35.552655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:53 nsid:1 lba:9256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.207 [2024-11-17 09:32:35.552677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.207 [2024-11-17 09:32:35.552702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:9264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.207 [2024-11-17 09:32:35.552723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.207 [2024-11-17 09:32:35.552747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.207 [2024-11-17 09:32:35.552769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.207 [2024-11-17 09:32:35.552793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:9280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.207 [2024-11-17 09:32:35.552815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.207 [2024-11-17 09:32:35.552839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:9984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.207 [2024-11-17 09:32:35.552860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.207 [2024-11-17 09:32:35.552898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:9992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.207 [2024-11-17 09:32:35.552921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.207 [2024-11-17 09:32:35.552945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:10000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.207 [2024-11-17 09:32:35.552966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.207 [2024-11-17 09:32:35.552990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:10008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.207 [2024-11-17 09:32:35.553011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.207 [2024-11-17 09:32:35.553034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:10016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.207 [2024-11-17 09:32:35.553056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.207 [2024-11-17 09:32:35.553080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:10024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.207 [2024-11-17 09:32:35.553101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.207 [2024-11-17 09:32:35.553125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:10032 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:32:37.207 [2024-11-17 09:32:35.553146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.207 [2024-11-17 09:32:35.553171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:10040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.207 [2024-11-17 09:32:35.553192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.207 [2024-11-17 09:32:35.553215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:10048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.207 [2024-11-17 09:32:35.553240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.207 [2024-11-17 09:32:35.553265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:10056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.207 [2024-11-17 09:32:35.553286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.207 [2024-11-17 09:32:35.553310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:10064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.207 [2024-11-17 09:32:35.553331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.207 [2024-11-17 09:32:35.553355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:10072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.207 [2024-11-17 09:32:35.553392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.207 [2024-11-17 09:32:35.553423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:10080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.207 [2024-11-17 09:32:35.553446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.207 [2024-11-17 09:32:35.553470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.207 [2024-11-17 09:32:35.553492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.207 [2024-11-17 09:32:35.553515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:10096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.207 [2024-11-17 09:32:35.553537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.207 [2024-11-17 09:32:35.553562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:10104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.207 [2024-11-17 09:32:35.553584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.207 [2024-11-17 09:32:35.553608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:10112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.207 [2024-11-17 
09:32:35.553637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.207 [2024-11-17 09:32:35.553660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:10120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.207 [2024-11-17 09:32:35.553682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.207 [2024-11-17 09:32:35.553707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:10128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.207 [2024-11-17 09:32:35.553729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.207 [2024-11-17 09:32:35.553752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:10136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.207 [2024-11-17 09:32:35.553783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.207 [2024-11-17 09:32:35.553807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:10144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.207 [2024-11-17 09:32:35.553829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.207 [2024-11-17 09:32:35.553859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:10152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.207 [2024-11-17 09:32:35.553885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.207 [2024-11-17 09:32:35.553910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:10160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.207 [2024-11-17 09:32:35.553932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.207 [2024-11-17 09:32:35.553957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:10168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.207 [2024-11-17 09:32:35.553978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.208 [2024-11-17 09:32:35.554001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:10176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.208 [2024-11-17 09:32:35.554034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.208 [2024-11-17 09:32:35.554057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:10184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.208 [2024-11-17 09:32:35.554079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.208 [2024-11-17 09:32:35.554103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:10192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.208 [2024-11-17 09:32:35.554124] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.208 [2024-11-17 09:32:35.554148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:10200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.208 [2024-11-17 09:32:35.554169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.208 [2024-11-17 09:32:35.554192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:10208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.208 [2024-11-17 09:32:35.554214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.208 [2024-11-17 09:32:35.554238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:10216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.208 [2024-11-17 09:32:35.554259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.208 [2024-11-17 09:32:35.554283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:10224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.208 [2024-11-17 09:32:35.554304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.208 [2024-11-17 09:32:35.554328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:10232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.208 [2024-11-17 09:32:35.554349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.208 [2024-11-17 09:32:35.554403] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:37.208 [2024-11-17 09:32:35.554430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10240 len:8 PRP1 0x0 PRP2 0x0 00:32:37.208 [2024-11-17 09:32:35.554452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.208 [2024-11-17 09:32:35.554479] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:37.208 [2024-11-17 09:32:35.554499] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:37.208 [2024-11-17 09:32:35.554522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10248 len:8 PRP1 0x0 PRP2 0x0 00:32:37.208 [2024-11-17 09:32:35.554543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.208 [2024-11-17 09:32:35.554564] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:37.208 [2024-11-17 09:32:35.554581] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:37.208 [2024-11-17 09:32:35.554599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10256 len:8 PRP1 0x0 PRP2 0x0 00:32:37.208 [2024-11-17 09:32:35.554618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.208 [2024-11-17 09:32:35.554895] 
bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:32:37.208 [2024-11-17 09:32:35.554953] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:37.208 [2024-11-17 09:32:35.554979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.208 [2024-11-17 09:32:35.555003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:37.208 [2024-11-17 09:32:35.555023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.208 [2024-11-17 09:32:35.555044] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:37.208 [2024-11-17 09:32:35.555065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.208 [2024-11-17 09:32:35.555086] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:37.208 [2024-11-17 09:32:35.555106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.208 [2024-11-17 09:32:35.555126] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:32:37.208 [2024-11-17 09:32:35.555193] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2000 (9): Bad file descriptor 00:32:37.208 [2024-11-17 09:32:35.558995] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:32:37.208 [2024-11-17 09:32:35.584143] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
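The abort/reset sequence logged above is what bdev_nvme produces when the active TCP path is torn down: queued I/O completes with ABORTED - SQ DELETION, bdev_nvme_failover_trid moves to the next registered portal, and the controller is reset. A minimal sketch of the RPC calls that set up such a multipath controller, assuming a target already exporting nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420 and a bdevperf RPC socket at /var/tmp/bdevperf.sock (addresses, ports, and flags are taken from the commands visible later in this log; this is not the failover.sh script itself, and paths are relative to the SPDK tree):

  # add two extra portals to the subsystem on the target side
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422

  # register all three paths on one controller in bdevperf, with failover multipath policy
  for port in 4420 4421 4422; do
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  done

  # drop the active path; in-flight I/O is aborted and the bdev fails over to the next portal
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0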
00:32:37.208 6060.70 IOPS, 23.67 MiB/s [2024-11-17T08:32:42.221Z] 6071.64 IOPS, 23.72 MiB/s [2024-11-17T08:32:42.221Z] 6082.08 IOPS, 23.76 MiB/s [2024-11-17T08:32:42.221Z] 6089.38 IOPS, 23.79 MiB/s [2024-11-17T08:32:42.221Z] 6095.71 IOPS, 23.81 MiB/s 00:32:37.208 Latency(us) 00:32:37.208 [2024-11-17T08:32:42.221Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:37.208 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:37.208 Verification LBA range: start 0x0 length 0x4000 00:32:37.208 NVMe0n1 : 15.01 6099.30 23.83 690.20 0.00 18817.15 743.35 22816.24 00:32:37.208 [2024-11-17T08:32:42.221Z] =================================================================================================================== 00:32:37.208 [2024-11-17T08:32:42.221Z] Total : 6099.30 23.83 690.20 0.00 18817.15 743.35 22816.24 00:32:37.208 Received shutdown signal, test time was about 15.000000 seconds 00:32:37.208 00:32:37.208 Latency(us) 00:32:37.208 [2024-11-17T08:32:42.221Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:37.208 [2024-11-17T08:32:42.221Z] =================================================================================================================== 00:32:37.208 [2024-11-17T08:32:42.221Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:37.208 09:32:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:32:37.208 09:32:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:32:37.208 09:32:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:32:37.208 09:32:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3096133 00:32:37.208 09:32:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:32:37.208 09:32:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3096133 /var/tmp/bdevperf.sock 00:32:37.208 09:32:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3096133 ']' 00:32:37.209 09:32:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:37.209 09:32:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:37.209 09:32:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:37.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
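The count check above (grep -c 'Resetting controller successful', count=3) is how the test decides that every path drop actually completed a failover. A rough standalone sketch of the same pattern, assuming bdevperf output is captured to a file named try.txt as in this run and that the NVMe0 paths are attached over RPC as sketched earlier (setup, teardown, and full workspace paths omitted; paths relative to the SPDK tree):

  # start bdevperf idle (-z) so it can be configured over its RPC socket, capture its log
  build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &> try.txt &
  bdevperf_pid=$!

  # ...attach the NVMe0 paths via rpc.py, then drive the verify workload
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

  # each detached path should have produced exactly one successful controller reset
  count=$(grep -c 'Resetting controller successful' try.txt)
  (( count == 3 )) || echo "expected 3 successful resets, got $count" >&2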
00:32:37.209 09:32:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:37.209 09:32:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:38.143 09:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:38.143 09:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:32:38.143 09:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:38.401 [2024-11-17 09:32:43.344539] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:38.401 09:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:32:38.660 [2024-11-17 09:32:43.629452] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:32:38.660 09:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:39.226 NVMe0n1 00:32:39.226 09:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:39.792 00:32:39.792 09:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:40.051 00:32:40.310 09:32:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:40.310 09:32:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:32:40.568 09:32:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:40.827 09:32:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:32:44.108 09:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:44.108 09:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:32:44.108 09:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3096934 00:32:44.108 09:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:44.108 09:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 3096934 00:32:45.043 { 00:32:45.043 "results": [ 00:32:45.043 { 00:32:45.043 "job": "NVMe0n1", 00:32:45.043 "core_mask": "0x1", 
00:32:45.043 "workload": "verify", 00:32:45.043 "status": "finished", 00:32:45.043 "verify_range": { 00:32:45.043 "start": 0, 00:32:45.043 "length": 16384 00:32:45.043 }, 00:32:45.043 "queue_depth": 128, 00:32:45.043 "io_size": 4096, 00:32:45.043 "runtime": 1.018466, 00:32:45.043 "iops": 6187.737244051348, 00:32:45.043 "mibps": 24.17084860957558, 00:32:45.043 "io_failed": 0, 00:32:45.043 "io_timeout": 0, 00:32:45.043 "avg_latency_us": 20592.89143528803, 00:32:45.043 "min_latency_us": 4029.2503703703705, 00:32:45.043 "max_latency_us": 18252.98962962963 00:32:45.043 } 00:32:45.043 ], 00:32:45.043 "core_count": 1 00:32:45.043 } 00:32:45.043 09:32:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:45.043 [2024-11-17 09:32:42.148302] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:32:45.043 [2024-11-17 09:32:42.148497] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3096133 ] 00:32:45.043 [2024-11-17 09:32:42.285432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:45.043 [2024-11-17 09:32:42.411840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:45.043 [2024-11-17 09:32:45.587812] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:32:45.043 [2024-11-17 09:32:45.587966] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:45.043 [2024-11-17 09:32:45.588005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.043 [2024-11-17 09:32:45.588034] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:45.043 [2024-11-17 09:32:45.588056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.043 [2024-11-17 09:32:45.588077] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:45.043 [2024-11-17 09:32:45.588098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.043 [2024-11-17 09:32:45.588119] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:45.043 [2024-11-17 09:32:45.588140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.043 [2024-11-17 09:32:45.588160] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 
00:32:45.043 [2024-11-17 09:32:45.588250] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:32:45.043 [2024-11-17 09:32:45.588306] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2000 (9): Bad file descriptor 00:32:45.043 [2024-11-17 09:32:45.636206] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:32:45.043 Running I/O for 1 seconds... 00:32:45.043 6167.00 IOPS, 24.09 MiB/s 00:32:45.043 Latency(us) 00:32:45.043 [2024-11-17T08:32:50.056Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:45.043 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:45.043 Verification LBA range: start 0x0 length 0x4000 00:32:45.043 NVMe0n1 : 1.02 6187.74 24.17 0.00 0.00 20592.89 4029.25 18252.99 00:32:45.043 [2024-11-17T08:32:50.056Z] =================================================================================================================== 00:32:45.043 [2024-11-17T08:32:50.056Z] Total : 6187.74 24.17 0.00 0.00 20592.89 4029.25 18252.99 00:32:45.043 09:32:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:45.043 09:32:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:32:45.609 09:32:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:45.868 09:32:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:45.868 09:32:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:32:46.126 09:32:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:46.384 09:32:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:32:49.665 09:32:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:49.665 09:32:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:32:49.665 09:32:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 3096133 00:32:49.665 09:32:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3096133 ']' 00:32:49.665 09:32:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3096133 00:32:49.665 09:32:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:32:49.665 09:32:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:49.665 09:32:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3096133 00:32:49.665 09:32:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:49.665 09:32:54 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:49.665 09:32:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3096133' 00:32:49.665 killing process with pid 3096133 00:32:49.665 09:32:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3096133 00:32:49.666 09:32:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3096133 00:32:50.600 09:32:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:32:50.600 09:32:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:50.600 09:32:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:32:50.600 09:32:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:50.600 09:32:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:32:50.600 09:32:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:50.600 09:32:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:32:50.600 09:32:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:50.600 09:32:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:32:50.600 09:32:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:50.600 09:32:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:50.600 rmmod nvme_tcp 00:32:50.858 rmmod nvme_fabrics 00:32:50.858 rmmod nvme_keyring 00:32:50.858 09:32:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:50.858 09:32:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:32:50.858 09:32:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:32:50.858 09:32:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 3093722 ']' 00:32:50.858 09:32:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 3093722 00:32:50.858 09:32:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3093722 ']' 00:32:50.858 09:32:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3093722 00:32:50.858 09:32:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:32:50.858 09:32:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:50.858 09:32:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3093722 00:32:50.858 09:32:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:50.858 09:32:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:50.858 09:32:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3093722' 00:32:50.858 killing process with pid 3093722 00:32:50.858 09:32:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3093722 00:32:50.858 09:32:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3093722 00:32:52.286 09:32:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso 
']' 00:32:52.286 09:32:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:52.286 09:32:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:52.286 09:32:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:32:52.286 09:32:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:32:52.286 09:32:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:52.286 09:32:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:32:52.286 09:32:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:52.286 09:32:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:52.286 09:32:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:52.286 09:32:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:52.286 09:32:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:54.189 09:32:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:54.189 00:32:54.189 real 0m40.415s 00:32:54.189 user 2m22.321s 00:32:54.189 sys 0m6.306s 00:32:54.189 09:32:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:54.189 09:32:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:54.189 ************************************ 00:32:54.189 END TEST nvmf_failover 00:32:54.189 ************************************ 00:32:54.189 09:32:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:32:54.189 09:32:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:54.189 09:32:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:54.189 09:32:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.189 ************************************ 00:32:54.189 START TEST nvmf_host_discovery 00:32:54.189 ************************************ 00:32:54.189 09:32:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:32:54.189 * Looking for test storage... 
00:32:54.189 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:54.189 09:32:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:54.189 09:32:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:32:54.189 09:32:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:54.189 09:32:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:54.189 09:32:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:54.189 09:32:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:54.189 09:32:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:54.189 09:32:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:32:54.189 09:32:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:32:54.189 09:32:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:32:54.189 09:32:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:32:54.189 09:32:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:32:54.189 09:32:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:32:54.189 09:32:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:32:54.189 09:32:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:54.189 09:32:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:32:54.189 09:32:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:32:54.189 09:32:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:54.189 09:32:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:54.189 09:32:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:32:54.189 09:32:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:32:54.189 09:32:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:54.190 09:32:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:32:54.190 09:32:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:32:54.190 09:32:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:32:54.190 09:32:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:32:54.190 09:32:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:54.190 09:32:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:32:54.190 09:32:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:32:54.190 09:32:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:54.190 09:32:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:54.190 09:32:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:32:54.190 09:32:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:54.190 09:32:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:54.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:54.190 --rc genhtml_branch_coverage=1 00:32:54.190 --rc genhtml_function_coverage=1 00:32:54.190 --rc genhtml_legend=1 00:32:54.190 --rc geninfo_all_blocks=1 00:32:54.190 --rc geninfo_unexecuted_blocks=1 00:32:54.190 00:32:54.190 ' 00:32:54.190 09:32:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:54.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:54.190 --rc genhtml_branch_coverage=1 00:32:54.190 --rc genhtml_function_coverage=1 00:32:54.190 --rc genhtml_legend=1 00:32:54.190 --rc geninfo_all_blocks=1 00:32:54.190 --rc geninfo_unexecuted_blocks=1 00:32:54.190 00:32:54.190 ' 00:32:54.190 09:32:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:54.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:54.190 --rc genhtml_branch_coverage=1 00:32:54.190 --rc genhtml_function_coverage=1 00:32:54.190 --rc genhtml_legend=1 00:32:54.190 --rc geninfo_all_blocks=1 00:32:54.190 --rc geninfo_unexecuted_blocks=1 00:32:54.190 00:32:54.190 ' 00:32:54.190 09:32:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:54.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:54.190 --rc genhtml_branch_coverage=1 00:32:54.190 --rc genhtml_function_coverage=1 00:32:54.190 --rc genhtml_legend=1 00:32:54.190 --rc geninfo_all_blocks=1 00:32:54.190 --rc geninfo_unexecuted_blocks=1 00:32:54.190 00:32:54.190 ' 00:32:54.190 09:32:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:54.190 09:32:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:32:54.190 09:32:59 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:54.190 09:32:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:54.190 09:32:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:54.190 09:32:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:54.190 09:32:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:54.190 09:32:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:54.190 09:32:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:54.190 09:32:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:54.190 09:32:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:54.190 09:32:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:54.190 09:32:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:54.190 09:32:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:54.190 09:32:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:54.190 09:32:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:54.190 09:32:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:54.190 09:32:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:54.190 09:32:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:54.190 09:32:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:32:54.190 09:32:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:54.190 09:32:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:54.190 09:32:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:54.190 09:32:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:54.190 09:32:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:54.190 09:32:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:54.190 09:32:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:32:54.190 09:32:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:54.190 09:32:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:32:54.190 09:32:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:54.190 09:32:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:54.190 09:32:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:54.190 09:32:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:54.190 09:32:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:54.190 09:32:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:54.190 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:54.190 09:32:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:54.190 09:32:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:54.190 09:32:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:54.190 09:32:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:32:54.190 09:32:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:32:54.190 09:32:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:32:54.190 09:32:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:32:54.190 09:32:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:32:54.190 09:32:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:32:54.190 09:32:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:32:54.190 09:32:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:54.190 09:32:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:54.190 09:32:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:54.190 09:32:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:54.190 09:32:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:54.190 09:32:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:54.190 09:32:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:54.190 09:32:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:54.190 09:32:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:54.190 09:32:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:54.190 09:32:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:32:54.190 09:32:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:56.721 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:56.721 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:32:56.721 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:56.721 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:56.721 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:56.721 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:56.721 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:56.721 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:32:56.721 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:56.721 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:32:56.721 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:32:56.721 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:32:56.721 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:32:56.721 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:32:56.721 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:32:56.721 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:56.721 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:56.721 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:56.721 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:56.721 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:56.721 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:56.721 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:56.721 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:56.721 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:56.721 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:56.721 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:56.721 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:56.721 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:56.721 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:56.721 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:56.721 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:56.721 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:56.721 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:56.721 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:56.721 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:56.721 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:56.721 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:56.721 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:56.721 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:56.721 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:56.721 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:56.721 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:56.721 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:56.721 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:56.721 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:56.721 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:56.721 09:33:01 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:56.721 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:56.721 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:56.721 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:56.721 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:56.721 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:56.721 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:56.721 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:56.721 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:56.721 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:56.721 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:56.721 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:56.721 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:56.721 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:56.721 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:56.721 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:56.721 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:56.721 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:56.721 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:56.721 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:56.721 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:56.721 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:56.721 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:56.721 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:56.721 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:56.721 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:56.721 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:56.721 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:32:56.721 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:56.721 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:56.721 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:56.721 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:56.721 
09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:56.721 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:56.721 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:56.721 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:56.721 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:56.721 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:56.721 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:56.721 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:56.721 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:56.721 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:56.721 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:56.721 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:56.721 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:56.721 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:56.721 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:56.721 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:56.721 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:56.721 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:56.721 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:56.721 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:56.721 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:56.722 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:56.722 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:56.722 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:32:56.722 00:32:56.722 --- 10.0.0.2 ping statistics --- 00:32:56.722 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:56.722 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:32:56.722 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:56.722 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:56.722 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:32:56.722 00:32:56.722 --- 10.0.0.1 ping statistics --- 00:32:56.722 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:56.722 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:32:56.722 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:56.722 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:32:56.722 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:56.722 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:56.722 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:56.722 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:56.722 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:56.722 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:56.722 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:56.722 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:32:56.722 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:56.722 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:56.722 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:56.722 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=3099885 00:32:56.722 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:32:56.722 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 3099885 00:32:56.722 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 3099885 ']' 00:32:56.722 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:56.722 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:56.722 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:56.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:56.722 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:56.722 09:33:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:56.722 [2024-11-17 09:33:01.457464] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
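The nvmf_tcp_init block traced above builds the test topology out of the two e810 ports found earlier: cvl_0_0 is moved into a network namespace (cvl_0_0_ns_spdk) and given 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator side with 10.0.0.1, port 4420 is opened in iptables, and one ping in each direction confirms connectivity before nvmf_tgt is started inside the namespace. A minimal standalone sketch of the same setup, with the interface and namespace names taken from this log (they will differ on other hosts), looks like:

  # Hedged sketch of the namespace topology built by nvmf_tcp_init above; requires root.
  # Interface/namespace names are the ones printed in this run.
  TGT_IF=cvl_0_0            # becomes the target-side port inside the namespace
  INI_IF=cvl_0_1            # stays in the root namespace as the initiator port
  NS=cvl_0_0_ns_spdk

  ip -4 addr flush "$TGT_IF"
  ip -4 addr flush "$INI_IF"
  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"
  ip addr add 10.0.0.1/24 dev "$INI_IF"
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                      # root namespace -> target namespace
  ip netns exec "$NS" ping -c 1 10.0.0.1  # target namespace -> root namespace

Because the target application is launched inside that namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0x2 in the trace), every target-side command later in the log is prefixed with the NVMF_TARGET_NS_CMD wrapper.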
00:32:56.722 [2024-11-17 09:33:01.457649] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:56.722 [2024-11-17 09:33:01.611601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:56.980 [2024-11-17 09:33:01.749407] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:56.980 [2024-11-17 09:33:01.749480] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:56.980 [2024-11-17 09:33:01.749506] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:56.980 [2024-11-17 09:33:01.749529] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:56.980 [2024-11-17 09:33:01.749548] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:56.980 [2024-11-17 09:33:01.751207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:57.547 09:33:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:57.547 09:33:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:32:57.547 09:33:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:57.547 09:33:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:57.547 09:33:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:57.547 09:33:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:57.547 09:33:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:57.547 09:33:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.547 09:33:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:57.547 [2024-11-17 09:33:02.438057] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:57.547 09:33:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.547 09:33:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:32:57.547 09:33:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.547 09:33:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:57.547 [2024-11-17 09:33:02.446342] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:32:57.547 09:33:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.547 09:33:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:32:57.547 09:33:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.547 09:33:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:57.547 null0 00:32:57.547 09:33:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.547 09:33:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:32:57.547 09:33:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.547 09:33:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:57.547 null1 00:32:57.547 09:33:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.547 09:33:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:32:57.547 09:33:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.547 09:33:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:57.547 09:33:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.547 09:33:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=3100065 00:32:57.547 09:33:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:32:57.547 09:33:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 3100065 /tmp/host.sock 00:32:57.547 09:33:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 3100065 ']' 00:32:57.547 09:33:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:32:57.547 09:33:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:57.547 09:33:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:32:57.547 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:32:57.547 09:33:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:57.547 09:33:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:57.806 [2024-11-17 09:33:02.562757] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
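With the target up on its default RPC socket (/var/tmp/spdk.sock), the trace above creates the TCP transport, exposes the well-known discovery subsystem on port 8009, and creates two null bdevs; a second nvmf_tgt is then launched with -m 0x1 -r /tmp/host.sock to act as the discovery-side host. A hedged sketch of the target-side RPCs, issued with scripts/rpc.py from an SPDK checkout instead of the suite's rpc_cmd wrapper (the script path is an assumption; the method names and arguments are the ones in the trace):

  # Target-side bring-up, mirroring discovery.sh@32-37 above.
  RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"   # default target RPC socket in this run

  # TCP transport with the same options used above, then the discovery listener on 8009.
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009

  # Two null bdevs (512-byte blocks) that later back the subsystem's namespaces.
  $RPC bdev_null_create null0 1000 512
  $RPC bdev_null_create null1 1000 512
  $RPC bdev_wait_for_examine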
00:32:57.806 [2024-11-17 09:33:02.562898] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3100065 ] 00:32:57.806 [2024-11-17 09:33:02.704072] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:58.064 [2024-11-17 09:33:02.840929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:58.631 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:58.631 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:32:58.631 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:58.631 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:32:58.631 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.631 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:58.631 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.631 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:32:58.631 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.631 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:58.631 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.631 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:32:58.631 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:32:58.631 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:58.631 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.631 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:58.631 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:58.631 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:58.631 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:58.631 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.631 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:32:58.631 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:32:58.631 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:58.631 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.631 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:58.631 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:32:58.631 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:58.631 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:58.631 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.631 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:32:58.631 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:32:58.631 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.631 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:58.890 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.890 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:32:58.890 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:58.890 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.890 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:58.890 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:58.890 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:58.890 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:58.890 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.890 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:32:58.890 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:32:58.890 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:58.890 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.890 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:58.890 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:58.890 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:58.890 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:58.890 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.890 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:32:58.890 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:32:58.890 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.890 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:58.890 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.890 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:32:58.890 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:58.890 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.890 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:58.890 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:58.890 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:58.890 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:58.890 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.890 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:32:58.890 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:32:58.890 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:58.890 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.890 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:58.890 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:58.890 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:58.890 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:58.890 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.890 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:32:58.890 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:58.890 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.890 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:58.890 [2024-11-17 09:33:03.818448] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:58.890 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.890 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:32:58.890 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:58.890 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.890 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:58.890 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:58.890 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:58.890 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:58.890 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.890 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:32:58.890 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:32:58.890 09:33:03 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:58.890 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.890 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:58.890 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:58.890 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:58.890 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:58.890 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.890 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:32:58.890 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:32:59.149 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:32:59.149 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:59.149 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:59.149 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:59.149 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:59.149 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:59.149 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:32:59.149 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:32:59.149 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:59.149 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:59.149 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:59.149 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:59.149 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:59.149 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:32:59.149 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:32:59.149 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:59.149 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:32:59.149 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:59.149 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:59.149 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:59.149 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:59.149 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:59.149 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:59.149 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:59.149 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:59.149 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:32:59.149 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:59.149 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:59.149 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:59.149 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:59.149 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:59.149 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:59.149 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:59.149 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:32:59.149 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:32:59.716 [2024-11-17 09:33:04.569925] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:59.716 [2024-11-17 09:33:04.569979] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:59.716 [2024-11-17 09:33:04.570026] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:59.716 [2024-11-17 09:33:04.657322] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:32:59.974 [2024-11-17 09:33:04.880045] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:32:59.974 [2024-11-17 09:33:04.881667] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x6150001f2a00:1 started. 00:32:59.974 [2024-11-17 09:33:04.884140] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:59.974 [2024-11-17 09:33:04.884177] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:59.974 [2024-11-17 09:33:04.928789] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x6150001f2a00 was disconnected and freed. delete nvme_qpair. 00:33:00.233 09:33:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:00.233 09:33:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:33:00.233 09:33:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:33:00.233 09:33:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:00.233 09:33:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:00.233 09:33:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.233 09:33:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:00.233 09:33:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:00.233 09:33:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:00.233 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.233 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:00.233 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:00.233 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:33:00.233 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:33:00.233 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:00.233 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:00.233 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:33:00.233 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:33:00.233 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:00.233 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:00.233 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.233 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:00.233 
09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:00.233 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:00.233 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.233 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:33:00.233 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:00.233 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:33:00.233 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:33:00.233 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:00.233 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:00.233 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:33:00.233 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:33:00.233 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:00.233 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:00.233 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.233 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:00.233 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:00.233 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:00.233 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.233 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:33:00.233 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:00.233 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:33:00.233 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:33:00.233 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:00.233 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:00.233 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:00.233 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:00.233 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:00.233 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:33:00.233 09:33:05 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:33:00.233 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:33:00.233 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.233 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:00.233 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.233 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:33:00.233 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:33:00.233 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:33:00.233 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:00.233 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:33:00.233 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.233 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:00.233 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.234 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:00.234 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:00.234 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:00.234 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:00.234 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:33:00.234 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:33:00.234 [2024-11-17 09:33:05.174270] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x6150001f2c80:1 started. 00:33:00.234 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:00.234 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.234 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:00.234 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:00.234 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:00.234 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:00.234 [2024-11-17 09:33:05.180035] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x6150001f2c80 was disconnected and freed. delete nvme_qpair. 
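Everything the host side does in this stretch of the trace goes through /tmp/host.sock: discovery is started once (discovery.sh@51), and the waitforcondition checks then poll three small helpers until the controller name, the bdev list, and the notification count reach the expected values. A rough condensation of those helpers, reconstructed from the jq/sort/xargs pipelines in the trace (rpc_cmd is the suite's JSON-RPC wrapper; helper names follow host/discovery.sh):

  HOST_SOCK=/tmp/host.sock
  notify_id=0

  # Start discovery against the target's 8009 service, as at discovery.sh@51 above.
  rpc_cmd -s $HOST_SOCK bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
      -f ipv4 -q nqn.2021-12.io.spdk:test

  get_subsystem_names() {   # "nvme0" once the discovered controller is attached
      rpc_cmd -s $HOST_SOCK bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
  }
  get_bdev_list() {         # "nvme0n1", then "nvme0n1 nvme0n2" after the second namespace
      rpc_cmd -s $HOST_SOCK bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }
  get_notification_count() { # notifications newer than $notify_id (bdev add/remove events)
      notification_count=$(rpc_cmd -s $HOST_SOCK notify_get_notifications -i $notify_id | jq '. | length')
      notify_id=$((notify_id + notification_count))
  }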
00:33:00.234 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.234 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:00.234 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:00.234 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:33:00.234 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:33:00.234 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:00.234 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:00.234 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:00.234 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:00.234 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:00.234 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:33:00.234 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:33:00.234 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.234 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:33:00.234 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:00.234 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.492 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:33:00.492 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:33:00.492 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:33:00.492 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:00.492 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:33:00.492 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.492 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:00.492 [2024-11-17 09:33:05.264131] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:00.492 [2024-11-17 09:33:05.264502] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:33:00.492 [2024-11-17 09:33:05.264553] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:00.492 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.492 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:00.492 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:00.492 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:00.492 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:00.492 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:33:00.492 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:33:00.492 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:00.492 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:00.492 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.492 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:00.492 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:00.492 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:00.492 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.492 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:00.492 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:00.492 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 
nvme0n2" ]]' 00:33:00.492 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:00.492 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:00.492 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:00.492 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:33:00.492 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:33:00.492 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:00.492 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.492 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:00.492 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:00.492 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:00.492 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:00.492 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.492 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:00.492 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:00.492 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:33:00.492 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:33:00.492 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:00.492 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:00.492 [2024-11-17 09:33:05.351360] bdev_nvme.c:7308:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:33:00.492 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:33:00.492 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:33:00.492 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:00.492 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:00.492 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.492 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:00.492 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:00.492 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:00.492 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:33:00.492 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:33:00.492 09:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:33:00.751 [2024-11-17 09:33:05.615294] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:33:00.751 [2024-11-17 09:33:05.615443] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:00.751 [2024-11-17 09:33:05.615475] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:00.751 [2024-11-17 09:33:05.615492] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:01.687 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:01.687 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:33:01.687 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:33:01.687 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:01.687 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:01.687 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:01.687 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:01.687 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:01.687 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:01.687 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:01.687 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:33:01.687 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:01.687 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:33:01.687 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:33:01.687 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:01.687 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:01.687 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:01.687 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:01.687 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:01.687 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:33:01.687 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:33:01.687 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:33:01.687 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:01.687 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:01.687 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:01.687 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:33:01.687 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:33:01.687 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:33:01.687 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:01.687 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:01.687 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:01.687 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:01.687 [2024-11-17 09:33:06.485065] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:33:01.687 [2024-11-17 09:33:06.485124] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:01.687 [2024-11-17 09:33:06.485307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:01.687 [2024-11-17 09:33:06.485365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:01.687 [2024-11-17 09:33:06.485400] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:01.687 [2024-11-17 09:33:06.485438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:01.687 [2024-11-17 09:33:06.485460] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:01.687 [2024-11-17 09:33:06.485481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:01.687 [2024-11-17 09:33:06.485502] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:01.687 [2024-11-17 09:33:06.485522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:01.687 [2024-11-17 09:33:06.485541] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:33:01.687 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:01.687 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:01.687 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:01.687 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:01.687 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:01.687 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:33:01.687 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:33:01.687 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:01.687 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:01.687 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:01.687 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:01.687 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:01.687 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:01.687 [2024-11-17 09:33:06.495284] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:33:01.687 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:01.687 [2024-11-17 09:33:06.505323] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:01.687 [2024-11-17 09:33:06.505386] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:01.687 [2024-11-17 09:33:06.505416] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:01.687 [2024-11-17 09:33:06.505430] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:01.687 [2024-11-17 09:33:06.505509] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:01.687 [2024-11-17 09:33:06.505785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.687 [2024-11-17 09:33:06.505826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:33:01.687 [2024-11-17 09:33:06.505852] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:33:01.687 [2024-11-17 09:33:06.505887] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:33:01.687 [2024-11-17 09:33:06.505921] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:01.687 [2024-11-17 09:33:06.505944] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:01.687 [2024-11-17 09:33:06.505975] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:01.687 [2024-11-17 09:33:06.505996] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
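The error burst above is the expected effect of the step being exercised here: after a second listener is added on 4421 (discovery.sh@118) and the discovery log page reports the new path, the 4420 listener is removed (discovery.sh@127); the controller's existing qpair on 4420 is torn down and every reconnect attempt to 10.0.0.2:4420 is refused (connect() errno 111 is ECONNREFUSED) while the test re-checks the controller name and bdev list. The listener manipulation and the path check used in this part of the trace reduce to roughly:

  # Second path first, then removal of the original one (target-side RPCs).
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421
  rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

  # Paths currently attached to the discovered controller, as get_subsystem_paths prints them
  # ("4420 4421" after the add; expected to shrink once the removed listener's path is pruned).
  rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
      | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs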
00:33:01.687 [2024-11-17 09:33:06.506013] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:01.687 [2024-11-17 09:33:06.506027] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:01.687 [2024-11-17 09:33:06.515549] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:01.687 [2024-11-17 09:33:06.515582] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:01.687 [2024-11-17 09:33:06.515597] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:01.687 [2024-11-17 09:33:06.515609] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:01.688 [2024-11-17 09:33:06.515644] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:01.688 [2024-11-17 09:33:06.515883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.688 [2024-11-17 09:33:06.515921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:33:01.688 [2024-11-17 09:33:06.515945] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:33:01.688 [2024-11-17 09:33:06.515978] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:33:01.688 [2024-11-17 09:33:06.516008] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:01.688 [2024-11-17 09:33:06.516029] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:01.688 [2024-11-17 09:33:06.516048] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:01.688 [2024-11-17 09:33:06.516067] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:01.688 [2024-11-17 09:33:06.516081] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:01.688 [2024-11-17 09:33:06.516093] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:01.688 [2024-11-17 09:33:06.525684] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:01.688 [2024-11-17 09:33:06.525715] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:01.688 [2024-11-17 09:33:06.525730] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:01.688 [2024-11-17 09:33:06.525742] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:01.688 [2024-11-17 09:33:06.525790] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:33:01.688 [2024-11-17 09:33:06.525994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.688 [2024-11-17 09:33:06.526030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:33:01.688 [2024-11-17 09:33:06.526053] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:33:01.688 [2024-11-17 09:33:06.526085] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:33:01.688 [2024-11-17 09:33:06.526115] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:01.688 [2024-11-17 09:33:06.526136] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:01.688 [2024-11-17 09:33:06.526155] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:01.688 [2024-11-17 09:33:06.526173] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:01.688 [2024-11-17 09:33:06.526188] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:01.688 [2024-11-17 09:33:06.526200] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:01.688 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:01.688 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:01.688 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:01.688 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:01.688 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:01.688 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:01.688 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:33:01.688 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:33:01.688 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:01.688 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:01.688 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:01.688 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:01.688 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:01.688 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:01.688 [2024-11-17 09:33:06.535830] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 
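The xtrace above is the harness's waitforcondition helper at work: it stores the condition string (common/autotest_common.sh@918), allows up to ten evaluation attempts (@919-@920), evaluates the condition with eval (@921), and returns 0 as soon as it holds (@922). A minimal sketch of that polling pattern, reconstructed from the trace — the cond/max/eval/return structure is exactly what the trace shows, while the retry sleep and the failure return are assumptions not visible in this excerpt:

  waitforcondition() {
      local cond=$1            # e.g. '[[ "$(get_subsystem_names)" == "nvme0" ]]'
      local max=10             # matches 'local max=10' in the trace
      while (( max-- )); do
          eval "$cond" && return 0
          sleep 1              # assumed retry interval, not shown in this excerpt
      done
      return 1                 # assumed failure path, not shown in this excerpt
  }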
00:33:01.688 [2024-11-17 09:33:06.535866] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:01.688 [2024-11-17 09:33:06.535881] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:01.688 [2024-11-17 09:33:06.535893] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:01.688 [2024-11-17 09:33:06.535944] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:01.688 [2024-11-17 09:33:06.536103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.688 [2024-11-17 09:33:06.536141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:33:01.688 [2024-11-17 09:33:06.536165] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:33:01.688 [2024-11-17 09:33:06.536198] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:33:01.688 [2024-11-17 09:33:06.536228] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:01.688 [2024-11-17 09:33:06.536250] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:01.688 [2024-11-17 09:33:06.536270] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:01.688 [2024-11-17 09:33:06.536288] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:01.688 [2024-11-17 09:33:06.536303] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:01.688 [2024-11-17 09:33:06.536315] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:01.688 [2024-11-17 09:33:06.545985] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:01.688 [2024-11-17 09:33:06.546017] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:01.688 [2024-11-17 09:33:06.546033] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:01.688 [2024-11-17 09:33:06.546045] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:01.688 [2024-11-17 09:33:06.546096] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:33:01.688 [2024-11-17 09:33:06.546269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.688 [2024-11-17 09:33:06.546306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:33:01.688 [2024-11-17 09:33:06.546359] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:33:01.688 [2024-11-17 09:33:06.546405] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:33:01.688 [2024-11-17 09:33:06.546436] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:01.688 [2024-11-17 09:33:06.546458] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:01.688 [2024-11-17 09:33:06.546477] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:01.688 [2024-11-17 09:33:06.546495] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:01.688 [2024-11-17 09:33:06.546510] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:01.688 [2024-11-17 09:33:06.546522] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:01.688 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:01.688 [2024-11-17 09:33:06.556137] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:01.688 [2024-11-17 09:33:06.556169] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:01.688 [2024-11-17 09:33:06.556184] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:01.688 [2024-11-17 09:33:06.556195] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:01.688 [2024-11-17 09:33:06.556253] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:01.688 [2024-11-17 09:33:06.556475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.688 [2024-11-17 09:33:06.556513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:33:01.688 [2024-11-17 09:33:06.556536] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:33:01.688 [2024-11-17 09:33:06.556569] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:33:01.688 [2024-11-17 09:33:06.556599] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:01.688 [2024-11-17 09:33:06.556621] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:01.688 [2024-11-17 09:33:06.556639] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:33:01.688 [2024-11-17 09:33:06.556666] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:01.688 [2024-11-17 09:33:06.556681] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:01.688 [2024-11-17 09:33:06.556693] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:01.688 [2024-11-17 09:33:06.566293] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:01.688 [2024-11-17 09:33:06.566324] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:01.688 [2024-11-17 09:33:06.566338] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:01.688 [2024-11-17 09:33:06.566375] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:01.689 [2024-11-17 09:33:06.566441] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:01.689 [2024-11-17 09:33:06.566565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:01.689 [2024-11-17 09:33:06.566601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:33:01.689 [2024-11-17 09:33:06.566624] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:33:01.689 [2024-11-17 09:33:06.566666] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:33:01.689 [2024-11-17 09:33:06.566712] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:01.689 [2024-11-17 09:33:06.566737] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:01.689 [2024-11-17 09:33:06.566756] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:01.689 [2024-11-17 09:33:06.566774] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:01.689 [2024-11-17 09:33:06.566789] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:01.689 [2024-11-17 09:33:06.566801] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
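Both comparisons in this passage lean on two small helpers whose bodies are spelled out in the xtrace: get_subsystem_names (host/discovery.sh@59) and get_bdev_list (host/discovery.sh@55). Reassembled from the trace — rpc_cmd is the harness's JSON-RPC client wrapper and /tmp/host.sock is the host application's RPC socket shown above:

  get_subsystem_names() {
      # controller names known to the host app, flattened to one sorted line
      rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
  }

  get_bdev_list() {
      # attached namespaces exposed as bdevs, e.g. "nvme0n1 nvme0n2"
      rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }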
00:33:01.689 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:01.689 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:01.689 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:33:01.689 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:33:01.689 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:01.689 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:01.689 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:33:01.689 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:33:01.689 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:01.689 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:01.689 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:01.689 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:01.689 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:01.689 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:01.689 [2024-11-17 09:33:06.571753] bdev_nvme.c:7171:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:33:01.689 [2024-11-17 09:33:06.571813] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:01.689 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:01.689 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:33:01.689 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:01.689 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:33:01.689 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:33:01.689 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:01.689 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:01.689 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:01.689 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:01.689 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:01.689 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- common/autotest_common.sh@921 -- # get_notification_count 00:33:01.689 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:33:01.689 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:33:01.689 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:01.689 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:01.689 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:01.689 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:33:01.689 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:33:01.689 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:33:01.689 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:01.689 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:33:01.689 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:01.689 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:01.689 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:01.689 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:33:01.689 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:33:01.689 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:01.689 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:01.689 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:33:01.689 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:33:01.689 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:01.689 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:01.689 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:01.689 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:01.689 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:01.689 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:01.689 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:01.948 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:33:01.948 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:01.948 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:33:01.948 09:33:06 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:33:01.948 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:01.948 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:01.948 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:33:01.948 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:33:01.948 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:01.948 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:01.948 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:01.948 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:01.948 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:01.948 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:01.948 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:01.948 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:33:01.948 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:01.948 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:33:01.948 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:33:01.948 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:01.948 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:01.948 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:01.948 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:01.948 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:01.948 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:33:01.948 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:33:01.948 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:33:01.948 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:01.948 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:01.949 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:01.949 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:33:01.949 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:33:01.949 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:33:01.949 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:01.949 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:01.949 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:01.949 09:33:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:02.883 [2024-11-17 09:33:07.849566] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:02.883 [2024-11-17 09:33:07.849604] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:02.883 [2024-11-17 09:33:07.849650] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:03.141 [2024-11-17 09:33:07.936976] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:33:03.400 [2024-11-17 09:33:08.202890] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:33:03.400 [2024-11-17 09:33:08.204274] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x6150001f3e00:1 started. 
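At host/discovery.sh@141 the test registers a discovery service named nvme against 10.0.0.2:8009 and waits for attach (-w); the check that follows re-issues the same call and expects it to be rejected. Outside the harness, the equivalent direct invocation goes through scripts/rpc.py with the same flags rpc_cmd forwards here; the second call is the one that produces the -17 "File exists" response shown below (addresses, port, and NQN taken from the trace):

  # first start: registers discovery service "nvme" on the host app
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
      -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
      -q nqn.2021-12.io.spdk:test -w

  # second start with the same -b name: expected to fail with "File exists" (-17)
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
      -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
      -q nqn.2021-12.io.spdk:test -w \
      || echo 'duplicate discovery name rejected as expected'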
00:33:03.400 [2024-11-17 09:33:08.207149] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:03.400 [2024-11-17 09:33:08.207215] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:03.400 09:33:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:03.400 09:33:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:03.400 09:33:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:33:03.400 09:33:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:03.400 09:33:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:33:03.400 09:33:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:03.400 09:33:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:33:03.400 09:33:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:03.400 09:33:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:03.400 09:33:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:03.400 09:33:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:03.400 request: 00:33:03.400 { 00:33:03.400 "name": "nvme", 00:33:03.400 "trtype": "tcp", 00:33:03.400 "traddr": "10.0.0.2", 00:33:03.400 "adrfam": "ipv4", 00:33:03.400 "trsvcid": "8009", 00:33:03.400 "hostnqn": "nqn.2021-12.io.spdk:test", 00:33:03.400 "wait_for_attach": true, 00:33:03.400 "method": "bdev_nvme_start_discovery", 00:33:03.400 "req_id": 1 00:33:03.400 } 00:33:03.400 Got JSON-RPC error response 00:33:03.400 response: 00:33:03.400 { 00:33:03.400 "code": -17, 00:33:03.400 "message": "File exists" 00:33:03.400 } 00:33:03.400 09:33:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:33:03.400 09:33:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:33:03.400 09:33:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:03.400 09:33:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:03.400 09:33:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:03.400 09:33:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:33:03.400 09:33:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:33:03.400 09:33:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:03.400 09:33:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:33:03.400 09:33:08 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:03.400 09:33:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:33:03.400 09:33:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:33:03.400 09:33:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:03.400 [2024-11-17 09:33:08.250752] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x6150001f3e00 was disconnected and freed. delete nvme_qpair. 00:33:03.400 09:33:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:33:03.400 09:33:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:33:03.400 09:33:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:03.400 09:33:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:03.400 09:33:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:03.400 09:33:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:03.400 09:33:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:03.400 09:33:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:03.400 09:33:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:03.400 09:33:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:03.400 09:33:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:03.400 09:33:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:33:03.400 09:33:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:03.400 09:33:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:33:03.400 09:33:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:03.400 09:33:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:33:03.400 09:33:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:03.400 09:33:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:03.400 09:33:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:03.400 09:33:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:03.400 request: 00:33:03.400 { 00:33:03.400 "name": "nvme_second", 00:33:03.400 "trtype": "tcp", 00:33:03.400 "traddr": "10.0.0.2", 00:33:03.400 "adrfam": "ipv4", 00:33:03.400 "trsvcid": "8009", 00:33:03.400 "hostnqn": "nqn.2021-12.io.spdk:test", 00:33:03.400 "wait_for_attach": true, 00:33:03.400 "method": 
"bdev_nvme_start_discovery", 00:33:03.400 "req_id": 1 00:33:03.400 } 00:33:03.400 Got JSON-RPC error response 00:33:03.400 response: 00:33:03.400 { 00:33:03.400 "code": -17, 00:33:03.400 "message": "File exists" 00:33:03.400 } 00:33:03.400 09:33:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:33:03.400 09:33:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:33:03.400 09:33:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:03.400 09:33:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:03.400 09:33:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:03.400 09:33:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:33:03.400 09:33:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:33:03.400 09:33:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:03.400 09:33:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:03.400 09:33:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:33:03.400 09:33:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:33:03.400 09:33:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:33:03.400 09:33:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:03.400 09:33:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:33:03.400 09:33:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:33:03.400 09:33:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:03.400 09:33:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:03.400 09:33:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:03.400 09:33:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:03.400 09:33:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:03.400 09:33:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:03.400 09:33:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:03.400 09:33:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:03.400 09:33:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:33:03.400 09:33:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:33:03.400 09:33:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:33:03.400 09:33:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:33:03.400 09:33:08 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:03.400 09:33:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:33:03.400 09:33:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:03.400 09:33:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:33:03.400 09:33:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:03.400 09:33:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:04.792 [2024-11-17 09:33:09.394949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.792 [2024-11-17 09:33:09.395025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f4080 with addr=10.0.0.2, port=8010 00:33:04.792 [2024-11-17 09:33:09.395101] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:33:04.792 [2024-11-17 09:33:09.395139] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:04.792 [2024-11-17 09:33:09.395159] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:33:05.724 [2024-11-17 09:33:10.397384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.724 [2024-11-17 09:33:10.397472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f4300 with addr=10.0.0.2, port=8010 00:33:05.724 [2024-11-17 09:33:10.397541] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:33:05.724 [2024-11-17 09:33:10.397575] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:05.724 [2024-11-17 09:33:10.397596] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:33:06.657 [2024-11-17 09:33:11.399468] bdev_nvme.c:7427:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:33:06.657 request: 00:33:06.657 { 00:33:06.657 "name": "nvme_second", 00:33:06.657 "trtype": "tcp", 00:33:06.657 "traddr": "10.0.0.2", 00:33:06.657 "adrfam": "ipv4", 00:33:06.657 "trsvcid": "8010", 00:33:06.657 "hostnqn": "nqn.2021-12.io.spdk:test", 00:33:06.657 "wait_for_attach": false, 00:33:06.657 "attach_timeout_ms": 3000, 00:33:06.657 "method": "bdev_nvme_start_discovery", 00:33:06.657 "req_id": 1 00:33:06.657 } 00:33:06.657 Got JSON-RPC error response 00:33:06.657 response: 00:33:06.657 { 00:33:06.657 "code": -110, 00:33:06.657 "message": "Connection timed out" 00:33:06.657 } 00:33:06.657 09:33:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:33:06.657 09:33:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:33:06.657 09:33:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:06.657 09:33:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:06.657 09:33:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:06.657 09:33:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:33:06.657 
09:33:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:33:06.657 09:33:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:33:06.657 09:33:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.657 09:33:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:06.657 09:33:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:33:06.657 09:33:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:33:06.657 09:33:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.657 09:33:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:33:06.657 09:33:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:33:06.657 09:33:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 3100065 00:33:06.657 09:33:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:33:06.657 09:33:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:06.657 09:33:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:33:06.657 09:33:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:06.657 09:33:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:33:06.657 09:33:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:06.657 09:33:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:06.657 rmmod nvme_tcp 00:33:06.657 rmmod nvme_fabrics 00:33:06.657 rmmod nvme_keyring 00:33:06.657 09:33:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:06.657 09:33:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:33:06.657 09:33:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:33:06.657 09:33:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 3099885 ']' 00:33:06.657 09:33:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 3099885 00:33:06.657 09:33:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 3099885 ']' 00:33:06.657 09:33:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 3099885 00:33:06.657 09:33:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:33:06.657 09:33:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:06.657 09:33:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3099885 00:33:06.657 09:33:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:06.657 09:33:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:06.657 09:33:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3099885' 00:33:06.657 killing process with pid 3099885 00:33:06.657 09:33:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 
3099885 00:33:06.657 09:33:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 3099885 00:33:08.034 09:33:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:08.034 09:33:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:08.034 09:33:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:08.034 09:33:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:33:08.034 09:33:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:33:08.034 09:33:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:08.034 09:33:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:33:08.034 09:33:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:08.034 09:33:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:08.034 09:33:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:08.034 09:33:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:08.034 09:33:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:09.939 09:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:09.939 00:33:09.939 real 0m15.710s 00:33:09.939 user 0m23.194s 00:33:09.939 sys 0m3.059s 00:33:09.939 09:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:09.939 09:33:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:09.939 ************************************ 00:33:09.939 END TEST nvmf_host_discovery 00:33:09.939 ************************************ 00:33:09.939 09:33:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:33:09.939 09:33:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:09.939 09:33:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:09.939 09:33:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.939 ************************************ 00:33:09.939 START TEST nvmf_host_multipath_status 00:33:09.939 ************************************ 00:33:09.940 09:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:33:09.940 * Looking for test storage... 
00:33:09.940 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:09.940 09:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:09.940 09:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:33:09.940 09:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:09.940 09:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:09.940 09:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:09.940 09:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:09.940 09:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:09.940 09:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:33:09.940 09:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:33:09.940 09:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:33:09.940 09:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:33:09.940 09:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:33:09.940 09:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:33:09.940 09:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:33:09.940 09:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:09.940 09:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:33:09.940 09:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:33:09.940 09:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:09.940 09:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:09.940 09:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:33:09.940 09:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:33:09.940 09:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:09.940 09:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:33:09.940 09:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:33:09.940 09:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:33:09.940 09:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:33:09.940 09:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:09.940 09:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:33:09.940 09:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:33:09.940 09:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:09.940 09:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:09.940 09:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:33:09.940 09:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:09.940 09:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:09.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:09.940 --rc genhtml_branch_coverage=1 00:33:09.940 --rc genhtml_function_coverage=1 00:33:09.940 --rc genhtml_legend=1 00:33:09.940 --rc geninfo_all_blocks=1 00:33:09.940 --rc geninfo_unexecuted_blocks=1 00:33:09.940 00:33:09.940 ' 00:33:09.940 09:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:09.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:09.940 --rc genhtml_branch_coverage=1 00:33:09.940 --rc genhtml_function_coverage=1 00:33:09.940 --rc genhtml_legend=1 00:33:09.940 --rc geninfo_all_blocks=1 00:33:09.940 --rc geninfo_unexecuted_blocks=1 00:33:09.940 00:33:09.940 ' 00:33:09.940 09:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:09.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:09.940 --rc genhtml_branch_coverage=1 00:33:09.940 --rc genhtml_function_coverage=1 00:33:09.940 --rc genhtml_legend=1 00:33:09.940 --rc geninfo_all_blocks=1 00:33:09.940 --rc geninfo_unexecuted_blocks=1 00:33:09.940 00:33:09.940 ' 00:33:09.940 09:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:09.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:09.940 --rc genhtml_branch_coverage=1 00:33:09.940 --rc genhtml_function_coverage=1 00:33:09.940 --rc genhtml_legend=1 00:33:09.940 --rc geninfo_all_blocks=1 00:33:09.940 --rc geninfo_unexecuted_blocks=1 00:33:09.940 00:33:09.940 ' 00:33:09.940 09:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
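The scripts/common.sh trace just above is the lcov version gate: lt 1.15 2 splits each version string on '.', '-' and ':' (IFS=.-:), walks the components of the longer array, and decides on the first numerically differing component. A condensed sketch covering only the less-than case exercised here (the full cmp_versions helper handles other operators as well, which this excerpt does not show):

  version_lt() {
      local IFS=.-:                         # same separators as the trace
      local -a v1 v2
      read -ra v1 <<< "$1"
      read -ra v2 <<< "$2"
      local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
      for (( i = 0; i < n; i++ )); do
          local a=${v1[i]:-0} b=${v2[i]:-0}  # missing components compare as 0
          (( a > b )) && return 1            # first differing component decides
          (( a < b )) && return 0
      done
      return 1                               # equal is not "less than"
  }

  version_lt 1.15 2 && echo 'lcov older than 2'   # 1 < 2, so this prints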
00:33:09.940 09:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:33:09.940 09:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:09.940 09:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:09.940 09:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:09.940 09:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:09.940 09:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:09.940 09:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:09.940 09:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:09.940 09:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:09.940 09:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:09.940 09:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:09.940 09:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:09.940 09:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:09.940 09:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:09.940 09:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:09.940 09:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:09.940 09:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:09.940 09:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:09.940 09:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:33:09.940 09:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:09.940 09:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:09.940 09:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:09.940 09:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.940 09:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.940 09:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.940 09:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:33:09.940 09:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.940 09:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:33:09.940 09:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:09.940 09:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:09.940 09:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:09.940 09:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:09.940 09:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:09.940 09:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:09.940 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:09.941 09:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:09.941 09:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:09.941 09:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:09.941 09:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:33:09.941 09:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:33:09.941 09:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:09.941 09:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:33:09.941 09:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:09.941 09:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:33:09.941 09:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:33:09.941 09:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:09.941 09:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:09.941 09:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:09.941 09:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:09.941 09:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:09.941 09:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:09.941 09:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:09.941 09:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:09.941 09:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:09.941 09:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:09.941 09:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:33:09.941 09:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:12.472 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:12.472 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:33:12.472 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:12.472 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:12.472 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:12.472 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:12.472 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:12.472 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:33:12.472 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:12.472 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:33:12.472 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:33:12.472 09:33:16 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:33:12.472 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:33:12.472 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:33:12.472 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:33:12.472 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:12.472 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:12.472 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:12.472 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:12.472 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:12.472 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:12.472 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:12.472 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:12.472 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:12.472 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:12.472 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:12.472 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:12.472 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:12.472 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:12.472 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:12.472 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:12.473 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:12.473 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:12.473 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:12.473 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:12.473 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:12.473 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:12.473 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:12.473 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:12.473 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
00:33:12.473 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:12.473 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:12.473 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:12.473 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:12.473 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:12.473 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:12.473 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:12.473 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:12.473 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:12.473 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:12.473 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:12.473 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:12.473 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:12.473 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:12.473 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:12.473 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:12.473 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:12.473 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:12.473 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:12.473 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:12.473 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:12.473 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:12.473 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:12.473 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:12.473 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:12.473 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:12.473 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:12.473 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:12.473 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:12.473 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: 
cvl_0_1' 00:33:12.473 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:12.473 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:12.473 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:12.473 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:33:12.473 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:12.473 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:12.473 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:12.473 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:12.473 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:12.473 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:12.473 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:12.473 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:12.473 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:12.473 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:12.473 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:12.473 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:12.473 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:12.473 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:12.473 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:12.473 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:12.473 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:12.473 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:12.473 09:33:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:12.473 09:33:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:12.473 09:33:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:12.473 09:33:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:12.473 09:33:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:12.473 09:33:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:12.473 09:33:17 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:12.473 09:33:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:12.473 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:12.473 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.271 ms 00:33:12.473 00:33:12.473 --- 10.0.0.2 ping statistics --- 00:33:12.473 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:12.473 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:33:12.473 09:33:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:12.473 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:12.473 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:33:12.473 00:33:12.473 --- 10.0.0.1 ping statistics --- 00:33:12.473 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:12.473 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:33:12.473 09:33:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:12.473 09:33:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:33:12.473 09:33:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:12.473 09:33:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:12.473 09:33:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:12.473 09:33:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:12.473 09:33:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:12.473 09:33:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:12.473 09:33:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:12.473 09:33:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:33:12.473 09:33:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:12.473 09:33:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:12.473 09:33:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:12.473 09:33:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=3103864 00:33:12.473 09:33:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:33:12.473 09:33:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 3103864 00:33:12.473 09:33:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 3103864 ']' 00:33:12.473 09:33:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:12.473 09:33:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:12.473 09:33:17 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:12.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:12.473 09:33:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:12.473 09:33:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:12.473 [2024-11-17 09:33:17.189104] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:33:12.474 [2024-11-17 09:33:17.189232] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:12.474 [2024-11-17 09:33:17.331781] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:12.474 [2024-11-17 09:33:17.451252] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:12.474 [2024-11-17 09:33:17.451338] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:12.474 [2024-11-17 09:33:17.451388] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:12.474 [2024-11-17 09:33:17.451412] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:12.474 [2024-11-17 09:33:17.451429] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:12.474 [2024-11-17 09:33:17.453792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:12.474 [2024-11-17 09:33:17.453796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:13.408 09:33:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:13.408 09:33:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:33:13.408 09:33:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:13.408 09:33:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:13.408 09:33:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:13.408 09:33:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:13.408 09:33:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3103864 00:33:13.408 09:33:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:13.666 [2024-11-17 09:33:18.516881] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:13.666 09:33:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:33:13.925 Malloc0 00:33:13.925 09:33:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 -r -m 2 00:33:14.490 09:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:14.749 09:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:15.006 [2024-11-17 09:33:19.820238] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:15.006 09:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:15.264 [2024-11-17 09:33:20.105176] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:15.264 09:33:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3104262 00:33:15.264 09:33:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:33:15.264 09:33:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:33:15.264 09:33:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3104262 /var/tmp/bdevperf.sock 00:33:15.264 09:33:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 3104262 ']' 00:33:15.264 09:33:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:15.264 09:33:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:15.264 09:33:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:15.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:33:15.264 09:33:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:15.264 09:33:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:16.196 09:33:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:16.196 09:33:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:33:16.196 09:33:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:33:16.453 09:33:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:33:17.020 Nvme0n1 00:33:17.020 09:33:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:33:17.585 Nvme0n1 00:33:17.585 09:33:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:33:17.585 09:33:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:33:19.487 09:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:33:19.487 09:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:33:19.746 09:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:20.004 09:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:33:21.379 09:33:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:33:21.379 09:33:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:21.379 09:33:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:21.379 09:33:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:21.379 09:33:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:21.379 09:33:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:21.379 09:33:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:21.379 09:33:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:21.637 09:33:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:21.637 09:33:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:21.637 09:33:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:21.637 09:33:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:21.895 09:33:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:21.895 09:33:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:21.895 09:33:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:21.895 09:33:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:22.154 09:33:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:22.154 09:33:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:22.154 09:33:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:22.154 09:33:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:22.412 09:33:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:22.412 09:33:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:22.412 09:33:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:22.412 09:33:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:22.670 09:33:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:22.670 09:33:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:33:22.670 09:33:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 
00:33:23.237 09:33:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:23.237 09:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:33:24.613 09:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:33:24.613 09:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:24.613 09:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:24.613 09:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:24.613 09:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:24.613 09:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:24.613 09:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:24.613 09:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:24.872 09:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:24.872 09:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:24.872 09:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:24.872 09:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:25.130 09:33:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:25.130 09:33:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:25.130 09:33:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:25.130 09:33:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:25.390 09:33:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:25.390 09:33:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:25.390 09:33:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
00:33:25.390 09:33:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:25.683 09:33:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:25.683 09:33:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:25.683 09:33:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:25.683 09:33:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:25.965 09:33:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:25.965 09:33:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:33:25.965 09:33:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:26.223 09:33:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:33:26.481 09:33:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:33:27.855 09:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:33:27.855 09:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:27.855 09:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:27.855 09:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:27.855 09:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:27.855 09:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:27.855 09:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:27.855 09:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:28.113 09:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:28.113 09:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:28.113 09:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:28.113 09:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:28.372 09:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:28.372 09:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:28.372 09:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:28.372 09:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:28.630 09:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:28.630 09:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:28.630 09:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:28.630 09:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:28.889 09:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:28.889 09:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:28.889 09:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:28.889 09:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:29.147 09:33:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:29.147 09:33:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:33:29.147 09:33:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:29.406 09:33:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:29.972 09:33:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:33:30.907 09:33:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:33:30.907 09:33:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:30.907 09:33:35 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:30.907 09:33:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:31.164 09:33:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:31.164 09:33:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:31.164 09:33:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:31.164 09:33:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:31.422 09:33:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:31.422 09:33:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:31.422 09:33:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:31.422 09:33:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:31.680 09:33:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:31.680 09:33:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:31.680 09:33:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:31.680 09:33:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:31.939 09:33:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:31.939 09:33:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:31.939 09:33:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:31.939 09:33:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:32.197 09:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:32.197 09:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:32.197 09:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:32.197 09:33:37 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:32.455 09:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:32.455 09:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:33:32.455 09:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:33:32.713 09:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:32.971 09:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:33:34.345 09:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:33:34.345 09:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:34.345 09:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:34.345 09:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:34.345 09:33:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:34.345 09:33:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:34.345 09:33:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:34.345 09:33:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:34.603 09:33:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:34.603 09:33:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:34.603 09:33:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:34.603 09:33:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:34.860 09:33:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:34.860 09:33:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:34.860 09:33:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:34.860 09:33:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:35.118 09:33:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:35.118 09:33:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:33:35.118 09:33:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:35.118 09:33:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:35.376 09:33:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:35.376 09:33:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:35.376 09:33:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:35.376 09:33:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:35.634 09:33:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:35.634 09:33:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:33:35.634 09:33:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:33:35.892 09:33:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:36.150 09:33:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:33:37.524 09:33:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:33:37.524 09:33:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:37.524 09:33:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:37.524 09:33:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:37.524 09:33:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:37.524 09:33:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:37.524 09:33:42 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:37.524 09:33:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:37.782 09:33:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:37.782 09:33:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:37.782 09:33:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:37.782 09:33:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:38.040 09:33:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:38.040 09:33:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:38.040 09:33:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:38.040 09:33:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:38.297 09:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:38.297 09:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:33:38.297 09:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:38.298 09:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:38.555 09:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:38.555 09:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:38.555 09:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:38.555 09:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:38.814 09:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:38.814 09:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:33:39.072 09:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:33:39.072 09:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:33:39.330 09:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:39.588 09:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:33:40.962 09:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:33:40.962 09:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:40.962 09:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:40.962 09:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:40.962 09:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:40.962 09:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:40.962 09:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:40.962 09:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:41.221 09:33:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:41.221 09:33:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:41.221 09:33:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:41.221 09:33:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:41.479 09:33:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:41.480 09:33:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:41.480 09:33:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:41.480 09:33:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:41.738 09:33:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:41.738 09:33:46 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:41.738 09:33:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:41.738 09:33:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:41.997 09:33:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:41.997 09:33:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:41.997 09:33:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:41.997 09:33:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:42.254 09:33:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:42.255 09:33:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:33:42.255 09:33:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:42.821 09:33:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:42.821 09:33:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:33:44.195 09:33:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:33:44.195 09:33:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:44.195 09:33:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:44.195 09:33:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:44.195 09:33:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:44.195 09:33:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:44.195 09:33:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:44.195 09:33:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:44.453 09:33:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
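Each check_status call in the trace expands into six port_status checks in a fixed order that can be read straight off the @68..@73 lines: current, connected and accessible, first for port 4420 and then for port 4421. A sketch of that expansion, reusing the port_status helper sketched earlier (the argument order is taken from the trace; error handling is left to set -e):

  # check_status <4420.current> <4421.current> <4420.connected> <4421.connected> <4420.accessible> <4421.accessible>
  check_status() {
      port_status 4420 current    "$1"
      port_status 4421 current    "$2"
      port_status 4420 connected  "$3"
      port_status 4421 connected  "$4"
      port_status 4420 accessible "$5"
      port_status 4421 accessible "$6"
  }

  # Example from the trace: after set_ANA_state non_optimized optimized under the
  # active_active policy, only the optimized path on 4421 carries I/O:
  check_status false true true true true true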
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:44.453 09:33:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:44.453 09:33:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:44.453 09:33:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:44.711 09:33:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:44.711 09:33:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:44.711 09:33:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:44.711 09:33:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:44.968 09:33:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:44.968 09:33:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:44.968 09:33:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:44.968 09:33:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:45.226 09:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:45.226 09:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:45.226 09:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:45.226 09:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:45.793 09:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:45.793 09:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:33:45.793 09:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:46.051 09:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:33:46.309 09:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
00:33:47.245 09:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:33:47.245 09:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:47.245 09:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:47.245 09:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:47.503 09:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:47.503 09:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:47.503 09:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:47.503 09:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:47.762 09:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:47.762 09:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:47.762 09:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:47.762 09:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:48.021 09:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:48.021 09:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:48.021 09:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:48.021 09:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:48.280 09:33:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:48.280 09:33:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:48.280 09:33:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:48.280 09:33:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:48.538 09:33:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:48.538 09:33:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:48.538 09:33:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:48.538 09:33:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:48.796 09:33:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:48.796 09:33:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:33:48.796 09:33:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:49.055 09:33:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:49.314 09:33:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:33:50.689 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:33:50.689 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:50.689 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:50.689 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:50.689 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:50.689 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:50.689 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:50.689 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:50.947 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:50.947 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:50.947 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:50.947 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:51.205 09:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:33:51.205 09:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:51.205 09:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:51.205 09:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:51.464 09:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:51.464 09:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:51.464 09:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:51.464 09:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:51.722 09:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:51.722 09:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:51.722 09:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:51.722 09:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:51.980 09:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:51.980 09:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3104262 00:33:51.980 09:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 3104262 ']' 00:33:51.980 09:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 3104262 00:33:51.980 09:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:33:51.980 09:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:51.980 09:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3104262 00:33:52.239 09:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:33:52.239 09:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:33:52.239 09:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3104262' 00:33:52.239 killing process with pid 3104262 00:33:52.239 09:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 3104262 00:33:52.239 09:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 3104262 00:33:52.239 { 00:33:52.239 "results": [ 00:33:52.239 { 00:33:52.239 "job": "Nvme0n1", 
00:33:52.239 "core_mask": "0x4", 00:33:52.239 "workload": "verify", 00:33:52.239 "status": "terminated", 00:33:52.239 "verify_range": { 00:33:52.239 "start": 0, 00:33:52.239 "length": 16384 00:33:52.239 }, 00:33:52.239 "queue_depth": 128, 00:33:52.239 "io_size": 4096, 00:33:52.239 "runtime": 34.439923, 00:33:52.239 "iops": 5893.741400060621, 00:33:52.239 "mibps": 23.022427343986802, 00:33:52.239 "io_failed": 0, 00:33:52.239 "io_timeout": 0, 00:33:52.239 "avg_latency_us": 21679.88912159928, 00:33:52.239 "min_latency_us": 219.97037037037038, 00:33:52.239 "max_latency_us": 4051386.974814815 00:33:52.239 } 00:33:52.239 ], 00:33:52.239 "core_count": 1 00:33:52.239 } 00:33:53.176 09:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3104262 00:33:53.176 09:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:53.176 [2024-11-17 09:33:20.212740] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:33:53.176 [2024-11-17 09:33:20.212918] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3104262 ] 00:33:53.176 [2024-11-17 09:33:20.352316] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:53.176 [2024-11-17 09:33:20.476826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:53.176 Running I/O for 90 seconds... 00:33:53.176 6200.00 IOPS, 24.22 MiB/s [2024-11-17T08:33:58.189Z] 6326.00 IOPS, 24.71 MiB/s [2024-11-17T08:33:58.189Z] 6285.67 IOPS, 24.55 MiB/s [2024-11-17T08:33:58.189Z] 6261.25 IOPS, 24.46 MiB/s [2024-11-17T08:33:58.189Z] 6247.80 IOPS, 24.41 MiB/s [2024-11-17T08:33:58.189Z] 6249.83 IOPS, 24.41 MiB/s [2024-11-17T08:33:58.189Z] 6259.43 IOPS, 24.45 MiB/s [2024-11-17T08:33:58.189Z] 6245.25 IOPS, 24.40 MiB/s [2024-11-17T08:33:58.189Z] 6240.11 IOPS, 24.38 MiB/s [2024-11-17T08:33:58.189Z] 6258.30 IOPS, 24.45 MiB/s [2024-11-17T08:33:58.189Z] 6260.73 IOPS, 24.46 MiB/s [2024-11-17T08:33:58.189Z] 6262.67 IOPS, 24.46 MiB/s [2024-11-17T08:33:58.189Z] 6267.00 IOPS, 24.48 MiB/s [2024-11-17T08:33:58.189Z] 6261.14 IOPS, 24.46 MiB/s [2024-11-17T08:33:58.189Z] 6257.07 IOPS, 24.44 MiB/s [2024-11-17T08:33:58.189Z] [2024-11-17 09:33:37.632412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:99496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.176 [2024-11-17 09:33:37.632492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:53.176 [2024-11-17 09:33:37.632587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:99504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.176 [2024-11-17 09:33:37.632618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:53.176 [2024-11-17 09:33:37.632676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:99512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.176 [2024-11-17 09:33:37.632702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:53.177 [2024-11-17 
09:33:37.632755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:99520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.177 [2024-11-17 09:33:37.632779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:53.177 [2024-11-17 09:33:37.632813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:99528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.177 [2024-11-17 09:33:37.632837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:53.177 [2024-11-17 09:33:37.632870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:99536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.177 [2024-11-17 09:33:37.632895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.177 [2024-11-17 09:33:37.632929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:99544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.177 [2024-11-17 09:33:37.632952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:53.177 [2024-11-17 09:33:37.632986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:99552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.177 [2024-11-17 09:33:37.633010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:53.177 [2024-11-17 09:33:37.634880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:99560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.177 [2024-11-17 09:33:37.634913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:53.177 [2024-11-17 09:33:37.634982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:98616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.177 [2024-11-17 09:33:37.635010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:53.177 [2024-11-17 09:33:37.635049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:98624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.177 [2024-11-17 09:33:37.635075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:53.177 [2024-11-17 09:33:37.635112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:98632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.177 [2024-11-17 09:33:37.635153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:53.177 [2024-11-17 09:33:37.635204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:98640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.177 [2024-11-17 09:33:37.635229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 
cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:53.177 [2024-11-17 09:33:37.635263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:98648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.177 [2024-11-17 09:33:37.635287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:53.177 [2024-11-17 09:33:37.635322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:98656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.177 [2024-11-17 09:33:37.635345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:53.177 [2024-11-17 09:33:37.635403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:98664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.177 [2024-11-17 09:33:37.635430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:53.177 [2024-11-17 09:33:37.635466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:98672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.177 [2024-11-17 09:33:37.635490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:53.177 [2024-11-17 09:33:37.635525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:98680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.177 [2024-11-17 09:33:37.635550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:53.177 [2024-11-17 09:33:37.635586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:98688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.177 [2024-11-17 09:33:37.635610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:53.177 [2024-11-17 09:33:37.635644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:98696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.177 [2024-11-17 09:33:37.635669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:53.177 [2024-11-17 09:33:37.635721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:98704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.177 [2024-11-17 09:33:37.635745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:53.177 [2024-11-17 09:33:37.635779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:98712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.177 [2024-11-17 09:33:37.635808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:53.177 [2024-11-17 09:33:37.635843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:98720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.177 [2024-11-17 09:33:37.635868] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:53.177 [2024-11-17 09:33:37.635902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:98728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.177 [2024-11-17 09:33:37.635926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:53.177 [2024-11-17 09:33:37.635960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:98736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.177 [2024-11-17 09:33:37.635984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:53.177 [2024-11-17 09:33:37.636018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:99568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.177 [2024-11-17 09:33:37.636046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:53.177 [2024-11-17 09:33:37.636080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:99576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.177 [2024-11-17 09:33:37.636104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:53.177 [2024-11-17 09:33:37.636138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:98744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.177 [2024-11-17 09:33:37.636162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:53.177 [2024-11-17 09:33:37.636197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:98752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.177 [2024-11-17 09:33:37.636221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:53.177 [2024-11-17 09:33:37.636271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:98760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.177 [2024-11-17 09:33:37.636298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:53.177 [2024-11-17 09:33:37.636335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:98768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.177 [2024-11-17 09:33:37.636361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:53.177 [2024-11-17 09:33:37.636410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:98776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.177 [2024-11-17 09:33:37.636440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:53.177 [2024-11-17 09:33:37.636477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:98784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.177 [2024-11-17 
09:33:37.636502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:53.177 [2024-11-17 09:33:37.636538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:98792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.177 [2024-11-17 09:33:37.636568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:53.177 [2024-11-17 09:33:37.636606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:98800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.177 [2024-11-17 09:33:37.636648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:53.177 [2024-11-17 09:33:37.636684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:98808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.177 [2024-11-17 09:33:37.636723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:53.177 [2024-11-17 09:33:37.636758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.177 [2024-11-17 09:33:37.636783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:53.177 [2024-11-17 09:33:37.636818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:98824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.177 [2024-11-17 09:33:37.636841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:53.177 [2024-11-17 09:33:37.636876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:98832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.177 [2024-11-17 09:33:37.636900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:53.177 [2024-11-17 09:33:37.636934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:98840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.177 [2024-11-17 09:33:37.636958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:53.177 [2024-11-17 09:33:37.636993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:98848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.177 [2024-11-17 09:33:37.637017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:53.178 [2024-11-17 09:33:37.637052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:98856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.178 [2024-11-17 09:33:37.637076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:53.178 [2024-11-17 09:33:37.637219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:98864 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.178 [2024-11-17 09:33:37.637248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:53.178 [2024-11-17 09:33:37.637291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:98872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.178 [2024-11-17 09:33:37.637330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:53.178 [2024-11-17 09:33:37.637381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:98880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.178 [2024-11-17 09:33:37.637410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:53.178 [2024-11-17 09:33:37.637456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:98888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.178 [2024-11-17 09:33:37.637481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:53.178 [2024-11-17 09:33:37.637526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:98896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.178 [2024-11-17 09:33:37.637553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:53.178 [2024-11-17 09:33:37.637592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:98904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.178 [2024-11-17 09:33:37.637618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:53.178 [2024-11-17 09:33:37.637675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:98912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.178 [2024-11-17 09:33:37.637715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:53.178 [2024-11-17 09:33:37.637753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:98920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.178 [2024-11-17 09:33:37.637778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:53.178 [2024-11-17 09:33:37.637815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:98928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.178 [2024-11-17 09:33:37.637841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:53.178 [2024-11-17 09:33:37.637878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:98936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.178 [2024-11-17 09:33:37.637901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:53.178 [2024-11-17 09:33:37.637937] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:98944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.178 [2024-11-17 09:33:37.637962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:53.178 [2024-11-17 09:33:37.637998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:98952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.178 [2024-11-17 09:33:37.638023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:53.178 [2024-11-17 09:33:37.638059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:98960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.178 [2024-11-17 09:33:37.638082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:53.178 [2024-11-17 09:33:37.638119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:98968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.178 [2024-11-17 09:33:37.638143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:53.178 [2024-11-17 09:33:37.638179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:98976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.178 [2024-11-17 09:33:37.638204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:53.178 [2024-11-17 09:33:37.638241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:98984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.178 [2024-11-17 09:33:37.638265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:53.178 [2024-11-17 09:33:37.638315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:98992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.178 [2024-11-17 09:33:37.638341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:53.178 [2024-11-17 09:33:37.638403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:99000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.178 [2024-11-17 09:33:37.638436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:53.178 [2024-11-17 09:33:37.638474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:99008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.178 [2024-11-17 09:33:37.638499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:53.178 [2024-11-17 09:33:37.638536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:99016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.178 [2024-11-17 09:33:37.638560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0038 p:0 m:0 
dnr:0 00:33:53.178 [2024-11-17 09:33:37.638597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:99024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.178 [2024-11-17 09:33:37.638622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:53.178 [2024-11-17 09:33:37.638659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:99032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.178 [2024-11-17 09:33:37.638713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:53.178 [2024-11-17 09:33:37.638752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:99040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.178 [2024-11-17 09:33:37.638777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:53.178 [2024-11-17 09:33:37.638813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:99048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.178 [2024-11-17 09:33:37.638837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:53.178 [2024-11-17 09:33:37.638872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:99056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.178 [2024-11-17 09:33:37.638896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:53.178 [2024-11-17 09:33:37.638931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:99064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.178 [2024-11-17 09:33:37.638956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:53.178 [2024-11-17 09:33:37.638993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:99072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.178 [2024-11-17 09:33:37.639017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:53.178 [2024-11-17 09:33:37.639052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:99080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.178 [2024-11-17 09:33:37.639079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:53.178 [2024-11-17 09:33:37.639120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:99088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.178 [2024-11-17 09:33:37.639146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:53.178 [2024-11-17 09:33:37.639182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:99096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.178 [2024-11-17 09:33:37.639206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:53.178 [2024-11-17 09:33:37.639242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:99104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.178 [2024-11-17 09:33:37.639266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:53.178 [2024-11-17 09:33:37.639303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:99112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.178 [2024-11-17 09:33:37.639326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:53.178 [2024-11-17 09:33:37.639362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:99120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.178 [2024-11-17 09:33:37.639410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:53.178 [2024-11-17 09:33:37.639448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:99128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.178 [2024-11-17 09:33:37.639474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:53.178 [2024-11-17 09:33:37.639512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:99136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.178 [2024-11-17 09:33:37.639536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:53.178 [2024-11-17 09:33:37.639573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:99144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.178 [2024-11-17 09:33:37.639598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:53.178 [2024-11-17 09:33:37.639635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:99152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.178 [2024-11-17 09:33:37.639660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:53.178 [2024-11-17 09:33:37.639712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:99160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.179 [2024-11-17 09:33:37.639737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:53.179 [2024-11-17 09:33:37.639773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:99168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.179 [2024-11-17 09:33:37.639797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:53.179 [2024-11-17 09:33:37.639833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:99176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.179 [2024-11-17 09:33:37.639856] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:53.179 [2024-11-17 09:33:37.639892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:99184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.179 [2024-11-17 09:33:37.639921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:53.179 [2024-11-17 09:33:37.639958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:99192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.179 [2024-11-17 09:33:37.639983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:53.179 [2024-11-17 09:33:37.640018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:99200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.179 [2024-11-17 09:33:37.640042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:53.179 [2024-11-17 09:33:37.640077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:99208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.179 [2024-11-17 09:33:37.640102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:53.179 [2024-11-17 09:33:37.640138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:99216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.179 [2024-11-17 09:33:37.640162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:53.179 [2024-11-17 09:33:37.640198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:99224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.179 [2024-11-17 09:33:37.640222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:53.179 [2024-11-17 09:33:37.640258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:99232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.179 [2024-11-17 09:33:37.640282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:53.179 [2024-11-17 09:33:37.640319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:99240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.179 [2024-11-17 09:33:37.640343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:53.179 [2024-11-17 09:33:37.640402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:99248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.179 [2024-11-17 09:33:37.640428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:53.179 [2024-11-17 09:33:37.640466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:99256 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:33:53.179 [2024-11-17 09:33:37.640491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:33:53.179 [... a long run of further nvme_io_qpair_print_command / spdk_nvme_print_completion NOTICE pairs of the same form is omitted here: READ and WRITE commands on qid:1 (various cid/nsid/lba values) completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02), first in a burst at 09:33:37 (sqhd 0056-007a) and again in a burst at 09:33:54 (sqhd 0044 onward) that continues up to the shutdown below. Between the two bursts the periodic bdevperf samples dip from 5890.44 IOPS (23.01 MiB/s) to 4960.37 IOPS (19.38 MiB/s) and recover to 5817.58 IOPS (22.72 MiB/s) ...]
00:33:53.182 [2024-11-17 09:33:54.304037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:53.182 [2024-11-17 09:33:54.304075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:46536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.182 [2024-11-17 09:33:54.304101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:53.182 [2024-11-17 09:33:54.304139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:46552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.182 [2024-11-17 09:33:54.304165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:53.182 5871.97 IOPS, 22.94 MiB/s [2024-11-17T08:33:58.195Z] 5884.91 IOPS, 22.99 MiB/s [2024-11-17T08:33:58.195Z] 5894.35 IOPS, 23.02 MiB/s [2024-11-17T08:33:58.195Z] Received shutdown signal, test time was about 34.440746 seconds 00:33:53.182 00:33:53.182 Latency(us) 00:33:53.182 [2024-11-17T08:33:58.195Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:53.182 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:33:53.182 Verification LBA range: start 0x0 length 0x4000 00:33:53.182 Nvme0n1 : 34.44 5893.74 23.02 0.00 0.00 21679.89 219.97 4051386.97 00:33:53.182 [2024-11-17T08:33:58.195Z] =================================================================================================================== 00:33:53.182 [2024-11-17T08:33:58.195Z] Total : 5893.74 23.02 0.00 0.00 21679.89 219.97 4051386.97 00:33:53.182 09:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:53.182 09:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:33:53.182 09:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:53.182 09:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:33:53.182 09:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:53.182 09:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:33:53.182 09:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:53.182 09:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:33:53.182 09:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:53.182 09:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:53.182 rmmod nvme_tcp 00:33:53.182 rmmod nvme_fabrics 00:33:53.182 rmmod nvme_keyring 00:33:53.440 09:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:53.440 09:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:33:53.440 09:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:33:53.440 09:33:58 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 3103864 ']' 00:33:53.440 09:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 3103864 00:33:53.440 09:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 3103864 ']' 00:33:53.440 09:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 3103864 00:33:53.440 09:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:33:53.440 09:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:53.440 09:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3103864 00:33:53.440 09:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:53.440 09:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:53.440 09:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3103864' 00:33:53.440 killing process with pid 3103864 00:33:53.440 09:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 3103864 00:33:53.440 09:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 3103864 00:33:54.835 09:33:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:54.835 09:33:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:54.835 09:33:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:54.835 09:33:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:33:54.835 09:33:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:54.835 09:33:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:33:54.835 09:33:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:33:54.835 09:33:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:54.835 09:33:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:54.835 09:33:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:54.835 09:33:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:54.835 09:33:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:56.790 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:56.790 00:33:56.790 real 0m46.819s 00:33:56.790 user 2m19.980s 00:33:56.790 sys 0m11.083s 00:33:56.790 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:56.790 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:56.790 ************************************ 00:33:56.790 END TEST nvmf_host_multipath_status 00:33:56.790 ************************************ 
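For reference, the MiB/s column in the bdevperf summary above is simply the IOPS figure multiplied by the 4096-byte I/O size. A quick sanity check of the reported totals (a standalone snippet, not part of the test scripts):

    # Recompute throughput from the summary line "Nvme0n1 : 34.44 5893.74 23.02 ..."
    # MiB/s = IOPS * io_size_bytes / (1024 * 1024)
    iops=5893.74
    io_size=4096
    awk -v i="$iops" -v s="$io_size" 'BEGIN { printf "%.2f MiB/s\n", i * s / 1048576 }'
    # -> 23.02 MiB/s, matching the reported column

The same relation holds for the per-interval samples logged while the inaccessible path was failing I/O (for example 4960.37 IOPS * 4 KiB ≈ 19.38 MiB/s).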
00:33:56.790 09:34:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:33:56.790 09:34:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:56.790 09:34:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:56.790 09:34:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.790 ************************************ 00:33:56.790 START TEST nvmf_discovery_remove_ifc 00:33:56.790 ************************************ 00:33:56.790 09:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:33:56.790 * Looking for test storage... 00:33:56.790 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:56.790 09:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:56.790 09:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:33:56.791 09:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:56.791 09:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:56.791 09:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:56.791 09:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:56.791 09:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:56.791 09:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:33:56.791 09:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:33:56.791 09:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:33:56.791 09:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:33:56.791 09:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:33:56.791 09:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:33:56.791 09:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:33:56.791 09:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:56.791 09:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:33:56.791 09:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:33:56.791 09:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:56.791 09:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:56.791 09:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:33:56.791 09:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:33:56.791 09:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:56.791 09:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:33:56.791 09:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:33:56.791 09:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:33:56.791 09:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:33:56.791 09:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:56.791 09:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:33:57.050 09:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:33:57.050 09:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:57.050 09:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:57.050 09:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:33:57.050 09:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:57.050 09:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:57.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:57.050 --rc genhtml_branch_coverage=1 00:33:57.050 --rc genhtml_function_coverage=1 00:33:57.050 --rc genhtml_legend=1 00:33:57.050 --rc geninfo_all_blocks=1 00:33:57.050 --rc geninfo_unexecuted_blocks=1 00:33:57.050 00:33:57.050 ' 00:33:57.050 09:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:57.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:57.050 --rc genhtml_branch_coverage=1 00:33:57.050 --rc genhtml_function_coverage=1 00:33:57.050 --rc genhtml_legend=1 00:33:57.050 --rc geninfo_all_blocks=1 00:33:57.050 --rc geninfo_unexecuted_blocks=1 00:33:57.050 00:33:57.050 ' 00:33:57.050 09:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:57.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:57.050 --rc genhtml_branch_coverage=1 00:33:57.050 --rc genhtml_function_coverage=1 00:33:57.050 --rc genhtml_legend=1 00:33:57.050 --rc geninfo_all_blocks=1 00:33:57.050 --rc geninfo_unexecuted_blocks=1 00:33:57.050 00:33:57.050 ' 00:33:57.050 09:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:57.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:57.050 --rc genhtml_branch_coverage=1 00:33:57.050 --rc genhtml_function_coverage=1 00:33:57.050 --rc genhtml_legend=1 00:33:57.050 --rc geninfo_all_blocks=1 00:33:57.050 --rc geninfo_unexecuted_blocks=1 00:33:57.050 00:33:57.050 ' 00:33:57.050 09:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:57.050 
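The lcov probe traced above reduces to a dot-separated, element-wise numeric comparison (here deciding that lcov 1.15 predates 2.x and selecting the older option set). A minimal standalone sketch of that idea, not the actual scripts/common.sh implementation:

    # version_lt A B: succeed (return 0) if A < B, comparing dot-separated numeric fields.
    version_lt() {
        local IFS=.
        local -a a=($1) b=($2)
        local i x y
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            x=${a[i]:-0}; y=${b[i]:-0}
            (( x < y )) && return 0
            (( x > y )) && return 1
        done
        return 1   # equal is not "less than"
    }
    version_lt 1.15 2 && echo "lcov predates 2.x"   # prints the message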
09:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:33:57.050 09:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:57.050 09:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:57.050 09:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:57.050 09:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:57.050 09:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:57.050 09:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:57.050 09:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:57.050 09:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:57.050 09:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:57.050 09:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:57.050 09:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:57.050 09:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:57.051 09:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:57.051 09:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:57.051 09:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:57.051 09:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:57.051 09:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:57.051 09:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:33:57.051 09:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:57.051 09:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:57.051 09:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:57.051 09:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:57.051 09:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:57.051 09:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:57.051 09:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:33:57.051 09:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:57.051 09:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:33:57.051 09:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:57.051 09:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:57.051 09:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:57.051 09:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:57.051 09:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:57.051 09:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:57.051 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:57.051 09:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:57.051 09:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:57.051 09:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:57.051 09:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:33:57.051 09:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:33:57.051 09:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:33:57.051 09:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:33:57.051 09:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:33:57.051 09:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:33:57.051 09:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:33:57.051 09:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:57.051 09:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:57.051 09:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:57.051 09:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:57.051 09:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:57.051 09:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:57.051 09:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:57.051 09:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:57.051 09:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:57.051 09:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:57.051 09:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:33:57.051 09:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:58.954 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:58.954 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:33:58.954 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:58.954 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:58.954 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:58.954 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:58.954 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:58.954 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:33:58.954 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:58.954 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:33:58.954 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:33:58.954 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:33:58.954 09:34:03 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:33:58.954 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:33:58.954 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:33:58.954 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:58.954 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:58.954 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:58.954 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:58.954 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:58.954 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:58.954 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:58.954 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:58.954 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:58.954 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:58.954 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:58.954 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:58.954 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:58.954 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:58.954 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:58.954 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:58.954 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:58.954 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:58.954 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:58.954 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:58.954 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:58.954 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:58.954 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:58.954 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:58.955 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:58.955 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:58.955 09:34:03 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:58.955 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:58.955 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:58.955 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:58.955 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:58.955 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:58.955 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:58.955 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:58.955 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:58.955 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:58.955 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:58.955 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:58.955 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:58.955 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:58.955 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:58.955 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:58.955 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:58.955 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:58.955 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:58.955 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:58.955 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:58.955 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:58.955 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:58.955 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:58.955 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:58.955 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:58.955 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:58.955 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:58.955 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:58.955 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:58.955 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:33:58.955 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:58.955 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:33:58.955 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:58.955 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:58.955 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:58.955 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:58.955 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:58.955 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:58.955 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:58.955 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:58.955 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:58.955 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:58.955 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:58.955 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:58.955 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:58.955 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:58.955 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:58.955 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:58.955 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:58.955 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:58.955 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:58.955 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:58.955 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:58.955 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:58.955 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:58.955 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:58.955 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:58.955 
09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:58.955 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:58.955 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:33:58.955 00:33:58.955 --- 10.0.0.2 ping statistics --- 00:33:58.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:58.955 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:33:58.955 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:58.955 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:58.955 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:33:58.955 00:33:58.955 --- 10.0.0.1 ping statistics --- 00:33:58.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:58.955 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:33:58.955 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:58.955 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:33:58.955 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:58.955 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:58.955 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:58.955 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:58.955 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:58.955 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:58.955 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:58.955 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:33:58.955 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:58.955 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:58.955 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:58.955 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=3110896 00:33:58.955 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:33:58.955 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 3110896 00:33:58.955 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 3110896 ']' 00:33:58.955 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:58.955 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:58.955 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:33:58.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:58.955 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:58.955 09:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:59.214 [2024-11-17 09:34:04.038947] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:33:59.214 [2024-11-17 09:34:04.039107] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:59.214 [2024-11-17 09:34:04.191520] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:59.472 [2024-11-17 09:34:04.328747] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:59.472 [2024-11-17 09:34:04.328826] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:59.472 [2024-11-17 09:34:04.328853] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:59.473 [2024-11-17 09:34:04.328879] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:59.473 [2024-11-17 09:34:04.328910] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:59.473 [2024-11-17 09:34:04.330530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:00.039 09:34:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:00.039 09:34:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:34:00.039 09:34:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:00.039 09:34:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:00.039 09:34:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:00.297 09:34:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:00.297 09:34:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:34:00.297 09:34:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:00.297 09:34:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:00.297 [2024-11-17 09:34:05.078613] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:00.297 [2024-11-17 09:34:05.086892] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:34:00.297 null0 00:34:00.297 [2024-11-17 09:34:05.118733] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:00.297 09:34:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:00.297 09:34:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=3111049 00:34:00.297 09:34:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3111049 /tmp/host.sock 00:34:00.297 09:34:05 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:34:00.297 09:34:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 3111049 ']' 00:34:00.297 09:34:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:34:00.297 09:34:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:00.297 09:34:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:34:00.297 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:34:00.297 09:34:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:00.297 09:34:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:00.297 [2024-11-17 09:34:05.229953] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:34:00.297 [2024-11-17 09:34:05.230101] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3111049 ] 00:34:00.555 [2024-11-17 09:34:05.370121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:00.555 [2024-11-17 09:34:05.492330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:01.490 09:34:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:01.490 09:34:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:34:01.490 09:34:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:01.490 09:34:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:34:01.490 09:34:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.490 09:34:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:01.490 09:34:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.490 09:34:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:34:01.490 09:34:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.490 09:34:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:01.748 09:34:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.748 09:34:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:34:01.748 09:34:06 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.748 09:34:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:02.682 [2024-11-17 09:34:07.640925] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:34:02.682 [2024-11-17 09:34:07.640980] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:34:02.682 [2024-11-17 09:34:07.641020] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:02.940 [2024-11-17 09:34:07.727340] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:34:02.940 [2024-11-17 09:34:07.908913] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:34:02.940 [2024-11-17 09:34:07.910700] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x6150001f2c80:1 started. 00:34:02.940 [2024-11-17 09:34:07.912849] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:34:02.940 [2024-11-17 09:34:07.912940] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:34:02.940 [2024-11-17 09:34:07.913025] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:34:02.940 [2024-11-17 09:34:07.913065] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:34:02.940 [2024-11-17 09:34:07.913123] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:34:02.940 09:34:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.940 09:34:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:34:02.940 09:34:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:02.940 09:34:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:02.940 09:34:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:02.940 09:34:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.940 09:34:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:02.940 09:34:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:02.940 09:34:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:02.940 [2024-11-17 09:34:07.920115] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x6150001f2c80 was disconnected and freed. delete nvme_qpair. 
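Editor's note (hedged): the wait_for_bdev/get_bdev_list loop exercised above boils down to polling the host app's RPC socket until the expected bdev name appears. A minimal standalone sketch in bash, assuming SPDK's scripts/rpc.py is available and /tmp/host.sock is the host RPC socket; the 30-iteration cap is an illustration, not part of the original helpers:

    # Poll bdev_get_bdevs on the host socket until the bdev list matches "$1".
    # jq/sort/xargs normalize the JSON array the same way the trace above does.
    wait_for_bdev_sketch() {
        local want="$1" sock="/tmp/host.sock" got
        for _ in $(seq 1 30); do          # assumed cap; the test simply loops with sleep 1
            got=$(scripts/rpc.py -s "$sock" bdev_get_bdevs | jq -r '.[].name' | sort | xargs)
            [[ "$got" == "$want" ]] && return 0
            sleep 1
        done
        return 1
    }
    # usage: wait_for_bdev_sketch nvme0n1    (or "" to wait for the list to drain)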
00:34:02.940 09:34:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:03.230 09:34:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:34:03.230 09:34:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:34:03.230 09:34:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:34:03.230 09:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:34:03.230 09:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:03.230 09:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:03.230 09:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:03.230 09:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:03.230 09:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:03.230 09:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:03.230 09:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:03.230 09:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:03.230 09:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:03.230 09:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:04.163 09:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:04.163 09:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:04.163 09:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.163 09:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:04.163 09:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:04.163 09:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:04.163 09:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:04.163 09:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.163 09:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:04.163 09:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:05.095 09:34:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:05.096 09:34:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:05.096 09:34:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.096 09:34:10 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:05.096 09:34:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:05.096 09:34:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:05.096 09:34:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:05.353 09:34:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.353 09:34:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:05.353 09:34:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:06.287 09:34:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:06.287 09:34:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:06.287 09:34:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:06.287 09:34:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.287 09:34:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:06.287 09:34:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:06.287 09:34:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:06.287 09:34:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.287 09:34:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:06.287 09:34:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:07.222 09:34:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:07.222 09:34:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:07.222 09:34:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:07.222 09:34:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.222 09:34:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:07.222 09:34:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:07.222 09:34:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:07.222 09:34:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.222 09:34:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:07.222 09:34:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:08.596 09:34:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:08.596 09:34:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:08.596 09:34:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:08.596 09:34:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.596 09:34:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:08.596 09:34:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:08.596 09:34:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:08.596 09:34:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.596 09:34:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:08.596 09:34:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:08.596 [2024-11-17 09:34:13.354104] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:34:08.596 [2024-11-17 09:34:13.354209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:08.596 [2024-11-17 09:34:13.354245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:08.596 [2024-11-17 09:34:13.354277] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:08.596 [2024-11-17 09:34:13.354300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:08.596 [2024-11-17 09:34:13.354324] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:08.596 [2024-11-17 09:34:13.354347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:08.596 [2024-11-17 09:34:13.354381] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:08.596 [2024-11-17 09:34:13.354428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:08.596 [2024-11-17 09:34:13.354450] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:34:08.596 [2024-11-17 09:34:13.354470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:08.596 [2024-11-17 09:34:13.354489] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2780 is same with the state(6) to be set 00:34:08.596 [2024-11-17 09:34:13.364116] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2780 (9): Bad file descriptor 00:34:08.596 [2024-11-17 09:34:13.374165] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:34:08.596 [2024-11-17 09:34:13.374208] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
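Editor's note (hedged): the errno 110 / "Bad file descriptor" churn above is the direct result of the interface removal a few entries earlier (ip addr del plus link down inside the cvl_0_0_ns_spdk namespace). A short sketch of that fault injection, with a controller-state query added purely for observation; the test itself only polls bdev_get_bdevs:

    # Take the target's data path away inside its namespace (as the test does) ...
    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
    # ... then, for illustration only, dump the host-side controller state while it retries
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq .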
00:34:08.596 [2024-11-17 09:34:13.374229] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:34:08.596 [2024-11-17 09:34:13.374246] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:08.596 [2024-11-17 09:34:13.374316] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:34:09.529 09:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:09.529 09:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:09.529 09:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:09.529 09:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.529 09:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:09.529 09:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:09.529 09:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:09.529 [2024-11-17 09:34:14.394420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:34:09.529 [2024-11-17 09:34:14.394522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:34:09.529 [2024-11-17 09:34:14.394562] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2780 is same with the state(6) to be set 00:34:09.529 [2024-11-17 09:34:14.394638] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2780 (9): Bad file descriptor 00:34:09.529 [2024-11-17 09:34:14.395434] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:34:09.529 [2024-11-17 09:34:14.395499] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:09.529 [2024-11-17 09:34:14.395532] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:09.529 [2024-11-17 09:34:14.395555] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:34:09.530 [2024-11-17 09:34:14.395575] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:34:09.530 [2024-11-17 09:34:14.395593] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:34:09.530 [2024-11-17 09:34:14.395607] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:34:09.530 [2024-11-17 09:34:14.395628] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:34:09.530 [2024-11-17 09:34:14.395642] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:09.530 09:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.530 09:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:09.530 09:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:10.464 [2024-11-17 09:34:15.398183] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:34:10.464 [2024-11-17 09:34:15.398232] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:34:10.464 [2024-11-17 09:34:15.398263] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:10.464 [2024-11-17 09:34:15.398285] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:10.464 [2024-11-17 09:34:15.398306] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:34:10.464 [2024-11-17 09:34:15.398328] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:34:10.464 [2024-11-17 09:34:15.398345] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:34:10.464 [2024-11-17 09:34:15.398359] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:34:10.464 [2024-11-17 09:34:15.398470] bdev_nvme.c:7135:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:34:10.464 [2024-11-17 09:34:15.398542] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:10.464 [2024-11-17 09:34:15.398585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.464 [2024-11-17 09:34:15.398613] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:10.464 [2024-11-17 09:34:15.398633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.464 [2024-11-17 09:34:15.398671] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:10.464 [2024-11-17 09:34:15.398694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.464 [2024-11-17 09:34:15.398717] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:10.464 [2024-11-17 09:34:15.398746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.464 [2024-11-17 09:34:15.398770] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:34:10.464 [2024-11-17 09:34:15.398792] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.464 [2024-11-17 09:34:15.398813] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:34:10.464 [2024-11-17 09:34:15.398901] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:34:10.464 [2024-11-17 09:34:15.399883] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:34:10.464 [2024-11-17 09:34:15.399920] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:34:10.464 09:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:10.464 09:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:10.464 09:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:10.464 09:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.464 09:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:10.464 09:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:10.464 09:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:10.464 09:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.464 09:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:34:10.464 09:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:10.464 09:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:10.722 09:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:34:10.722 09:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:10.722 09:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:10.722 09:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:10.722 09:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.722 09:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:10.722 09:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:10.722 09:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:10.722 09:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.722 09:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:34:10.722 09:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:11.656 09:34:16 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:11.656 09:34:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:11.656 09:34:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.656 09:34:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:11.656 09:34:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:11.656 09:34:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:11.656 09:34:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:11.656 09:34:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.656 09:34:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:34:11.656 09:34:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:12.589 [2024-11-17 09:34:17.458557] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:34:12.589 [2024-11-17 09:34:17.458599] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:34:12.589 [2024-11-17 09:34:17.458643] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:12.589 [2024-11-17 09:34:17.544968] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:34:12.589 09:34:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:12.589 09:34:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:12.589 09:34:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:12.589 09:34:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.589 09:34:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:12.589 09:34:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:12.589 09:34:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:12.589 09:34:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.847 09:34:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:34:12.847 09:34:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:12.847 [2024-11-17 09:34:17.727461] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:34:12.847 [2024-11-17 09:34:17.729166] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x6150001f3900:1 started. 
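Editor's note (hedged): the re-attach above is triggered by restoring the target's address and link inside the namespace (discovery_remove_ifc.sh@82/@83 in this trace), after which discovery creates a fresh controller and the bdev comes back as nvme1n1. A minimal sketch, reusing the illustrative wait_for_bdev_sketch helper from the earlier note (that helper is not part of the test):

    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    wait_for_bdev_sketch nvme1n1    # new controller => new bdev name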
00:34:12.847 [2024-11-17 09:34:17.731643] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:34:12.847 [2024-11-17 09:34:17.731738] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:34:12.847 [2024-11-17 09:34:17.731823] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:34:12.847 [2024-11-17 09:34:17.731866] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:34:12.847 [2024-11-17 09:34:17.731896] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:34:12.847 [2024-11-17 09:34:17.737085] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x6150001f3900 was disconnected and freed. delete nvme_qpair. 00:34:13.781 09:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:13.781 09:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:13.781 09:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:13.781 09:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.781 09:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:13.781 09:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:13.781 09:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:13.781 09:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.781 09:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:34:13.781 09:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:34:13.781 09:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 3111049 00:34:13.781 09:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 3111049 ']' 00:34:13.781 09:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 3111049 00:34:13.781 09:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:34:13.781 09:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:13.781 09:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3111049 00:34:13.781 09:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:13.781 09:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:13.781 09:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3111049' 00:34:13.781 killing process with pid 3111049 00:34:13.781 09:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 3111049 00:34:13.781 09:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 3111049 00:34:14.713 
09:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:34:14.713 09:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:14.713 09:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:34:14.713 09:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:14.713 09:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:34:14.713 09:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:14.713 09:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:14.713 rmmod nvme_tcp 00:34:14.713 rmmod nvme_fabrics 00:34:14.713 rmmod nvme_keyring 00:34:14.713 09:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:14.713 09:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:34:14.713 09:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:34:14.713 09:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 3110896 ']' 00:34:14.713 09:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 3110896 00:34:14.713 09:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 3110896 ']' 00:34:14.713 09:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 3110896 00:34:14.714 09:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:34:14.714 09:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:14.714 09:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3110896 00:34:14.972 09:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:14.972 09:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:14.972 09:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3110896' 00:34:14.972 killing process with pid 3110896 00:34:14.972 09:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 3110896 00:34:14.972 09:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 3110896 00:34:15.906 09:34:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:15.906 09:34:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:15.906 09:34:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:15.906 09:34:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:34:15.906 09:34:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:34:15.906 09:34:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:15.906 09:34:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:34:15.906 09:34:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:15.906 09:34:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:15.906 09:34:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:15.906 09:34:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:15.906 09:34:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:18.457 09:34:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:18.457 00:34:18.457 real 0m21.278s 00:34:18.457 user 0m31.353s 00:34:18.457 sys 0m3.322s 00:34:18.457 09:34:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:18.457 09:34:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:18.457 ************************************ 00:34:18.457 END TEST nvmf_discovery_remove_ifc 00:34:18.457 ************************************ 00:34:18.457 09:34:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:34:18.457 09:34:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:18.457 09:34:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:18.457 09:34:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.457 ************************************ 00:34:18.457 START TEST nvmf_identify_kernel_target 00:34:18.457 ************************************ 00:34:18.457 09:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:34:18.457 * Looking for test storage... 
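Editor's note (hedged): the nvmftestfini teardown that ran just above unloads the kernel NVMe/TCP modules, strips the SPDK-tagged iptables rule, removes the target namespace and flushes the leftover initiator address. A condensed sketch; the modprobe/iptables/addr-flush lines appear verbatim in the trace, while "ip netns del" is an assumption about what _remove_spdk_ns does internally:

    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the SPDK_NVMF-tagged rule
    ip netns del cvl_0_0_ns_spdk 2>/dev/null               # assumed equivalent of _remove_spdk_ns
    ip -4 addr flush cvl_0_1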
00:34:18.457 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:18.457 09:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:18.457 09:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:34:18.457 09:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:18.457 09:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:18.457 09:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:18.457 09:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:18.457 09:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:18.457 09:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:34:18.457 09:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:34:18.457 09:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:34:18.457 09:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:34:18.457 09:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:34:18.457 09:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:34:18.457 09:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:34:18.457 09:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:18.457 09:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:34:18.457 09:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:34:18.457 09:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:18.457 09:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:18.457 09:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:34:18.457 09:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:34:18.457 09:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:18.457 09:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:34:18.457 09:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:34:18.457 09:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:34:18.457 09:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:34:18.457 09:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:18.457 09:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:34:18.457 09:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:34:18.457 09:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:18.457 09:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:18.457 09:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:34:18.457 09:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:18.457 09:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:18.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:18.457 --rc genhtml_branch_coverage=1 00:34:18.457 --rc genhtml_function_coverage=1 00:34:18.457 --rc genhtml_legend=1 00:34:18.457 --rc geninfo_all_blocks=1 00:34:18.457 --rc geninfo_unexecuted_blocks=1 00:34:18.457 00:34:18.457 ' 00:34:18.457 09:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:18.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:18.457 --rc genhtml_branch_coverage=1 00:34:18.457 --rc genhtml_function_coverage=1 00:34:18.457 --rc genhtml_legend=1 00:34:18.457 --rc geninfo_all_blocks=1 00:34:18.457 --rc geninfo_unexecuted_blocks=1 00:34:18.457 00:34:18.457 ' 00:34:18.457 09:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:18.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:18.457 --rc genhtml_branch_coverage=1 00:34:18.457 --rc genhtml_function_coverage=1 00:34:18.457 --rc genhtml_legend=1 00:34:18.457 --rc geninfo_all_blocks=1 00:34:18.457 --rc geninfo_unexecuted_blocks=1 00:34:18.457 00:34:18.457 ' 00:34:18.457 09:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:18.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:18.457 --rc genhtml_branch_coverage=1 00:34:18.457 --rc genhtml_function_coverage=1 00:34:18.457 --rc genhtml_legend=1 00:34:18.457 --rc geninfo_all_blocks=1 00:34:18.457 --rc geninfo_unexecuted_blocks=1 00:34:18.457 00:34:18.457 ' 00:34:18.457 09:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:18.457 09:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:34:18.457 09:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:18.457 09:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:18.457 09:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:18.457 09:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:18.457 09:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:18.457 09:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:18.457 09:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:18.457 09:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:18.457 09:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:18.457 09:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:18.457 09:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:18.458 09:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:18.458 09:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:18.458 09:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:18.458 09:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:18.458 09:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:18.458 09:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:18.458 09:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:34:18.458 09:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:18.458 09:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:18.458 09:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:18.458 09:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:18.458 09:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:18.458 09:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:18.458 09:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:34:18.458 09:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:18.458 09:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:34:18.458 09:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:18.458 09:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:18.458 09:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:18.458 09:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:18.458 09:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:18.458 09:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:34:18.458 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:18.458 09:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:18.458 09:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:18.458 09:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:18.458 09:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:34:18.458 09:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:18.458 09:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:18.458 09:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:18.458 09:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:18.458 09:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:18.458 09:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:18.458 09:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:18.458 09:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:18.458 09:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:18.458 09:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:18.458 09:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:34:18.458 09:34:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:34:20.360 09:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:20.360 09:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:34:20.360 09:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:20.361 09:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:20.361 09:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:20.361 09:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:20.361 09:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:20.361 09:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:34:20.361 09:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:20.361 09:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:34:20.361 09:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:34:20.361 09:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:34:20.361 09:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:34:20.361 09:34:24 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:34:20.361 09:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:34:20.361 09:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:20.361 09:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:20.361 09:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:20.361 09:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:20.361 09:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:20.361 09:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:20.361 09:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:20.361 09:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:20.361 09:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:20.361 09:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:20.361 09:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:20.361 09:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:20.361 09:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:20.361 09:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:20.361 09:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:20.361 09:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:20.361 09:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:20.361 09:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:20.361 09:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:20.361 09:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:20.361 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:20.361 09:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:20.361 09:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:20.361 09:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:20.361 09:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:20.361 09:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:20.361 09:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:20.361 09:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:20.361 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:20.361 09:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:20.361 09:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:20.361 09:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:20.361 09:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:20.361 09:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:20.361 09:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:20.361 09:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:20.361 09:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:20.361 09:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:20.361 09:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:20.361 09:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:20.361 09:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:20.361 09:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:20.361 09:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:20.361 09:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:20.361 09:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:20.361 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:20.361 09:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:20.361 09:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:20.361 09:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:20.361 09:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:20.361 09:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:20.361 09:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:20.361 09:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:20.361 09:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:20.361 09:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:20.361 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:20.361 09:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:34:20.361 09:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:20.361 09:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:34:20.361 09:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:20.361 09:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:20.361 09:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:20.361 09:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:20.361 09:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:20.361 09:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:20.361 09:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:20.361 09:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:20.361 09:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:20.361 09:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:20.361 09:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:20.361 09:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:20.361 09:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:20.361 09:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:20.361 09:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:20.361 09:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:20.361 09:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:20.361 09:34:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:20.361 09:34:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:20.361 09:34:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:20.361 09:34:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:20.361 09:34:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:20.361 09:34:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:20.361 09:34:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:20.361 09:34:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:20.361 09:34:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:20.361 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:20.361 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.211 ms 00:34:20.361 00:34:20.361 --- 10.0.0.2 ping statistics --- 00:34:20.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:20.361 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:34:20.361 09:34:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:20.361 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:20.362 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:34:20.362 00:34:20.362 --- 10.0.0.1 ping statistics --- 00:34:20.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:20.362 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:34:20.362 09:34:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:20.362 09:34:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:34:20.362 09:34:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:20.362 09:34:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:20.362 09:34:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:20.362 09:34:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:20.362 09:34:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:20.362 09:34:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:20.362 09:34:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:20.362 09:34:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:34:20.362 09:34:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:34:20.362 09:34:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:34:20.362 09:34:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:20.362 09:34:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:20.362 09:34:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:20.362 09:34:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:20.362 09:34:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:20.362 09:34:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:20.362 09:34:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:20.362 09:34:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:20.362 09:34:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:20.362 09:34:25 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:34:20.362 09:34:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:34:20.362 09:34:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:34:20.362 09:34:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:34:20.362 09:34:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:20.362 09:34:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:20.362 09:34:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:34:20.362 09:34:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:34:20.362 09:34:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:34:20.362 09:34:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:34:20.362 09:34:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:34:20.362 09:34:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:21.297 Waiting for block devices as requested 00:34:21.297 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:34:21.555 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:34:21.555 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:34:21.555 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:34:21.813 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:34:21.813 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:34:21.813 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:34:21.814 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:34:22.073 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:34:22.073 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:34:22.073 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:34:22.073 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:34:22.331 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:34:22.331 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:34:22.331 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:34:22.331 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:34:22.589 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:34:22.589 09:34:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:34:22.589 09:34:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:34:22.589 09:34:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:34:22.589 09:34:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:34:22.589 09:34:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:34:22.589 09:34:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 
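The nvmf_tcp_init sequence above splits the two E810 ports between network namespaces: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target interface (10.0.0.2/24), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1/24), an iptables ACCEPT rule tagged SPDK_NVMF opens TCP/4420, and cross-namespace pings confirm the link. A minimal standalone sketch of that topology, assuming the same two port names as this run (adjust for other hardware), would be:

#!/usr/bin/env bash
# Sketch of the physical-NIC test topology set up by nvmf_tcp_init above.
# IF_TGT / IF_INIT are placeholders for the two ports of the same NIC
# (cvl_0_0 / cvl_0_1 in this run); adjust for your hardware.
set -euo pipefail

IF_TGT=cvl_0_0          # port handed to the target-side namespace
IF_INIT=cvl_0_1         # port left in the root namespace for the initiator
NS=cvl_0_0_ns_spdk
TGT_IP=10.0.0.2
INIT_IP=10.0.0.1

ip -4 addr flush "$IF_TGT"
ip -4 addr flush "$IF_INIT"

ip netns add "$NS"
ip link set "$IF_TGT" netns "$NS"

ip addr add "$INIT_IP/24" dev "$IF_INIT"
ip netns exec "$NS" ip addr add "$TGT_IP/24" dev "$IF_TGT"

ip link set "$IF_INIT" up
ip netns exec "$NS" ip link set "$IF_TGT" up
ip netns exec "$NS" ip link set lo up

# Allow NVMe/TCP traffic in; the comment tag lets the cleanup path strip
# exactly this rule later with iptables-save | grep -v SPDK_NVMF.
iptables -I INPUT 1 -i "$IF_INIT" -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

# Sanity-check both directions before running any NVMe-oF traffic.
ping -c 1 "$TGT_IP"
ip netns exec "$NS" ping -c 1 "$INIT_IP"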
00:34:22.589 09:34:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:34:22.589 09:34:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:34:22.589 09:34:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:34:22.589 No valid GPT data, bailing 00:34:22.589 09:34:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:34:22.847 09:34:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:34:22.847 09:34:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:34:22.847 09:34:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:34:22.847 09:34:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:34:22.847 09:34:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:22.848 09:34:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:22.848 09:34:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:34:22.848 09:34:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:34:22.848 09:34:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:34:22.848 09:34:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:34:22.848 09:34:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:34:22.848 09:34:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:34:22.848 09:34:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:34:22.848 09:34:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:34:22.848 09:34:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:34:22.848 09:34:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:34:22.848 09:34:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:34:22.848 00:34:22.848 Discovery Log Number of Records 2, Generation counter 2 00:34:22.848 =====Discovery Log Entry 0====== 00:34:22.848 trtype: tcp 00:34:22.848 adrfam: ipv4 00:34:22.848 subtype: current discovery subsystem 00:34:22.848 treq: not specified, sq flow control disable supported 00:34:22.848 portid: 1 00:34:22.848 trsvcid: 4420 00:34:22.848 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:34:22.848 traddr: 10.0.0.1 00:34:22.848 eflags: none 00:34:22.848 sectype: none 00:34:22.848 =====Discovery Log Entry 1====== 00:34:22.848 trtype: tcp 00:34:22.848 adrfam: ipv4 00:34:22.848 subtype: nvme subsystem 00:34:22.848 treq: not specified, sq flow control disable 
supported 00:34:22.848 portid: 1 00:34:22.848 trsvcid: 4420 00:34:22.848 subnqn: nqn.2016-06.io.spdk:testnqn 00:34:22.848 traddr: 10.0.0.1 00:34:22.848 eflags: none 00:34:22.848 sectype: none 00:34:22.848 09:34:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:34:22.848 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:34:23.107 ===================================================== 00:34:23.107 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:34:23.107 ===================================================== 00:34:23.107 Controller Capabilities/Features 00:34:23.107 ================================ 00:34:23.107 Vendor ID: 0000 00:34:23.107 Subsystem Vendor ID: 0000 00:34:23.107 Serial Number: 23f2e7b6588c100456c2 00:34:23.107 Model Number: Linux 00:34:23.107 Firmware Version: 6.8.9-20 00:34:23.107 Recommended Arb Burst: 0 00:34:23.107 IEEE OUI Identifier: 00 00 00 00:34:23.107 Multi-path I/O 00:34:23.107 May have multiple subsystem ports: No 00:34:23.107 May have multiple controllers: No 00:34:23.107 Associated with SR-IOV VF: No 00:34:23.107 Max Data Transfer Size: Unlimited 00:34:23.108 Max Number of Namespaces: 0 00:34:23.108 Max Number of I/O Queues: 1024 00:34:23.108 NVMe Specification Version (VS): 1.3 00:34:23.108 NVMe Specification Version (Identify): 1.3 00:34:23.108 Maximum Queue Entries: 1024 00:34:23.108 Contiguous Queues Required: No 00:34:23.108 Arbitration Mechanisms Supported 00:34:23.108 Weighted Round Robin: Not Supported 00:34:23.108 Vendor Specific: Not Supported 00:34:23.108 Reset Timeout: 7500 ms 00:34:23.108 Doorbell Stride: 4 bytes 00:34:23.108 NVM Subsystem Reset: Not Supported 00:34:23.108 Command Sets Supported 00:34:23.108 NVM Command Set: Supported 00:34:23.108 Boot Partition: Not Supported 00:34:23.108 Memory Page Size Minimum: 4096 bytes 00:34:23.108 Memory Page Size Maximum: 4096 bytes 00:34:23.108 Persistent Memory Region: Not Supported 00:34:23.108 Optional Asynchronous Events Supported 00:34:23.108 Namespace Attribute Notices: Not Supported 00:34:23.108 Firmware Activation Notices: Not Supported 00:34:23.108 ANA Change Notices: Not Supported 00:34:23.108 PLE Aggregate Log Change Notices: Not Supported 00:34:23.108 LBA Status Info Alert Notices: Not Supported 00:34:23.108 EGE Aggregate Log Change Notices: Not Supported 00:34:23.108 Normal NVM Subsystem Shutdown event: Not Supported 00:34:23.108 Zone Descriptor Change Notices: Not Supported 00:34:23.108 Discovery Log Change Notices: Supported 00:34:23.108 Controller Attributes 00:34:23.108 128-bit Host Identifier: Not Supported 00:34:23.108 Non-Operational Permissive Mode: Not Supported 00:34:23.108 NVM Sets: Not Supported 00:34:23.108 Read Recovery Levels: Not Supported 00:34:23.108 Endurance Groups: Not Supported 00:34:23.108 Predictable Latency Mode: Not Supported 00:34:23.108 Traffic Based Keep ALive: Not Supported 00:34:23.108 Namespace Granularity: Not Supported 00:34:23.108 SQ Associations: Not Supported 00:34:23.108 UUID List: Not Supported 00:34:23.108 Multi-Domain Subsystem: Not Supported 00:34:23.108 Fixed Capacity Management: Not Supported 00:34:23.108 Variable Capacity Management: Not Supported 00:34:23.108 Delete Endurance Group: Not Supported 00:34:23.108 Delete NVM Set: Not Supported 00:34:23.108 Extended LBA Formats Supported: Not Supported 00:34:23.108 Flexible Data Placement 
Supported: Not Supported 00:34:23.108 00:34:23.108 Controller Memory Buffer Support 00:34:23.108 ================================ 00:34:23.108 Supported: No 00:34:23.108 00:34:23.108 Persistent Memory Region Support 00:34:23.108 ================================ 00:34:23.108 Supported: No 00:34:23.108 00:34:23.108 Admin Command Set Attributes 00:34:23.108 ============================ 00:34:23.108 Security Send/Receive: Not Supported 00:34:23.108 Format NVM: Not Supported 00:34:23.108 Firmware Activate/Download: Not Supported 00:34:23.108 Namespace Management: Not Supported 00:34:23.108 Device Self-Test: Not Supported 00:34:23.108 Directives: Not Supported 00:34:23.108 NVMe-MI: Not Supported 00:34:23.108 Virtualization Management: Not Supported 00:34:23.108 Doorbell Buffer Config: Not Supported 00:34:23.108 Get LBA Status Capability: Not Supported 00:34:23.108 Command & Feature Lockdown Capability: Not Supported 00:34:23.108 Abort Command Limit: 1 00:34:23.108 Async Event Request Limit: 1 00:34:23.108 Number of Firmware Slots: N/A 00:34:23.108 Firmware Slot 1 Read-Only: N/A 00:34:23.108 Firmware Activation Without Reset: N/A 00:34:23.108 Multiple Update Detection Support: N/A 00:34:23.108 Firmware Update Granularity: No Information Provided 00:34:23.108 Per-Namespace SMART Log: No 00:34:23.108 Asymmetric Namespace Access Log Page: Not Supported 00:34:23.108 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:34:23.108 Command Effects Log Page: Not Supported 00:34:23.108 Get Log Page Extended Data: Supported 00:34:23.108 Telemetry Log Pages: Not Supported 00:34:23.108 Persistent Event Log Pages: Not Supported 00:34:23.108 Supported Log Pages Log Page: May Support 00:34:23.108 Commands Supported & Effects Log Page: Not Supported 00:34:23.108 Feature Identifiers & Effects Log Page:May Support 00:34:23.108 NVMe-MI Commands & Effects Log Page: May Support 00:34:23.108 Data Area 4 for Telemetry Log: Not Supported 00:34:23.108 Error Log Page Entries Supported: 1 00:34:23.108 Keep Alive: Not Supported 00:34:23.108 00:34:23.108 NVM Command Set Attributes 00:34:23.108 ========================== 00:34:23.108 Submission Queue Entry Size 00:34:23.108 Max: 1 00:34:23.108 Min: 1 00:34:23.108 Completion Queue Entry Size 00:34:23.108 Max: 1 00:34:23.108 Min: 1 00:34:23.108 Number of Namespaces: 0 00:34:23.108 Compare Command: Not Supported 00:34:23.108 Write Uncorrectable Command: Not Supported 00:34:23.108 Dataset Management Command: Not Supported 00:34:23.108 Write Zeroes Command: Not Supported 00:34:23.108 Set Features Save Field: Not Supported 00:34:23.108 Reservations: Not Supported 00:34:23.108 Timestamp: Not Supported 00:34:23.108 Copy: Not Supported 00:34:23.108 Volatile Write Cache: Not Present 00:34:23.108 Atomic Write Unit (Normal): 1 00:34:23.108 Atomic Write Unit (PFail): 1 00:34:23.108 Atomic Compare & Write Unit: 1 00:34:23.108 Fused Compare & Write: Not Supported 00:34:23.108 Scatter-Gather List 00:34:23.108 SGL Command Set: Supported 00:34:23.108 SGL Keyed: Not Supported 00:34:23.108 SGL Bit Bucket Descriptor: Not Supported 00:34:23.108 SGL Metadata Pointer: Not Supported 00:34:23.108 Oversized SGL: Not Supported 00:34:23.108 SGL Metadata Address: Not Supported 00:34:23.108 SGL Offset: Supported 00:34:23.108 Transport SGL Data Block: Not Supported 00:34:23.108 Replay Protected Memory Block: Not Supported 00:34:23.108 00:34:23.108 Firmware Slot Information 00:34:23.108 ========================= 00:34:23.108 Active slot: 0 00:34:23.108 00:34:23.108 00:34:23.108 Error Log 00:34:23.108 
========= 00:34:23.108 00:34:23.108 Active Namespaces 00:34:23.108 ================= 00:34:23.108 Discovery Log Page 00:34:23.108 ================== 00:34:23.108 Generation Counter: 2 00:34:23.108 Number of Records: 2 00:34:23.108 Record Format: 0 00:34:23.108 00:34:23.108 Discovery Log Entry 0 00:34:23.108 ---------------------- 00:34:23.108 Transport Type: 3 (TCP) 00:34:23.108 Address Family: 1 (IPv4) 00:34:23.108 Subsystem Type: 3 (Current Discovery Subsystem) 00:34:23.108 Entry Flags: 00:34:23.108 Duplicate Returned Information: 0 00:34:23.108 Explicit Persistent Connection Support for Discovery: 0 00:34:23.108 Transport Requirements: 00:34:23.108 Secure Channel: Not Specified 00:34:23.108 Port ID: 1 (0x0001) 00:34:23.108 Controller ID: 65535 (0xffff) 00:34:23.108 Admin Max SQ Size: 32 00:34:23.108 Transport Service Identifier: 4420 00:34:23.108 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:34:23.108 Transport Address: 10.0.0.1 00:34:23.108 Discovery Log Entry 1 00:34:23.108 ---------------------- 00:34:23.108 Transport Type: 3 (TCP) 00:34:23.108 Address Family: 1 (IPv4) 00:34:23.108 Subsystem Type: 2 (NVM Subsystem) 00:34:23.108 Entry Flags: 00:34:23.108 Duplicate Returned Information: 0 00:34:23.108 Explicit Persistent Connection Support for Discovery: 0 00:34:23.108 Transport Requirements: 00:34:23.108 Secure Channel: Not Specified 00:34:23.108 Port ID: 1 (0x0001) 00:34:23.108 Controller ID: 65535 (0xffff) 00:34:23.108 Admin Max SQ Size: 32 00:34:23.108 Transport Service Identifier: 4420 00:34:23.108 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:34:23.108 Transport Address: 10.0.0.1 00:34:23.108 09:34:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:23.108 get_feature(0x01) failed 00:34:23.108 get_feature(0x02) failed 00:34:23.108 get_feature(0x04) failed 00:34:23.108 ===================================================== 00:34:23.108 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:23.108 ===================================================== 00:34:23.108 Controller Capabilities/Features 00:34:23.108 ================================ 00:34:23.108 Vendor ID: 0000 00:34:23.108 Subsystem Vendor ID: 0000 00:34:23.108 Serial Number: 100351bee41b30c9c6d0 00:34:23.108 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:34:23.108 Firmware Version: 6.8.9-20 00:34:23.108 Recommended Arb Burst: 6 00:34:23.108 IEEE OUI Identifier: 00 00 00 00:34:23.108 Multi-path I/O 00:34:23.108 May have multiple subsystem ports: Yes 00:34:23.108 May have multiple controllers: Yes 00:34:23.108 Associated with SR-IOV VF: No 00:34:23.109 Max Data Transfer Size: Unlimited 00:34:23.109 Max Number of Namespaces: 1024 00:34:23.109 Max Number of I/O Queues: 128 00:34:23.109 NVMe Specification Version (VS): 1.3 00:34:23.109 NVMe Specification Version (Identify): 1.3 00:34:23.109 Maximum Queue Entries: 1024 00:34:23.109 Contiguous Queues Required: No 00:34:23.109 Arbitration Mechanisms Supported 00:34:23.109 Weighted Round Robin: Not Supported 00:34:23.109 Vendor Specific: Not Supported 00:34:23.109 Reset Timeout: 7500 ms 00:34:23.109 Doorbell Stride: 4 bytes 00:34:23.109 NVM Subsystem Reset: Not Supported 00:34:23.109 Command Sets Supported 00:34:23.109 NVM Command Set: Supported 00:34:23.109 Boot Partition: Not Supported 00:34:23.109 
Memory Page Size Minimum: 4096 bytes 00:34:23.109 Memory Page Size Maximum: 4096 bytes 00:34:23.109 Persistent Memory Region: Not Supported 00:34:23.109 Optional Asynchronous Events Supported 00:34:23.109 Namespace Attribute Notices: Supported 00:34:23.109 Firmware Activation Notices: Not Supported 00:34:23.109 ANA Change Notices: Supported 00:34:23.109 PLE Aggregate Log Change Notices: Not Supported 00:34:23.109 LBA Status Info Alert Notices: Not Supported 00:34:23.109 EGE Aggregate Log Change Notices: Not Supported 00:34:23.109 Normal NVM Subsystem Shutdown event: Not Supported 00:34:23.109 Zone Descriptor Change Notices: Not Supported 00:34:23.109 Discovery Log Change Notices: Not Supported 00:34:23.109 Controller Attributes 00:34:23.109 128-bit Host Identifier: Supported 00:34:23.109 Non-Operational Permissive Mode: Not Supported 00:34:23.109 NVM Sets: Not Supported 00:34:23.109 Read Recovery Levels: Not Supported 00:34:23.109 Endurance Groups: Not Supported 00:34:23.109 Predictable Latency Mode: Not Supported 00:34:23.109 Traffic Based Keep ALive: Supported 00:34:23.109 Namespace Granularity: Not Supported 00:34:23.109 SQ Associations: Not Supported 00:34:23.109 UUID List: Not Supported 00:34:23.109 Multi-Domain Subsystem: Not Supported 00:34:23.109 Fixed Capacity Management: Not Supported 00:34:23.109 Variable Capacity Management: Not Supported 00:34:23.109 Delete Endurance Group: Not Supported 00:34:23.109 Delete NVM Set: Not Supported 00:34:23.109 Extended LBA Formats Supported: Not Supported 00:34:23.109 Flexible Data Placement Supported: Not Supported 00:34:23.109 00:34:23.109 Controller Memory Buffer Support 00:34:23.109 ================================ 00:34:23.109 Supported: No 00:34:23.109 00:34:23.109 Persistent Memory Region Support 00:34:23.109 ================================ 00:34:23.109 Supported: No 00:34:23.109 00:34:23.109 Admin Command Set Attributes 00:34:23.109 ============================ 00:34:23.109 Security Send/Receive: Not Supported 00:34:23.109 Format NVM: Not Supported 00:34:23.109 Firmware Activate/Download: Not Supported 00:34:23.109 Namespace Management: Not Supported 00:34:23.109 Device Self-Test: Not Supported 00:34:23.109 Directives: Not Supported 00:34:23.109 NVMe-MI: Not Supported 00:34:23.109 Virtualization Management: Not Supported 00:34:23.109 Doorbell Buffer Config: Not Supported 00:34:23.109 Get LBA Status Capability: Not Supported 00:34:23.109 Command & Feature Lockdown Capability: Not Supported 00:34:23.109 Abort Command Limit: 4 00:34:23.109 Async Event Request Limit: 4 00:34:23.109 Number of Firmware Slots: N/A 00:34:23.109 Firmware Slot 1 Read-Only: N/A 00:34:23.109 Firmware Activation Without Reset: N/A 00:34:23.109 Multiple Update Detection Support: N/A 00:34:23.109 Firmware Update Granularity: No Information Provided 00:34:23.109 Per-Namespace SMART Log: Yes 00:34:23.109 Asymmetric Namespace Access Log Page: Supported 00:34:23.109 ANA Transition Time : 10 sec 00:34:23.109 00:34:23.109 Asymmetric Namespace Access Capabilities 00:34:23.109 ANA Optimized State : Supported 00:34:23.109 ANA Non-Optimized State : Supported 00:34:23.109 ANA Inaccessible State : Supported 00:34:23.109 ANA Persistent Loss State : Supported 00:34:23.109 ANA Change State : Supported 00:34:23.109 ANAGRPID is not changed : No 00:34:23.109 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:34:23.109 00:34:23.109 ANA Group Identifier Maximum : 128 00:34:23.109 Number of ANA Group Identifiers : 128 00:34:23.109 Max Number of Allowed Namespaces : 1024 00:34:23.109 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:34:23.109 Command Effects Log Page: Supported 00:34:23.109 Get Log Page Extended Data: Supported 00:34:23.109 Telemetry Log Pages: Not Supported 00:34:23.109 Persistent Event Log Pages: Not Supported 00:34:23.109 Supported Log Pages Log Page: May Support 00:34:23.109 Commands Supported & Effects Log Page: Not Supported 00:34:23.109 Feature Identifiers & Effects Log Page:May Support 00:34:23.109 NVMe-MI Commands & Effects Log Page: May Support 00:34:23.109 Data Area 4 for Telemetry Log: Not Supported 00:34:23.109 Error Log Page Entries Supported: 128 00:34:23.109 Keep Alive: Supported 00:34:23.109 Keep Alive Granularity: 1000 ms 00:34:23.109 00:34:23.109 NVM Command Set Attributes 00:34:23.109 ========================== 00:34:23.109 Submission Queue Entry Size 00:34:23.109 Max: 64 00:34:23.109 Min: 64 00:34:23.109 Completion Queue Entry Size 00:34:23.109 Max: 16 00:34:23.109 Min: 16 00:34:23.109 Number of Namespaces: 1024 00:34:23.109 Compare Command: Not Supported 00:34:23.109 Write Uncorrectable Command: Not Supported 00:34:23.109 Dataset Management Command: Supported 00:34:23.109 Write Zeroes Command: Supported 00:34:23.109 Set Features Save Field: Not Supported 00:34:23.109 Reservations: Not Supported 00:34:23.109 Timestamp: Not Supported 00:34:23.109 Copy: Not Supported 00:34:23.109 Volatile Write Cache: Present 00:34:23.109 Atomic Write Unit (Normal): 1 00:34:23.109 Atomic Write Unit (PFail): 1 00:34:23.109 Atomic Compare & Write Unit: 1 00:34:23.109 Fused Compare & Write: Not Supported 00:34:23.109 Scatter-Gather List 00:34:23.109 SGL Command Set: Supported 00:34:23.109 SGL Keyed: Not Supported 00:34:23.109 SGL Bit Bucket Descriptor: Not Supported 00:34:23.109 SGL Metadata Pointer: Not Supported 00:34:23.109 Oversized SGL: Not Supported 00:34:23.109 SGL Metadata Address: Not Supported 00:34:23.109 SGL Offset: Supported 00:34:23.109 Transport SGL Data Block: Not Supported 00:34:23.109 Replay Protected Memory Block: Not Supported 00:34:23.109 00:34:23.109 Firmware Slot Information 00:34:23.109 ========================= 00:34:23.109 Active slot: 0 00:34:23.109 00:34:23.109 Asymmetric Namespace Access 00:34:23.109 =========================== 00:34:23.109 Change Count : 0 00:34:23.109 Number of ANA Group Descriptors : 1 00:34:23.109 ANA Group Descriptor : 0 00:34:23.109 ANA Group ID : 1 00:34:23.109 Number of NSID Values : 1 00:34:23.109 Change Count : 0 00:34:23.109 ANA State : 1 00:34:23.109 Namespace Identifier : 1 00:34:23.109 00:34:23.109 Commands Supported and Effects 00:34:23.109 ============================== 00:34:23.109 Admin Commands 00:34:23.109 -------------- 00:34:23.109 Get Log Page (02h): Supported 00:34:23.109 Identify (06h): Supported 00:34:23.109 Abort (08h): Supported 00:34:23.109 Set Features (09h): Supported 00:34:23.109 Get Features (0Ah): Supported 00:34:23.109 Asynchronous Event Request (0Ch): Supported 00:34:23.109 Keep Alive (18h): Supported 00:34:23.109 I/O Commands 00:34:23.109 ------------ 00:34:23.109 Flush (00h): Supported 00:34:23.109 Write (01h): Supported LBA-Change 00:34:23.109 Read (02h): Supported 00:34:23.109 Write Zeroes (08h): Supported LBA-Change 00:34:23.109 Dataset Management (09h): Supported 00:34:23.109 00:34:23.109 Error Log 00:34:23.109 ========= 00:34:23.109 Entry: 0 00:34:23.109 Error Count: 0x3 00:34:23.109 Submission Queue Id: 0x0 00:34:23.109 Command Id: 0x5 00:34:23.109 Phase Bit: 0 00:34:23.109 Status Code: 0x2 00:34:23.109 Status Code Type: 0x0 00:34:23.109 Do Not Retry: 1 00:34:23.109 
Error Location: 0x28 00:34:23.109 LBA: 0x0 00:34:23.109 Namespace: 0x0 00:34:23.109 Vendor Log Page: 0x0 00:34:23.109 ----------- 00:34:23.109 Entry: 1 00:34:23.109 Error Count: 0x2 00:34:23.109 Submission Queue Id: 0x0 00:34:23.109 Command Id: 0x5 00:34:23.109 Phase Bit: 0 00:34:23.109 Status Code: 0x2 00:34:23.109 Status Code Type: 0x0 00:34:23.109 Do Not Retry: 1 00:34:23.109 Error Location: 0x28 00:34:23.109 LBA: 0x0 00:34:23.109 Namespace: 0x0 00:34:23.109 Vendor Log Page: 0x0 00:34:23.109 ----------- 00:34:23.109 Entry: 2 00:34:23.109 Error Count: 0x1 00:34:23.109 Submission Queue Id: 0x0 00:34:23.110 Command Id: 0x4 00:34:23.110 Phase Bit: 0 00:34:23.110 Status Code: 0x2 00:34:23.110 Status Code Type: 0x0 00:34:23.110 Do Not Retry: 1 00:34:23.110 Error Location: 0x28 00:34:23.110 LBA: 0x0 00:34:23.110 Namespace: 0x0 00:34:23.110 Vendor Log Page: 0x0 00:34:23.110 00:34:23.110 Number of Queues 00:34:23.110 ================ 00:34:23.110 Number of I/O Submission Queues: 128 00:34:23.110 Number of I/O Completion Queues: 128 00:34:23.110 00:34:23.110 ZNS Specific Controller Data 00:34:23.110 ============================ 00:34:23.110 Zone Append Size Limit: 0 00:34:23.110 00:34:23.110 00:34:23.110 Active Namespaces 00:34:23.110 ================= 00:34:23.110 get_feature(0x05) failed 00:34:23.110 Namespace ID:1 00:34:23.110 Command Set Identifier: NVM (00h) 00:34:23.110 Deallocate: Supported 00:34:23.110 Deallocated/Unwritten Error: Not Supported 00:34:23.110 Deallocated Read Value: Unknown 00:34:23.110 Deallocate in Write Zeroes: Not Supported 00:34:23.110 Deallocated Guard Field: 0xFFFF 00:34:23.110 Flush: Supported 00:34:23.110 Reservation: Not Supported 00:34:23.110 Namespace Sharing Capabilities: Multiple Controllers 00:34:23.110 Size (in LBAs): 1953525168 (931GiB) 00:34:23.110 Capacity (in LBAs): 1953525168 (931GiB) 00:34:23.110 Utilization (in LBAs): 1953525168 (931GiB) 00:34:23.110 UUID: ed12ec20-cf43-4fa5-9b61-8d36bb53a702 00:34:23.110 Thin Provisioning: Not Supported 00:34:23.110 Per-NS Atomic Units: Yes 00:34:23.110 Atomic Boundary Size (Normal): 0 00:34:23.110 Atomic Boundary Size (PFail): 0 00:34:23.110 Atomic Boundary Offset: 0 00:34:23.110 NGUID/EUI64 Never Reused: No 00:34:23.110 ANA group ID: 1 00:34:23.110 Namespace Write Protected: No 00:34:23.110 Number of LBA Formats: 1 00:34:23.110 Current LBA Format: LBA Format #00 00:34:23.110 LBA Format #00: Data Size: 512 Metadata Size: 0 00:34:23.110 00:34:23.110 09:34:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:34:23.110 09:34:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:23.110 09:34:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:34:23.110 09:34:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:23.110 09:34:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:34:23.110 09:34:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:23.110 09:34:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:23.368 rmmod nvme_tcp 00:34:23.368 rmmod nvme_fabrics 00:34:23.368 09:34:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:23.368 09:34:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:34:23.368 09:34:28 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:34:23.368 09:34:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:34:23.368 09:34:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:23.368 09:34:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:23.368 09:34:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:23.368 09:34:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:34:23.368 09:34:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:34:23.368 09:34:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:23.368 09:34:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:34:23.368 09:34:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:23.368 09:34:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:23.368 09:34:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:23.369 09:34:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:23.369 09:34:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:25.292 09:34:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:25.292 09:34:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:34:25.292 09:34:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:34:25.292 09:34:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:34:25.292 09:34:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:25.292 09:34:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:25.292 09:34:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:34:25.292 09:34:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:25.292 09:34:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:34:25.292 09:34:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:34:25.292 09:34:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:26.668 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:34:26.668 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:34:26.668 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:34:26.668 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:34:26.668 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:34:26.668 0000:00:04.2 
(8086 0e22): ioatdma -> vfio-pci 00:34:26.669 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:34:26.669 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:34:26.669 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:34:26.669 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:34:26.669 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:34:26.669 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:34:26.669 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:34:26.669 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:34:26.669 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:34:26.669 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:34:27.605 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:34:27.864 00:34:27.864 real 0m9.668s 00:34:27.864 user 0m2.095s 00:34:27.864 sys 0m3.588s 00:34:27.864 09:34:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:27.865 09:34:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:34:27.865 ************************************ 00:34:27.865 END TEST nvmf_identify_kernel_target 00:34:27.865 ************************************ 00:34:27.865 09:34:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:34:27.865 09:34:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:27.865 09:34:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:27.865 09:34:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.865 ************************************ 00:34:27.865 START TEST nvmf_auth_host 00:34:27.865 ************************************ 00:34:27.865 09:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:34:27.865 * Looking for test storage... 
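The configure_kernel_target/clean_kernel_target steps above drive the in-kernel nvmet target purely through configfs: create a subsystem, back namespace 1 with the spare /dev/nvme0n1, open TCP port 4420 on 10.0.0.1, link the subsystem to the port, then tear everything down in reverse order before setup.sh hands the NVMe device back to vfio-pci. A rough standalone sketch follows; the log only shows the values being echoed, so the destination attribute names here are taken from the stock Linux nvmet configfs layout and may not match the helper's exact writes.

#!/usr/bin/env bash
# Sketch of the configfs sequence behind configure_kernel_target /
# clean_kernel_target as exercised above. Attribute names assume the
# standard Linux nvmet configfs layout; /dev/nvme0n1 stands in for
# whatever unused local namespace the test picked.
set -euo pipefail

NQN=nqn.2016-06.io.spdk:testnqn
SUBSYS=/sys/kernel/config/nvmet/subsystems/$NQN
NS=$SUBSYS/namespaces/1
PORT=/sys/kernel/config/nvmet/ports/1
BLOCK=/dev/nvme0n1
TADDR=10.0.0.1

modprobe nvmet
modprobe nvmet-tcp   # the kernel can also auto-load this when the port is bound

# Subsystem, one namespace backed by the local block device, one TCP port.
mkdir "$SUBSYS"
mkdir "$NS"
mkdir "$PORT"
echo "SPDK-$NQN" > "$SUBSYS/attr_model"
echo 1           > "$SUBSYS/attr_allow_any_host"
echo "$BLOCK"    > "$NS/device_path"
echo 1           > "$NS/enable"
echo "$TADDR"    > "$PORT/addr_traddr"
echo tcp         > "$PORT/addr_trtype"
echo 4420        > "$PORT/addr_trsvcid"
echo ipv4        > "$PORT/addr_adrfam"
ln -s "$SUBSYS" "$PORT/subsystems/"   # expose the subsystem on the port

# ... run nvme discover / spdk_nvme_identify against $TADDR:4420 here ...

# Teardown, mirroring clean_kernel_target: disable the namespace, unlink
# the subsystem from the port, remove the configfs directories, unload.
echo 0 > "$NS/enable"
rm -f "$PORT/subsystems/$NQN"
rmdir "$NS" "$PORT" "$SUBSYS"
modprobe -r nvmet_tcp nvmet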
00:34:27.865 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:27.865 09:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:27.865 09:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:34:27.865 09:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:27.865 09:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:27.865 09:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:27.865 09:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:27.865 09:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:27.865 09:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:34:27.865 09:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:34:27.865 09:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:34:27.865 09:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:34:27.865 09:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:34:27.865 09:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:34:27.865 09:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:34:27.865 09:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:27.865 09:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:34:27.865 09:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:34:27.865 09:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:27.865 09:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:27.865 09:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:34:27.865 09:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:34:27.865 09:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:27.865 09:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:34:27.865 09:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:34:27.865 09:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:34:27.865 09:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:34:27.865 09:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:27.865 09:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:34:27.865 09:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:34:27.865 09:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:27.865 09:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:27.865 09:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:34:27.865 09:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:27.865 09:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:27.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:27.865 --rc genhtml_branch_coverage=1 00:34:27.865 --rc genhtml_function_coverage=1 00:34:27.865 --rc genhtml_legend=1 00:34:27.865 --rc geninfo_all_blocks=1 00:34:27.865 --rc geninfo_unexecuted_blocks=1 00:34:27.865 00:34:27.865 ' 00:34:27.865 09:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:27.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:27.865 --rc genhtml_branch_coverage=1 00:34:27.865 --rc genhtml_function_coverage=1 00:34:27.865 --rc genhtml_legend=1 00:34:27.865 --rc geninfo_all_blocks=1 00:34:27.865 --rc geninfo_unexecuted_blocks=1 00:34:27.865 00:34:27.865 ' 00:34:27.865 09:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:27.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:27.865 --rc genhtml_branch_coverage=1 00:34:27.865 --rc genhtml_function_coverage=1 00:34:27.865 --rc genhtml_legend=1 00:34:27.865 --rc geninfo_all_blocks=1 00:34:27.865 --rc geninfo_unexecuted_blocks=1 00:34:27.865 00:34:27.865 ' 00:34:27.865 09:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:27.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:27.865 --rc genhtml_branch_coverage=1 00:34:27.865 --rc genhtml_function_coverage=1 00:34:27.865 --rc genhtml_legend=1 00:34:27.865 --rc geninfo_all_blocks=1 00:34:27.865 --rc geninfo_unexecuted_blocks=1 00:34:27.865 00:34:27.865 ' 00:34:27.865 09:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:27.865 09:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:34:27.865 09:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:27.865 09:34:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:27.865 09:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:27.865 09:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:27.865 09:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:27.865 09:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:27.865 09:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:27.865 09:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:27.865 09:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:27.865 09:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:27.865 09:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:27.865 09:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:27.865 09:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:27.865 09:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:27.865 09:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:27.865 09:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:27.865 09:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:27.865 09:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:34:27.865 09:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:27.865 09:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:27.865 09:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:27.865 09:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:27.865 09:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:27.865 09:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:27.865 09:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:34:27.865 09:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:27.865 09:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:34:27.865 09:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:27.865 09:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:27.865 09:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:27.865 09:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:27.866 09:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:27.866 09:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:27.866 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:27.866 09:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:27.866 09:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:27.866 09:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:27.866 09:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:34:27.866 09:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:34:27.866 09:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:34:27.866 09:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:34:27.866 09:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:27.866 09:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:27.866 09:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:34:27.866 09:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:34:27.866 09:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:34:27.866 09:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:27.866 09:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:27.866 09:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:27.866 09:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:27.866 09:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:27.866 09:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:27.866 09:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:27.866 09:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:27.866 09:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:27.866 09:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:27.866 09:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:34:27.866 09:34:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.399 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:30.399 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:34:30.399 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:30.399 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:30.399 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:30.399 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:30.399 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:30.399 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:34:30.399 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:30.399 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:34:30.399 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:34:30.399 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:34:30.399 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:34:30.399 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:34:30.399 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:34:30.399 09:34:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:30.399 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:30.399 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:30.399 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:30.399 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:30.399 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:30.399 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:30.399 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:30.399 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:30.399 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:30.400 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:30.400 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:30.400 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:30.400 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:30.400 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:30.400 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:30.400 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:30.400 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:30.400 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:30.400 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:30.400 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:30.400 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:30.400 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:30.400 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:30.400 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:30.400 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:30.400 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:30.400 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:30.400 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:30.400 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:30.400 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:30.400 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:30.400 
09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:30.400 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:30.400 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:30.400 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:30.400 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:30.400 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:30.400 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:30.400 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:30.400 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:30.400 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:30.400 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:30.400 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:30.400 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:30.400 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:30.400 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:30.400 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:30.400 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:30.400 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:30.400 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:30.400 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:30.400 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:30.400 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:30.400 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:30.400 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:30.400 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:30.400 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:30.400 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:34:30.400 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:30.400 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:30.400 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:30.400 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:30.400 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:30.400 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:30.400 09:34:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:30.400 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:30.400 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:30.400 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:30.400 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:30.400 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:30.400 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:30.400 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:30.400 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:30.400 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:30.400 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:30.400 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:30.400 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:30.400 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:30.400 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:30.400 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:30.400 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:30.400 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:30.400 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:30.400 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:30.400 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:30.400 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.268 ms 00:34:30.400 00:34:30.400 --- 10.0.0.2 ping statistics --- 00:34:30.400 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:30.400 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:34:30.400 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:30.400 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:30.400 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:34:30.400 00:34:30.400 --- 10.0.0.1 ping statistics --- 00:34:30.400 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:30.400 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:34:30.400 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:30.400 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:34:30.400 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:30.400 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:30.400 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:30.400 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:30.400 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:30.400 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:30.400 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:30.400 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:34:30.400 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:30.400 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:30.400 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.400 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=3118516 00:34:30.400 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:34:30.400 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 3118516 00:34:30.400 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 3118516 ']' 00:34:30.400 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:30.400 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:30.400 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
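Before nvmf_tgt is launched here, the traced nvmf_tcp_init turned the two e810 ports (0x159b) discovered above into a loopback NVMe/TCP topology: cvl_0_0 became the target side (10.0.0.2) inside a fresh network namespace, cvl_0_1 stayed in the root namespace as the initiator side (10.0.0.1), and an iptables rule opened port 4420. A condensed sketch of those commands; the interface names and 10.0.0.0/24 addresses are the ones from this run:

  ip netns add cvl_0_0_ns_spdk                        # target namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target-side port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                  # initiator -> target sanity check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator sanity check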
00:34:30.400 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:30.400 09:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.403 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:31.403 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:34:31.403 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:31.403 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:31.403 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.403 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:31.403 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:34:31.403 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:34:31.403 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:31.403 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:31.403 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:31.403 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:34:31.403 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:34:31.403 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:31.403 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=167410a2b93767b8449d3fbcc0960083 00:34:31.403 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:34:31.403 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.iA2 00:34:31.403 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 167410a2b93767b8449d3fbcc0960083 0 00:34:31.403 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 167410a2b93767b8449d3fbcc0960083 0 00:34:31.403 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:31.403 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:31.403 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=167410a2b93767b8449d3fbcc0960083 00:34:31.403 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:34:31.403 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:31.403 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.iA2 00:34:31.403 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.iA2 00:34:31.403 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.iA2 00:34:31.403 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:34:31.403 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:31.403 09:34:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:31.403 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:31.403 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:34:31.403 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:34:31.403 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:34:31.403 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=fc36be06dfabed0e7870f339907b8af2163681c867768bb6521d320f45d2d704 00:34:31.404 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:34:31.404 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Flh 00:34:31.404 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key fc36be06dfabed0e7870f339907b8af2163681c867768bb6521d320f45d2d704 3 00:34:31.404 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 fc36be06dfabed0e7870f339907b8af2163681c867768bb6521d320f45d2d704 3 00:34:31.404 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:31.404 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:31.404 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=fc36be06dfabed0e7870f339907b8af2163681c867768bb6521d320f45d2d704 00:34:31.404 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:34:31.404 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:31.404 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Flh 00:34:31.404 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Flh 00:34:31.404 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.Flh 00:34:31.404 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:34:31.404 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:31.404 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:31.404 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:31.404 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:34:31.404 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:34:31.404 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:31.404 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e853fc1709e22e98cb22d1a681aa0239b6fcd13f375561e5 00:34:31.404 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:34:31.404 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.PKj 00:34:31.404 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e853fc1709e22e98cb22d1a681aa0239b6fcd13f375561e5 0 00:34:31.404 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e853fc1709e22e98cb22d1a681aa0239b6fcd13f375561e5 0 
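The gen_dhchap_key calls traced in this block build the DH-HMAC-CHAP secrets used later in the test: xxd reads a random hex string from /dev/urandom and the embedded python step wraps it into the DHHC-1 representation, which, judging from the values in this log, is base64 of the ASCII hex string followed by its CRC-32 in little-endian order. A small sketch that produces a secret of the same shape; the 24-byte read matches the 48-character keys generated here, and the python3/zlib/struct one-liner is only one way to do the wrapping:

  key=$(xxd -p -c0 -l 24 /dev/urandom)   # 48 hex characters, as in the traces above
  # ":00:" marks a secret with no associated hash transform (the "null" digest case)
  python3 -c 'import sys,base64,struct,zlib; k=sys.argv[1].encode(); print("DHHC-1:00:"+base64.b64encode(k+struct.pack("<I",zlib.crc32(k)&0xffffffff)).decode()+":")' "$key"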
00:34:31.404 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:31.404 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:31.404 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=e853fc1709e22e98cb22d1a681aa0239b6fcd13f375561e5 00:34:31.404 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:34:31.404 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:31.404 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.PKj 00:34:31.404 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.PKj 00:34:31.404 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.PKj 00:34:31.404 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:34:31.404 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:31.404 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:31.404 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:31.404 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:34:31.404 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:34:31.404 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:31.404 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=43f042ab7d278711ac2214e19086db8b15223c165fa37153 00:34:31.404 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:34:31.404 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.U8v 00:34:31.404 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 43f042ab7d278711ac2214e19086db8b15223c165fa37153 2 00:34:31.404 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 43f042ab7d278711ac2214e19086db8b15223c165fa37153 2 00:34:31.404 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:31.404 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:31.404 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=43f042ab7d278711ac2214e19086db8b15223c165fa37153 00:34:31.404 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:34:31.404 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:31.404 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.U8v 00:34:31.404 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.U8v 00:34:31.404 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.U8v 00:34:31.404 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:34:31.404 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:31.404 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:31.404 09:34:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:31.404 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:34:31.404 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:34:31.404 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:31.404 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=aa87e28500dcc48a8f490b413ebb284c 00:34:31.404 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:34:31.404 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.4Rt 00:34:31.404 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key aa87e28500dcc48a8f490b413ebb284c 1 00:34:31.404 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 aa87e28500dcc48a8f490b413ebb284c 1 00:34:31.404 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:31.404 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:31.404 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=aa87e28500dcc48a8f490b413ebb284c 00:34:31.404 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:34:31.404 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:31.404 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.4Rt 00:34:31.404 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.4Rt 00:34:31.404 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.4Rt 00:34:31.404 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:34:31.404 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:31.404 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:31.404 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:31.404 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:34:31.404 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:34:31.404 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:31.404 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=6533ca185f8aebc362fa1055ee22ba3e 00:34:31.404 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:34:31.404 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.LAc 00:34:31.404 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 6533ca185f8aebc362fa1055ee22ba3e 1 00:34:31.404 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 6533ca185f8aebc362fa1055ee22ba3e 1 00:34:31.404 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:31.404 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:31.404 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=6533ca185f8aebc362fa1055ee22ba3e 00:34:31.404 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:34:31.404 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:31.404 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.LAc 00:34:31.404 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.LAc 00:34:31.404 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.LAc 00:34:31.404 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:34:31.404 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:31.404 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:31.404 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:31.404 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:34:31.405 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:34:31.405 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:31.405 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=58cfa1b738f14db0ec7fcce235b325a561ec214c0248d0a2 00:34:31.405 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:34:31.405 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.ZSJ 00:34:31.405 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 58cfa1b738f14db0ec7fcce235b325a561ec214c0248d0a2 2 00:34:31.405 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 58cfa1b738f14db0ec7fcce235b325a561ec214c0248d0a2 2 00:34:31.405 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:31.405 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:31.405 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=58cfa1b738f14db0ec7fcce235b325a561ec214c0248d0a2 00:34:31.405 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:34:31.405 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:31.663 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.ZSJ 00:34:31.663 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.ZSJ 00:34:31.663 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.ZSJ 00:34:31.663 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:34:31.663 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:31.663 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:31.663 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:31.663 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:34:31.663 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:34:31.663 09:34:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:31.663 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=3f1043757dcdca4d27d0e9fedb0ef29a 00:34:31.663 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:34:31.663 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.I4B 00:34:31.663 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 3f1043757dcdca4d27d0e9fedb0ef29a 0 00:34:31.663 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 3f1043757dcdca4d27d0e9fedb0ef29a 0 00:34:31.663 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:31.663 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:31.663 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=3f1043757dcdca4d27d0e9fedb0ef29a 00:34:31.663 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:34:31.663 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:31.663 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.I4B 00:34:31.663 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.I4B 00:34:31.663 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.I4B 00:34:31.663 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:34:31.663 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:31.663 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:31.663 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:31.663 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:34:31.663 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:34:31.663 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:34:31.663 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=fe4f78968294643e61a70b92b1315f41ef1b2b1cb661d33b5fb8d6feadcb32c3 00:34:31.663 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:34:31.663 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.1x8 00:34:31.663 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key fe4f78968294643e61a70b92b1315f41ef1b2b1cb661d33b5fb8d6feadcb32c3 3 00:34:31.663 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 fe4f78968294643e61a70b92b1315f41ef1b2b1cb661d33b5fb8d6feadcb32c3 3 00:34:31.663 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:31.663 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:31.663 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=fe4f78968294643e61a70b92b1315f41ef1b2b1cb661d33b5fb8d6feadcb32c3 00:34:31.663 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:34:31.663 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:34:31.663 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.1x8 00:34:31.663 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.1x8 00:34:31.663 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.1x8 00:34:31.663 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:34:31.663 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3118516 00:34:31.663 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 3118516 ']' 00:34:31.663 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:31.663 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:31.663 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:31.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:31.664 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:31.664 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.922 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:31.922 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:34:31.922 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:31.922 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.iA2 00:34:31.922 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.922 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.922 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.922 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.Flh ]] 00:34:31.922 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Flh 00:34:31.922 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.922 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.922 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.922 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:31.922 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.PKj 00:34:31.922 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.922 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.922 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.922 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.U8v ]] 00:34:31.922 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.U8v 00:34:31.922 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.922 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.922 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.922 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:31.922 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.4Rt 00:34:31.922 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.922 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.922 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.922 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.LAc ]] 00:34:31.922 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.LAc 00:34:31.922 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.922 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.922 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.922 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:31.922 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.ZSJ 00:34:31.922 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.922 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.922 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.922 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.I4B ]] 00:34:31.922 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.I4B 00:34:31.922 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.922 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.922 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.922 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:31.922 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.1x8 00:34:31.922 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.922 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.922 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.922 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:34:31.922 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:34:31.922 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:34:31.922 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:31.922 09:34:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:31.922 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:31.922 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:31.922 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:31.923 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:31.923 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:31.923 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:31.923 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:31.923 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:31.923 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:34:31.923 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:34:31.923 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:34:31.923 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:31.923 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:31.923 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:34:31.923 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:34:31.923 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:34:31.923 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:34:31.923 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:34:31.923 09:34:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:33.297 Waiting for block devices as requested 00:34:33.297 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:34:33.297 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:34:33.298 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:34:33.298 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:34:33.555 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:34:33.555 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:34:33.555 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:34:33.556 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:34:33.814 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:34:33.814 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:34:33.814 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:34:33.814 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:34:34.072 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:34:34.072 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:34:34.072 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:34:34.072 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:34:34.330 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:34:34.588 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:34:34.588 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:34:34.588 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:34:34.588 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:34:34.588 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:34:34.588 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:34:34.588 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:34:34.588 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:34:34.588 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:34:34.588 No valid GPT data, bailing 00:34:34.588 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:34:34.588 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:34:34.588 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:34:34.588 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:34:34.588 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:34:34.588 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:34.588 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:34.588 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:34:34.588 09:34:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:34:34.588 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:34:34.588 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:34:34.588 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:34:34.588 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:34:34.588 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:34:34.588 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:34:34.588 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:34:34.588 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:34:34.588 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:34:34.846 00:34:34.846 Discovery Log Number of Records 2, Generation counter 2 00:34:34.846 =====Discovery Log Entry 0====== 00:34:34.846 trtype: tcp 00:34:34.846 adrfam: ipv4 00:34:34.846 subtype: current discovery subsystem 00:34:34.846 treq: not specified, sq flow control disable supported 00:34:34.846 portid: 1 00:34:34.846 trsvcid: 4420 00:34:34.846 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:34:34.846 traddr: 10.0.0.1 00:34:34.846 eflags: none 00:34:34.846 sectype: none 00:34:34.846 =====Discovery Log Entry 1====== 00:34:34.846 trtype: tcp 00:34:34.846 adrfam: ipv4 00:34:34.846 subtype: nvme subsystem 00:34:34.846 treq: not specified, sq flow control disable supported 00:34:34.846 portid: 1 00:34:34.846 trsvcid: 4420 00:34:34.846 subnqn: nqn.2024-02.io.spdk:cnode0 00:34:34.846 traddr: 10.0.0.1 00:34:34.846 eflags: none 00:34:34.846 sectype: none 00:34:34.846 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:34.846 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:34:34.846 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:34:34.846 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:34.846 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:34.846 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:34.846 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:34.846 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:34.846 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTg1M2ZjMTcwOWUyMmU5OGNiMjJkMWE2ODFhYTAyMzliNmZjZDEzZjM3NTU2MWU1YdCrRw==: 00:34:34.846 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDNmMDQyYWI3ZDI3ODcxMWFjMjIxNGUxOTA4NmRiOGIxNTIyM2MxNjVmYTM3MTUzWUs3Gw==: 00:34:34.846 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:34.846 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host 
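[annotation] The trace above is nvmf/common.sh::configure_kernel_target followed by the host allow-listing in host/auth.sh@36-38: setup.sh reset hands the local NVMe device back to the kernel driver, the first unused non-zoned namespace (/dev/nvme0n1 here) becomes the backing device, the Linux soft target is assembled purely through nvmet configfs, and `nvme discover` confirms the two log entries shown. Bash xtrace does not record redirection targets, so the configfs attribute names in the sketch below are assumptions based on the standard nvmet layout; the NQNs, device and address are the values from this run.

  # --- sketch only: kernel nvmet target setup as traced above (attribute names assumed) ---
  nvmet=/sys/kernel/config/nvmet
  subnqn=nqn.2024-02.io.spdk:cnode0
  hostnqn=nqn.2024-02.io.spdk:host0
  subsys=$nvmet/subsystems/$subnqn
  port=$nvmet/ports/1

  modprobe nvmet                                   # tcp transport module is pulled in when the port is enabled
  mkdir -p "$subsys/namespaces/1" "$port"

  echo "SPDK-$subnqn" > "$subsys/attr_model"       # model string echoed at common.sh@693
  echo 1              > "$subsys/attr_allow_any_host"
  echo /dev/nvme0n1   > "$subsys/namespaces/1/device_path"
  echo 1              > "$subsys/namespaces/1/enable"

  echo 10.0.0.1 > "$port/addr_traddr"
  echo tcp      > "$port/addr_trtype"
  echo 4420     > "$port/addr_trsvcid"
  echo ipv4     > "$port/addr_adrfam"
  ln -s "$subsys" "$port/subsystems/"              # exposes cnode0 on the port

  nvme discover -t tcp -a 10.0.0.1 -s 4420         # expect the discovery + cnode0 entries shown above

  # host/auth.sh then restricts the subsystem to a single host NQN:
  mkdir "$nvmet/hosts/$hostnqn"
  echo 0 > "$subsys/attr_allow_any_host"
  ln -s "$nvmet/hosts/$hostnqn" "$subsys/allowed_hosts/"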
-- host/auth.sh@49 -- # echo ffdhe2048 00:34:34.846 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTg1M2ZjMTcwOWUyMmU5OGNiMjJkMWE2ODFhYTAyMzliNmZjZDEzZjM3NTU2MWU1YdCrRw==: 00:34:34.846 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDNmMDQyYWI3ZDI3ODcxMWFjMjIxNGUxOTA4NmRiOGIxNTIyM2MxNjVmYTM3MTUzWUs3Gw==: ]] 00:34:34.846 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDNmMDQyYWI3ZDI3ODcxMWFjMjIxNGUxOTA4NmRiOGIxNTIyM2MxNjVmYTM3MTUzWUs3Gw==: 00:34:34.846 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:34:34.846 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:34:34.846 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:34:34.846 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:34.846 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:34:34.846 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:34.846 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:34:34.846 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:34.846 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:34.846 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:34.846 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:34.846 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.846 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.846 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.846 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:34.846 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:34.846 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:34.846 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:34.846 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:34.846 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:34.846 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:34.846 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:34.846 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:34.846 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:34.846 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:34.846 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
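[annotation] nvmet_auth_set_key (host/auth.sh@42-51) pushes the digest, DH group and DH-HMAC-CHAP secrets into the kernel host entry created above; the controller key is written only when a ckey exists for that keyid, which is what the `[[ -z ... ]]` check in the trace decides. The redirection targets are again hidden by xtrace, so the four dhchap_* attribute names below are assumptions based on the kernel's nvmet host configfs entries.

  # --- sketch of nvmet_auth_set_key, assuming the standard nvmet host attributes ---
  nvmet_auth_set_key() {
      local digest=$1 dhgroup=$2 keyid=$3
      local key=${keys[$keyid]} ckey=${ckeys[$keyid]}      # arrays defined earlier in host/auth.sh
      local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

      echo "hmac($digest)" > "$host/dhchap_hash"           # e.g. hmac(sha256)
      echo "$dhgroup"      > "$host/dhchap_dhgroup"        # e.g. ffdhe2048
      echo "$key"          > "$host/dhchap_key"            # host secret, DHHC-1 format
      [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"   # optional: bidirectional auth
  }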
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:34.846 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.846 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.105 nvme0n1 00:34:35.105 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.105 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:35.105 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.105 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.105 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:35.105 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.105 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:35.105 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:35.105 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.105 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.105 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.105 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:35.105 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:35.105 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:35.105 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:34:35.105 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:35.105 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:35.105 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:35.105 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:35.105 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTY3NDEwYTJiOTM3NjdiODQ0OWQzZmJjYzA5NjAwODOCVOl1: 00:34:35.105 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmMzNmJlMDZkZmFiZWQwZTc4NzBmMzM5OTA3YjhhZjIxNjM2ODFjODY3NzY4YmI2NTIxZDMyMGY0NWQyZDcwNClOKQE=: 00:34:35.106 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:35.106 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:35.106 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTY3NDEwYTJiOTM3NjdiODQ0OWQzZmJjYzA5NjAwODOCVOl1: 00:34:35.106 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmMzNmJlMDZkZmFiZWQwZTc4NzBmMzM5OTA3YjhhZjIxNjM2ODFjODY3NzY4YmI2NTIxZDMyMGY0NWQyZDcwNClOKQE=: ]] 00:34:35.106 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmMzNmJlMDZkZmFiZWQwZTc4NzBmMzM5OTA3YjhhZjIxNjM2ODFjODY3NzY4YmI2NTIxZDMyMGY0NWQyZDcwNClOKQE=: 00:34:35.106 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
00:34:35.106 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:35.106 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:35.106 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:35.106 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:35.106 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:35.106 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:35.106 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.106 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.106 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.106 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:35.106 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:35.106 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:35.106 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:35.106 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:35.106 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:35.106 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:35.106 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:35.106 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:35.106 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:35.106 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:35.106 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:35.106 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.106 09:34:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.106 nvme0n1 00:34:35.106 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.106 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:35.106 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:35.106 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.106 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.365 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.365 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:35.365 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:35.365 09:34:40 
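[annotation] Each connect_authenticate pass (host/auth.sh@55-65) is the same three RPCs traced above: constrain the initiator to one digest/DH-group combination, attach with the matching keyring key names, and confirm a controller actually came up before detaching. A condensed sketch using the rpc_cmd wrapper exactly as the log does; key$keyid/ckey$keyid are the keyring names the test registered earlier in the job.

  # --- sketch of connect_authenticate as traced above ---
  connect_authenticate() {
      local digest=$1 dhgroup=$2 keyid=$3
      local ctrlr

      rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
      rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
          --dhchap-key "key$keyid" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"}

      ctrlr=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
      [[ $ctrlr == nvme0 ]]                        # DH-HMAC-CHAP authentication succeeded
      rpc_cmd bdev_nvme_detach_controller nvme0    # tear down before the next combination
  }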
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.365 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.365 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.365 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:35.365 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:35.365 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:35.365 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:35.365 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:35.365 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:35.365 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTg1M2ZjMTcwOWUyMmU5OGNiMjJkMWE2ODFhYTAyMzliNmZjZDEzZjM3NTU2MWU1YdCrRw==: 00:34:35.365 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDNmMDQyYWI3ZDI3ODcxMWFjMjIxNGUxOTA4NmRiOGIxNTIyM2MxNjVmYTM3MTUzWUs3Gw==: 00:34:35.365 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:35.365 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:35.365 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTg1M2ZjMTcwOWUyMmU5OGNiMjJkMWE2ODFhYTAyMzliNmZjZDEzZjM3NTU2MWU1YdCrRw==: 00:34:35.365 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDNmMDQyYWI3ZDI3ODcxMWFjMjIxNGUxOTA4NmRiOGIxNTIyM2MxNjVmYTM3MTUzWUs3Gw==: ]] 00:34:35.365 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDNmMDQyYWI3ZDI3ODcxMWFjMjIxNGUxOTA4NmRiOGIxNTIyM2MxNjVmYTM3MTUzWUs3Gw==: 00:34:35.365 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:34:35.365 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:35.365 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:35.365 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:35.365 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:35.365 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:35.365 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:35.365 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.365 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.365 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.365 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:35.365 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:35.365 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:35.365 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:35.365 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:35.365 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:35.365 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:35.365 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:35.365 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:35.365 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:35.365 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:35.365 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:35.365 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.365 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.365 nvme0n1 00:34:35.365 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.365 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:35.365 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.365 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.365 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:35.365 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.624 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:35.624 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:35.624 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.624 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.624 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.625 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:35.625 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:34:35.625 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:35.625 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:35.625 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:35.625 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:35.625 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWE4N2UyODUwMGRjYzQ4YThmNDkwYjQxM2ViYjI4NGNi8TS4: 00:34:35.625 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjUzM2NhMTg1ZjhhZWJjMzYyZmExMDU1ZWUyMmJhM2UFu8bR: 00:34:35.625 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:35.625 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:35.625 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:YWE4N2UyODUwMGRjYzQ4YThmNDkwYjQxM2ViYjI4NGNi8TS4: 00:34:35.625 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjUzM2NhMTg1ZjhhZWJjMzYyZmExMDU1ZWUyMmJhM2UFu8bR: ]] 00:34:35.625 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjUzM2NhMTg1ZjhhZWJjMzYyZmExMDU1ZWUyMmJhM2UFu8bR: 00:34:35.625 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:34:35.625 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:35.625 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:35.625 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:35.625 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:35.625 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:35.625 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:35.625 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.625 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.625 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.625 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:35.625 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:35.625 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:35.625 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:35.625 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:35.625 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:35.625 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:35.625 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:35.625 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:35.625 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:35.625 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:35.625 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:35.625 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.625 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.625 nvme0n1 00:34:35.625 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.625 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:35.625 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:35.625 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:34:35.625 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.625 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.625 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:35.625 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:35.625 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.625 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.625 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.625 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:35.625 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:34:35.625 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:35.625 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:35.625 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:35.625 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:35.625 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NThjZmExYjczOGYxNGRiMGVjN2ZjY2UyMzViMzI1YTU2MWVjMjE0YzAyNDhkMGEymDelkg==: 00:34:35.625 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2YxMDQzNzU3ZGNkY2E0ZDI3ZDBlOWZlZGIwZWYyOWF+2YFm: 00:34:35.625 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:35.625 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:35.625 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NThjZmExYjczOGYxNGRiMGVjN2ZjY2UyMzViMzI1YTU2MWVjMjE0YzAyNDhkMGEymDelkg==: 00:34:35.625 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2YxMDQzNzU3ZGNkY2E0ZDI3ZDBlOWZlZGIwZWYyOWF+2YFm: ]] 00:34:35.625 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2YxMDQzNzU3ZGNkY2E0ZDI3ZDBlOWZlZGIwZWYyOWF+2YFm: 00:34:35.625 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:34:35.625 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:35.625 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:35.625 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:35.884 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:35.884 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:35.884 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:35.884 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.884 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.884 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.884 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:34:35.884 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:35.884 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:35.884 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:35.884 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:35.884 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:35.884 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:35.884 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:35.884 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:35.884 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:35.884 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:35.884 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:35.885 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.885 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.885 nvme0n1 00:34:35.885 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.885 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:35.885 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:35.885 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.885 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.885 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.885 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:35.885 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:35.885 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.885 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.885 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.885 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:35.885 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:34:35.885 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:35.885 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:35.885 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:35.885 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:35.885 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZmU0Zjc4OTY4Mjk0NjQzZTYxYTcwYjkyYjEzMTVmNDFlZjFiMmIxY2I2NjFkMzNiNWZiOGQ2ZmVhZGNiMzJjM7z1+uM=: 00:34:35.885 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:35.885 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:35.885 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:35.885 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmU0Zjc4OTY4Mjk0NjQzZTYxYTcwYjkyYjEzMTVmNDFlZjFiMmIxY2I2NjFkMzNiNWZiOGQ2ZmVhZGNiMzJjM7z1+uM=: 00:34:35.885 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:35.885 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:34:35.885 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:35.885 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:35.885 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:35.885 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:35.885 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:35.885 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:35.885 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.885 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.885 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.885 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:35.885 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:35.885 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:35.885 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:35.885 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:35.885 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:35.885 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:35.885 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:35.885 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:35.885 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:35.885 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:35.885 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:35.885 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.885 09:34:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.143 nvme0n1 00:34:36.143 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.144 09:34:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:36.144 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:36.144 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.144 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.144 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.144 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:36.144 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:36.144 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.144 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.144 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.144 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:36.144 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:36.144 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:34:36.144 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:36.144 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:36.144 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:36.144 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:36.144 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTY3NDEwYTJiOTM3NjdiODQ0OWQzZmJjYzA5NjAwODOCVOl1: 00:34:36.144 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmMzNmJlMDZkZmFiZWQwZTc4NzBmMzM5OTA3YjhhZjIxNjM2ODFjODY3NzY4YmI2NTIxZDMyMGY0NWQyZDcwNClOKQE=: 00:34:36.144 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:36.144 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:36.144 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTY3NDEwYTJiOTM3NjdiODQ0OWQzZmJjYzA5NjAwODOCVOl1: 00:34:36.144 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmMzNmJlMDZkZmFiZWQwZTc4NzBmMzM5OTA3YjhhZjIxNjM2ODFjODY3NzY4YmI2NTIxZDMyMGY0NWQyZDcwNClOKQE=: ]] 00:34:36.144 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmMzNmJlMDZkZmFiZWQwZTc4NzBmMzM5OTA3YjhhZjIxNjM2ODFjODY3NzY4YmI2NTIxZDMyMGY0NWQyZDcwNClOKQE=: 00:34:36.144 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:34:36.144 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:36.144 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:36.144 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:36.144 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:36.144 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:36.144 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
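[annotation] From host/auth.sh@100-104 onward the trace is this nested sweep: every digest and FFDHE group printed at host/auth.sh@94 above, across all five key indexes (keyid 4 has no controller key, so it exercises unidirectional authentication). The rest of this test's output repeats iterations of the following shape.

  # --- structure of the sweep producing the remainder of this trace ---
  digests=(sha256 sha384 sha512)                               # as printed at host/auth.sh@94
  dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)

  for digest in "${digests[@]}"; do
      for dhgroup in "${dhgroups[@]}"; do
          for keyid in "${!keys[@]}"; do                       # keyids 0..4
              nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"     # target side (configfs)
              connect_authenticate "$digest" "$dhgroup" "$keyid"   # initiator side (SPDK RPC)
          done
      done
  done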
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:36.144 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.144 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.144 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.144 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:36.144 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:36.144 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:36.144 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:36.144 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:36.144 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:36.144 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:36.144 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:36.144 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:36.144 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:36.144 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:36.144 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:36.144 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.144 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.402 nvme0n1 00:34:36.402 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.402 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:36.402 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:36.403 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.403 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.403 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.403 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:36.403 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:36.403 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.403 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.403 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.403 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:36.403 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:34:36.403 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:34:36.403 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:36.403 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:36.403 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:36.403 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTg1M2ZjMTcwOWUyMmU5OGNiMjJkMWE2ODFhYTAyMzliNmZjZDEzZjM3NTU2MWU1YdCrRw==: 00:34:36.403 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDNmMDQyYWI3ZDI3ODcxMWFjMjIxNGUxOTA4NmRiOGIxNTIyM2MxNjVmYTM3MTUzWUs3Gw==: 00:34:36.403 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:36.403 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:36.403 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTg1M2ZjMTcwOWUyMmU5OGNiMjJkMWE2ODFhYTAyMzliNmZjZDEzZjM3NTU2MWU1YdCrRw==: 00:34:36.403 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDNmMDQyYWI3ZDI3ODcxMWFjMjIxNGUxOTA4NmRiOGIxNTIyM2MxNjVmYTM3MTUzWUs3Gw==: ]] 00:34:36.403 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDNmMDQyYWI3ZDI3ODcxMWFjMjIxNGUxOTA4NmRiOGIxNTIyM2MxNjVmYTM3MTUzWUs3Gw==: 00:34:36.403 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:34:36.403 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:36.403 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:36.403 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:36.403 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:36.403 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:36.403 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:36.403 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.403 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.403 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.403 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:36.403 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:36.403 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:36.403 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:36.403 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:36.403 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:36.403 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:36.403 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:36.403 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:36.403 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:36.403 
09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:36.403 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:36.403 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.403 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.662 nvme0n1 00:34:36.662 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.662 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:36.662 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.662 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:36.662 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.662 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.662 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:36.662 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:36.662 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.662 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.662 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.662 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:36.662 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:34:36.662 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:36.662 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:36.662 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:36.662 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:36.662 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWE4N2UyODUwMGRjYzQ4YThmNDkwYjQxM2ViYjI4NGNi8TS4: 00:34:36.662 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjUzM2NhMTg1ZjhhZWJjMzYyZmExMDU1ZWUyMmJhM2UFu8bR: 00:34:36.662 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:36.662 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:36.662 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWE4N2UyODUwMGRjYzQ4YThmNDkwYjQxM2ViYjI4NGNi8TS4: 00:34:36.662 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjUzM2NhMTg1ZjhhZWJjMzYyZmExMDU1ZWUyMmJhM2UFu8bR: ]] 00:34:36.662 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjUzM2NhMTg1ZjhhZWJjMzYyZmExMDU1ZWUyMmJhM2UFu8bR: 00:34:36.662 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:34:36.662 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:36.662 09:34:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:36.662 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:36.662 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:36.662 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:36.662 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:36.662 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.662 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.662 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.662 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:36.662 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:36.662 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:36.662 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:36.662 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:36.662 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:36.662 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:36.662 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:36.662 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:36.662 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:36.662 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:36.662 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:36.662 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.662 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.920 nvme0n1 00:34:36.920 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.920 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:36.920 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.920 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:36.921 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.921 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.921 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:36.921 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:36.921 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.921 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:36.921 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.921 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:36.921 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:34:36.921 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:36.921 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:36.921 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:36.921 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:36.921 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NThjZmExYjczOGYxNGRiMGVjN2ZjY2UyMzViMzI1YTU2MWVjMjE0YzAyNDhkMGEymDelkg==: 00:34:36.921 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2YxMDQzNzU3ZGNkY2E0ZDI3ZDBlOWZlZGIwZWYyOWF+2YFm: 00:34:36.921 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:36.921 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:36.921 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NThjZmExYjczOGYxNGRiMGVjN2ZjY2UyMzViMzI1YTU2MWVjMjE0YzAyNDhkMGEymDelkg==: 00:34:36.921 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2YxMDQzNzU3ZGNkY2E0ZDI3ZDBlOWZlZGIwZWYyOWF+2YFm: ]] 00:34:36.921 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2YxMDQzNzU3ZGNkY2E0ZDI3ZDBlOWZlZGIwZWYyOWF+2YFm: 00:34:36.921 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:34:36.921 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:36.921 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:36.921 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:36.921 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:36.921 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:36.921 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:36.921 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.921 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.921 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.921 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:36.921 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:36.921 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:36.921 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:36.921 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:36.921 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:36.921 09:34:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:36.921 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:36.921 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:36.921 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:36.921 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:36.921 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:36.921 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.921 09:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.179 nvme0n1 00:34:37.179 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.179 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:37.179 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.179 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.179 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:37.179 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.179 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:37.179 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:37.179 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.179 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.179 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.179 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:37.179 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:34:37.179 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:37.179 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:37.179 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:37.179 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:37.179 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmU0Zjc4OTY4Mjk0NjQzZTYxYTcwYjkyYjEzMTVmNDFlZjFiMmIxY2I2NjFkMzNiNWZiOGQ2ZmVhZGNiMzJjM7z1+uM=: 00:34:37.179 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:37.179 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:37.179 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:37.179 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmU0Zjc4OTY4Mjk0NjQzZTYxYTcwYjkyYjEzMTVmNDFlZjFiMmIxY2I2NjFkMzNiNWZiOGQ2ZmVhZGNiMzJjM7z1+uM=: 00:34:37.179 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:37.179 09:34:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:34:37.179 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:37.179 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:37.179 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:37.179 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:37.179 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:37.179 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:37.180 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.180 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.180 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.180 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:37.180 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:37.180 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:37.180 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:37.180 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:37.180 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:37.180 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:37.180 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:37.180 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:37.180 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:37.180 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:37.180 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:37.180 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.180 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.438 nvme0n1 00:34:37.438 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.438 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:37.438 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.438 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.438 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:37.438 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.438 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:37.438 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:34:37.438 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.438 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.438 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.438 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:37.438 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:37.438 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:34:37.438 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:37.438 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:37.438 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:37.438 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:37.438 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTY3NDEwYTJiOTM3NjdiODQ0OWQzZmJjYzA5NjAwODOCVOl1: 00:34:37.438 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmMzNmJlMDZkZmFiZWQwZTc4NzBmMzM5OTA3YjhhZjIxNjM2ODFjODY3NzY4YmI2NTIxZDMyMGY0NWQyZDcwNClOKQE=: 00:34:37.438 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:37.438 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:37.438 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTY3NDEwYTJiOTM3NjdiODQ0OWQzZmJjYzA5NjAwODOCVOl1: 00:34:37.438 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmMzNmJlMDZkZmFiZWQwZTc4NzBmMzM5OTA3YjhhZjIxNjM2ODFjODY3NzY4YmI2NTIxZDMyMGY0NWQyZDcwNClOKQE=: ]] 00:34:37.438 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmMzNmJlMDZkZmFiZWQwZTc4NzBmMzM5OTA3YjhhZjIxNjM2ODFjODY3NzY4YmI2NTIxZDMyMGY0NWQyZDcwNClOKQE=: 00:34:37.438 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:34:37.438 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:37.438 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:37.438 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:37.438 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:37.438 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:37.438 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:37.438 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.438 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.438 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.438 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:37.438 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:37.438 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:34:37.438 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:37.438 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:37.438 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:37.438 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:37.438 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:37.439 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:37.439 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:37.439 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:37.439 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:37.439 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.439 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.005 nvme0n1 00:34:38.005 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.005 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:38.005 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.005 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.005 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:38.005 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.005 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:38.005 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:38.005 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.005 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.005 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.005 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:38.005 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:34:38.005 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:38.005 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:38.005 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:38.005 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:38.005 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTg1M2ZjMTcwOWUyMmU5OGNiMjJkMWE2ODFhYTAyMzliNmZjZDEzZjM3NTU2MWU1YdCrRw==: 00:34:38.005 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDNmMDQyYWI3ZDI3ODcxMWFjMjIxNGUxOTA4NmRiOGIxNTIyM2MxNjVmYTM3MTUzWUs3Gw==: 00:34:38.005 09:34:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:38.005 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:38.005 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTg1M2ZjMTcwOWUyMmU5OGNiMjJkMWE2ODFhYTAyMzliNmZjZDEzZjM3NTU2MWU1YdCrRw==: 00:34:38.005 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDNmMDQyYWI3ZDI3ODcxMWFjMjIxNGUxOTA4NmRiOGIxNTIyM2MxNjVmYTM3MTUzWUs3Gw==: ]] 00:34:38.005 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDNmMDQyYWI3ZDI3ODcxMWFjMjIxNGUxOTA4NmRiOGIxNTIyM2MxNjVmYTM3MTUzWUs3Gw==: 00:34:38.005 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:34:38.005 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:38.005 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:38.005 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:38.005 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:38.005 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:38.005 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:38.005 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.005 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.005 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.005 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:38.005 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:38.005 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:38.005 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:38.005 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:38.005 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:38.005 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:38.005 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:38.005 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:38.005 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:38.005 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:38.005 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:38.005 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.005 09:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.264 nvme0n1 00:34:38.264 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:34:38.264 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:38.264 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:38.264 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.264 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.264 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.264 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:38.264 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:38.264 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.264 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.264 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.264 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:38.264 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:34:38.264 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:38.264 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:38.264 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:38.264 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:38.264 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWE4N2UyODUwMGRjYzQ4YThmNDkwYjQxM2ViYjI4NGNi8TS4: 00:34:38.264 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjUzM2NhMTg1ZjhhZWJjMzYyZmExMDU1ZWUyMmJhM2UFu8bR: 00:34:38.264 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:38.264 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:38.264 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWE4N2UyODUwMGRjYzQ4YThmNDkwYjQxM2ViYjI4NGNi8TS4: 00:34:38.264 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjUzM2NhMTg1ZjhhZWJjMzYyZmExMDU1ZWUyMmJhM2UFu8bR: ]] 00:34:38.264 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjUzM2NhMTg1ZjhhZWJjMzYyZmExMDU1ZWUyMmJhM2UFu8bR: 00:34:38.264 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:34:38.264 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:38.264 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:38.264 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:38.264 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:38.264 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:38.264 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:38.265 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
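
[annotation] Every iteration recorded in this part of the trace follows the same four host-side RPC steps. The sketch below reconstructs one iteration from the xtrace; rpc_cmd stands in for the test suite's scripts/rpc.py wrapper, and the digest/dhgroup/keyid assignments are illustrative values, not lines copied from host/auth.sh.

    digest=sha256 dhgroup=ffdhe4096 keyid=2

    # 1. Restrict the initiator to the digest/DH-group pair under test.
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # 2. Attach to the target at 10.0.0.1:4420, authenticating with keyN and,
    #    when a controller key is configured, the bidirectional ckeyN.
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

    # 3. Confirm the controller actually came up under the expected name.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

    # 4. Detach so the next digest/dhgroup/key combination starts clean.
    rpc_cmd bdev_nvme_detach_controller nvme0
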
00:34:38.265 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.265 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.265 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:38.265 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:38.265 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:38.265 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:38.265 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:38.265 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:38.265 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:38.265 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:38.265 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:38.265 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:38.265 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:38.265 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:38.265 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.265 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.524 nvme0n1 00:34:38.524 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.524 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:38.524 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:38.524 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.524 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.524 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.524 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:38.524 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:38.524 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.524 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.524 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.524 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:38.524 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:34:38.524 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:38.524 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:38.524 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:34:38.524 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:38.524 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NThjZmExYjczOGYxNGRiMGVjN2ZjY2UyMzViMzI1YTU2MWVjMjE0YzAyNDhkMGEymDelkg==: 00:34:38.524 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2YxMDQzNzU3ZGNkY2E0ZDI3ZDBlOWZlZGIwZWYyOWF+2YFm: 00:34:38.524 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:38.524 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:38.524 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NThjZmExYjczOGYxNGRiMGVjN2ZjY2UyMzViMzI1YTU2MWVjMjE0YzAyNDhkMGEymDelkg==: 00:34:38.524 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2YxMDQzNzU3ZGNkY2E0ZDI3ZDBlOWZlZGIwZWYyOWF+2YFm: ]] 00:34:38.524 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2YxMDQzNzU3ZGNkY2E0ZDI3ZDBlOWZlZGIwZWYyOWF+2YFm: 00:34:38.524 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:34:38.524 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:38.524 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:38.524 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:38.524 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:38.524 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:38.524 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:38.524 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.524 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.524 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.524 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:38.524 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:38.524 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:38.524 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:38.524 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:38.524 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:38.524 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:38.524 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:38.524 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:38.524 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:38.524 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:38.524 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:38.524 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.524 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.783 nvme0n1 00:34:38.783 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.783 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:38.783 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.783 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:38.783 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.041 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.041 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:39.041 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:39.041 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.041 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.041 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.041 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:39.041 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:34:39.041 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:39.041 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:39.041 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:39.041 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:39.041 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmU0Zjc4OTY4Mjk0NjQzZTYxYTcwYjkyYjEzMTVmNDFlZjFiMmIxY2I2NjFkMzNiNWZiOGQ2ZmVhZGNiMzJjM7z1+uM=: 00:34:39.041 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:39.041 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:39.041 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:39.041 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmU0Zjc4OTY4Mjk0NjQzZTYxYTcwYjkyYjEzMTVmNDFlZjFiMmIxY2I2NjFkMzNiNWZiOGQ2ZmVhZGNiMzJjM7z1+uM=: 00:34:39.041 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:39.041 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:34:39.041 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:39.041 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:39.041 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:39.041 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:39.041 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:39.041 09:34:43 
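
[annotation] The nvmf/common.sh@769-783 expansion that repeats before every attach is get_main_ns_ip, the helper that decides which address to dial. A sketch of it as reconstructed from the trace is below; the name of the transport variable and the fallback branches that never fire in this run (@779-782) are assumptions.

    # Reconstructed from the xtrace: maps the active transport to the variable
    # holding the address to connect to. TEST_TRANSPORT is an assumed name; the
    # trace only shows the expanded value "tcp".
    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        [[ -z $TEST_TRANSPORT ]] && return 1
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}
        [[ -z ${!ip} ]] && return 1          # NVMF_INITIATOR_IP is 10.0.0.1 in this run
        echo "${!ip}"
    }
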
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:39.041 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.041 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.041 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.041 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:39.041 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:39.041 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:39.041 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:39.041 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:39.041 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:39.041 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:39.041 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:39.041 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:39.041 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:39.041 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:39.041 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:39.041 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.041 09:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.300 nvme0n1 00:34:39.300 09:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.300 09:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:39.300 09:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.300 09:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.300 09:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:39.300 09:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.300 09:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:39.300 09:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:39.300 09:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.300 09:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.300 09:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.300 09:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:39.300 09:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:39.300 09:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:34:39.300 09:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:39.300 09:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:39.300 09:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:39.300 09:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:39.300 09:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTY3NDEwYTJiOTM3NjdiODQ0OWQzZmJjYzA5NjAwODOCVOl1: 00:34:39.300 09:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmMzNmJlMDZkZmFiZWQwZTc4NzBmMzM5OTA3YjhhZjIxNjM2ODFjODY3NzY4YmI2NTIxZDMyMGY0NWQyZDcwNClOKQE=: 00:34:39.300 09:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:39.300 09:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:39.300 09:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTY3NDEwYTJiOTM3NjdiODQ0OWQzZmJjYzA5NjAwODOCVOl1: 00:34:39.300 09:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmMzNmJlMDZkZmFiZWQwZTc4NzBmMzM5OTA3YjhhZjIxNjM2ODFjODY3NzY4YmI2NTIxZDMyMGY0NWQyZDcwNClOKQE=: ]] 00:34:39.300 09:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmMzNmJlMDZkZmFiZWQwZTc4NzBmMzM5OTA3YjhhZjIxNjM2ODFjODY3NzY4YmI2NTIxZDMyMGY0NWQyZDcwNClOKQE=: 00:34:39.300 09:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:34:39.300 09:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:39.300 09:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:39.300 09:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:39.300 09:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:39.300 09:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:39.300 09:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:39.300 09:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.300 09:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.300 09:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.300 09:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:39.300 09:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:39.300 09:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:39.300 09:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:39.300 09:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:39.300 09:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:39.300 09:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:39.300 09:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:39.300 09:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:34:39.300 09:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:39.300 09:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:39.300 09:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:39.300 09:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.300 09:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.867 nvme0n1 00:34:39.867 09:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.867 09:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:39.867 09:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.867 09:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.867 09:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:39.867 09:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.867 09:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:39.867 09:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:39.867 09:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.867 09:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.867 09:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.867 09:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:39.867 09:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:34:39.867 09:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:39.867 09:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:39.867 09:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:39.867 09:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:39.867 09:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTg1M2ZjMTcwOWUyMmU5OGNiMjJkMWE2ODFhYTAyMzliNmZjZDEzZjM3NTU2MWU1YdCrRw==: 00:34:39.867 09:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDNmMDQyYWI3ZDI3ODcxMWFjMjIxNGUxOTA4NmRiOGIxNTIyM2MxNjVmYTM3MTUzWUs3Gw==: 00:34:39.867 09:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:39.867 09:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:39.867 09:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTg1M2ZjMTcwOWUyMmU5OGNiMjJkMWE2ODFhYTAyMzliNmZjZDEzZjM3NTU2MWU1YdCrRw==: 00:34:39.867 09:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDNmMDQyYWI3ZDI3ODcxMWFjMjIxNGUxOTA4NmRiOGIxNTIyM2MxNjVmYTM3MTUzWUs3Gw==: ]] 00:34:39.867 09:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDNmMDQyYWI3ZDI3ODcxMWFjMjIxNGUxOTA4NmRiOGIxNTIyM2MxNjVmYTM3MTUzWUs3Gw==: 
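
[annotation] The host/auth.sh@101-103 frames that keep reappearing are the loop driving all of the above: every DH group is exercised with every configured key index. A rough sketch of that control flow, inferred from the xtrace rather than copied from the script; the surrounding digest loop and the exact contents of the keys/ckeys arrays are not fully visible here, so treat their shape as an assumption.

    # Inferred control flow; the values mirror what this section exercises
    # (sha256 with ffdhe3072, ffdhe4096, ffdhe6144, ffdhe8192 and key ids 0-4).
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # program the kernel nvmet target side
            connect_authenticate "$digest" "$dhgroup" "$keyid"  # attach, verify nvme0, detach
        done
    done
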
00:34:39.867 09:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:34:39.867 09:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:39.867 09:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:39.867 09:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:39.867 09:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:39.867 09:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:39.867 09:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:39.867 09:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.867 09:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.867 09:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.867 09:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:39.867 09:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:39.867 09:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:39.867 09:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:39.867 09:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:39.867 09:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:39.867 09:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:39.867 09:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:39.867 09:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:39.867 09:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:39.867 09:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:39.867 09:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:39.867 09:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.867 09:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.433 nvme0n1 00:34:40.433 09:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.434 09:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:40.434 09:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.434 09:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:40.434 09:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.434 09:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.434 09:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:40.434 09:34:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:40.434 09:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.434 09:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.691 09:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.692 09:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:40.692 09:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:34:40.692 09:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:40.692 09:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:40.692 09:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:40.692 09:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:40.692 09:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWE4N2UyODUwMGRjYzQ4YThmNDkwYjQxM2ViYjI4NGNi8TS4: 00:34:40.692 09:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjUzM2NhMTg1ZjhhZWJjMzYyZmExMDU1ZWUyMmJhM2UFu8bR: 00:34:40.692 09:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:40.692 09:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:40.692 09:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWE4N2UyODUwMGRjYzQ4YThmNDkwYjQxM2ViYjI4NGNi8TS4: 00:34:40.692 09:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjUzM2NhMTg1ZjhhZWJjMzYyZmExMDU1ZWUyMmJhM2UFu8bR: ]] 00:34:40.692 09:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjUzM2NhMTg1ZjhhZWJjMzYyZmExMDU1ZWUyMmJhM2UFu8bR: 00:34:40.692 09:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:34:40.692 09:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:40.692 09:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:40.692 09:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:40.692 09:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:40.692 09:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:40.692 09:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:40.692 09:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.692 09:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.692 09:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.692 09:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:40.692 09:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:40.692 09:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:40.692 09:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:40.692 09:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:40.692 09:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:40.692 09:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:40.692 09:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:40.692 09:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:40.692 09:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:40.692 09:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:40.692 09:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:40.692 09:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.692 09:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.258 nvme0n1 00:34:41.258 09:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.258 09:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:41.258 09:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:41.258 09:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.258 09:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.258 09:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.258 09:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:41.258 09:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:41.258 09:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.258 09:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.258 09:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.258 09:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:41.258 09:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:34:41.258 09:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:41.258 09:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:41.258 09:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:41.258 09:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:41.258 09:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NThjZmExYjczOGYxNGRiMGVjN2ZjY2UyMzViMzI1YTU2MWVjMjE0YzAyNDhkMGEymDelkg==: 00:34:41.258 09:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2YxMDQzNzU3ZGNkY2E0ZDI3ZDBlOWZlZGIwZWYyOWF+2YFm: 00:34:41.258 09:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:41.258 09:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:41.258 09:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:NThjZmExYjczOGYxNGRiMGVjN2ZjY2UyMzViMzI1YTU2MWVjMjE0YzAyNDhkMGEymDelkg==: 00:34:41.258 09:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2YxMDQzNzU3ZGNkY2E0ZDI3ZDBlOWZlZGIwZWYyOWF+2YFm: ]] 00:34:41.258 09:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2YxMDQzNzU3ZGNkY2E0ZDI3ZDBlOWZlZGIwZWYyOWF+2YFm: 00:34:41.258 09:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:34:41.258 09:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:41.258 09:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:41.258 09:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:41.258 09:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:41.258 09:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:41.258 09:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:41.258 09:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.258 09:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.258 09:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.258 09:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:41.258 09:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:41.258 09:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:41.258 09:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:41.258 09:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:41.258 09:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:41.258 09:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:41.258 09:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:41.258 09:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:41.258 09:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:41.258 09:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:41.258 09:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:41.258 09:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.258 09:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.824 nvme0n1 00:34:41.824 09:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.824 09:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:41.824 09:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:41.824 09:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.824 09:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.824 09:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.824 09:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:41.824 09:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:41.824 09:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.824 09:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.824 09:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.824 09:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:41.824 09:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:34:41.824 09:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:41.824 09:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:41.824 09:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:41.824 09:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:41.824 09:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmU0Zjc4OTY4Mjk0NjQzZTYxYTcwYjkyYjEzMTVmNDFlZjFiMmIxY2I2NjFkMzNiNWZiOGQ2ZmVhZGNiMzJjM7z1+uM=: 00:34:41.824 09:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:41.824 09:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:41.824 09:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:41.824 09:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmU0Zjc4OTY4Mjk0NjQzZTYxYTcwYjkyYjEzMTVmNDFlZjFiMmIxY2I2NjFkMzNiNWZiOGQ2ZmVhZGNiMzJjM7z1+uM=: 00:34:41.824 09:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:41.824 09:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:34:41.824 09:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:41.824 09:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:41.824 09:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:41.824 09:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:41.824 09:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:41.824 09:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:41.824 09:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.824 09:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.824 09:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.824 09:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:41.824 09:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:41.824 09:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:34:41.824 09:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:41.824 09:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:41.824 09:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:41.824 09:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:41.824 09:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:41.824 09:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:41.824 09:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:41.824 09:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:41.824 09:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:41.824 09:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.824 09:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.390 nvme0n1 00:34:42.390 09:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.390 09:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:42.390 09:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.390 09:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:42.390 09:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.390 09:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.390 09:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:42.390 09:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:42.390 09:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.390 09:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.390 09:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.390 09:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:42.390 09:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:42.390 09:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:34:42.390 09:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:42.390 09:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:42.390 09:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:42.390 09:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:42.390 09:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTY3NDEwYTJiOTM3NjdiODQ0OWQzZmJjYzA5NjAwODOCVOl1: 00:34:42.390 09:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZmMzNmJlMDZkZmFiZWQwZTc4NzBmMzM5OTA3YjhhZjIxNjM2ODFjODY3NzY4YmI2NTIxZDMyMGY0NWQyZDcwNClOKQE=: 00:34:42.390 09:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:42.390 09:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:42.390 09:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTY3NDEwYTJiOTM3NjdiODQ0OWQzZmJjYzA5NjAwODOCVOl1: 00:34:42.390 09:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmMzNmJlMDZkZmFiZWQwZTc4NzBmMzM5OTA3YjhhZjIxNjM2ODFjODY3NzY4YmI2NTIxZDMyMGY0NWQyZDcwNClOKQE=: ]] 00:34:42.390 09:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmMzNmJlMDZkZmFiZWQwZTc4NzBmMzM5OTA3YjhhZjIxNjM2ODFjODY3NzY4YmI2NTIxZDMyMGY0NWQyZDcwNClOKQE=: 00:34:42.390 09:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:34:42.390 09:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:42.390 09:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:42.390 09:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:42.390 09:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:42.390 09:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:42.390 09:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:42.390 09:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.390 09:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.390 09:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.390 09:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:42.390 09:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:42.390 09:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:42.390 09:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:42.390 09:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:42.391 09:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:42.391 09:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:42.391 09:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:42.391 09:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:42.391 09:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:42.391 09:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:42.391 09:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:42.391 09:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.391 09:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:43.324 nvme0n1 00:34:43.324 09:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.324 09:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:43.324 09:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.324 09:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:43.324 09:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.324 09:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.324 09:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:43.324 09:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:43.324 09:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.324 09:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.324 09:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.324 09:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:43.324 09:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:34:43.324 09:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:43.324 09:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:43.324 09:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:43.324 09:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:43.324 09:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTg1M2ZjMTcwOWUyMmU5OGNiMjJkMWE2ODFhYTAyMzliNmZjZDEzZjM3NTU2MWU1YdCrRw==: 00:34:43.324 09:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDNmMDQyYWI3ZDI3ODcxMWFjMjIxNGUxOTA4NmRiOGIxNTIyM2MxNjVmYTM3MTUzWUs3Gw==: 00:34:43.324 09:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:43.324 09:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:43.324 09:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTg1M2ZjMTcwOWUyMmU5OGNiMjJkMWE2ODFhYTAyMzliNmZjZDEzZjM3NTU2MWU1YdCrRw==: 00:34:43.582 09:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDNmMDQyYWI3ZDI3ODcxMWFjMjIxNGUxOTA4NmRiOGIxNTIyM2MxNjVmYTM3MTUzWUs3Gw==: ]] 00:34:43.582 09:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDNmMDQyYWI3ZDI3ODcxMWFjMjIxNGUxOTA4NmRiOGIxNTIyM2MxNjVmYTM3MTUzWUs3Gw==: 00:34:43.582 09:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:34:43.582 09:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:43.582 09:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:43.582 09:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:43.582 09:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:43.582 09:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:34:43.582 09:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:43.582 09:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.582 09:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.582 09:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.582 09:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:43.582 09:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:43.582 09:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:43.582 09:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:43.582 09:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:43.582 09:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:43.582 09:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:43.582 09:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:43.582 09:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:43.582 09:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:43.582 09:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:43.582 09:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:43.582 09:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.582 09:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.516 nvme0n1 00:34:44.516 09:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.516 09:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:44.516 09:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.516 09:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.516 09:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:44.516 09:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.516 09:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:44.516 09:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:44.516 09:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.516 09:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.516 09:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.516 09:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:44.516 09:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:34:44.516 
09:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:44.516 09:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:44.516 09:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:44.516 09:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:44.516 09:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWE4N2UyODUwMGRjYzQ4YThmNDkwYjQxM2ViYjI4NGNi8TS4: 00:34:44.516 09:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjUzM2NhMTg1ZjhhZWJjMzYyZmExMDU1ZWUyMmJhM2UFu8bR: 00:34:44.516 09:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:44.516 09:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:44.516 09:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWE4N2UyODUwMGRjYzQ4YThmNDkwYjQxM2ViYjI4NGNi8TS4: 00:34:44.516 09:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjUzM2NhMTg1ZjhhZWJjMzYyZmExMDU1ZWUyMmJhM2UFu8bR: ]] 00:34:44.516 09:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjUzM2NhMTg1ZjhhZWJjMzYyZmExMDU1ZWUyMmJhM2UFu8bR: 00:34:44.516 09:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:34:44.516 09:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:44.516 09:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:44.516 09:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:44.516 09:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:44.516 09:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:44.516 09:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:44.516 09:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.516 09:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.516 09:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.516 09:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:44.516 09:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:44.516 09:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:44.516 09:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:44.516 09:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:44.516 09:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:44.516 09:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:44.516 09:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:44.516 09:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:44.516 09:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:44.516 09:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:44.516 09:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:44.516 09:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.516 09:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.450 nvme0n1 00:34:45.450 09:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.450 09:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:45.450 09:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:45.450 09:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.450 09:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.450 09:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.450 09:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:45.450 09:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:45.450 09:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.450 09:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.450 09:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.450 09:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:45.450 09:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:34:45.450 09:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:45.450 09:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:45.450 09:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:45.450 09:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:45.450 09:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NThjZmExYjczOGYxNGRiMGVjN2ZjY2UyMzViMzI1YTU2MWVjMjE0YzAyNDhkMGEymDelkg==: 00:34:45.450 09:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2YxMDQzNzU3ZGNkY2E0ZDI3ZDBlOWZlZGIwZWYyOWF+2YFm: 00:34:45.450 09:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:45.450 09:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:45.450 09:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NThjZmExYjczOGYxNGRiMGVjN2ZjY2UyMzViMzI1YTU2MWVjMjE0YzAyNDhkMGEymDelkg==: 00:34:45.450 09:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2YxMDQzNzU3ZGNkY2E0ZDI3ZDBlOWZlZGIwZWYyOWF+2YFm: ]] 00:34:45.450 09:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2YxMDQzNzU3ZGNkY2E0ZDI3ZDBlOWZlZGIwZWYyOWF+2YFm: 00:34:45.450 09:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:34:45.450 09:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:45.450 
09:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:45.450 09:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:45.450 09:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:45.450 09:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:45.450 09:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:45.450 09:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.450 09:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.450 09:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.450 09:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:45.450 09:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:45.450 09:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:45.450 09:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:45.450 09:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:45.450 09:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:45.450 09:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:45.450 09:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:45.450 09:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:45.450 09:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:45.450 09:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:45.450 09:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:45.450 09:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.450 09:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.385 nvme0n1 00:34:46.385 09:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.385 09:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:46.385 09:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:46.385 09:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.385 09:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.385 09:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.385 09:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:46.385 09:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:46.385 09:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.385 09:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:46.385 09:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.385 09:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:46.385 09:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:34:46.385 09:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:46.385 09:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:46.385 09:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:46.385 09:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:46.385 09:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmU0Zjc4OTY4Mjk0NjQzZTYxYTcwYjkyYjEzMTVmNDFlZjFiMmIxY2I2NjFkMzNiNWZiOGQ2ZmVhZGNiMzJjM7z1+uM=: 00:34:46.385 09:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:46.385 09:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:46.385 09:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:46.385 09:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmU0Zjc4OTY4Mjk0NjQzZTYxYTcwYjkyYjEzMTVmNDFlZjFiMmIxY2I2NjFkMzNiNWZiOGQ2ZmVhZGNiMzJjM7z1+uM=: 00:34:46.385 09:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:46.385 09:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:34:46.385 09:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:46.385 09:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:46.385 09:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:46.385 09:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:46.385 09:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:46.385 09:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:46.385 09:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.385 09:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.385 09:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.385 09:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:46.385 09:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:46.385 09:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:46.385 09:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:46.385 09:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:46.385 09:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:46.385 09:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:46.385 09:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:46.385 09:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:46.385 09:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:46.385 09:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:46.385 09:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:46.385 09:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.385 09:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.319 nvme0n1 00:34:47.319 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.319 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:47.319 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.319 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:47.319 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.319 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.578 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:47.578 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:47.578 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.578 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.578 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.578 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:47.578 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:47.578 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:47.578 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:34:47.578 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:47.578 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:47.578 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:47.578 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:47.578 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTY3NDEwYTJiOTM3NjdiODQ0OWQzZmJjYzA5NjAwODOCVOl1: 00:34:47.578 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmMzNmJlMDZkZmFiZWQwZTc4NzBmMzM5OTA3YjhhZjIxNjM2ODFjODY3NzY4YmI2NTIxZDMyMGY0NWQyZDcwNClOKQE=: 00:34:47.578 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:47.578 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:47.578 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTY3NDEwYTJiOTM3NjdiODQ0OWQzZmJjYzA5NjAwODOCVOl1: 00:34:47.578 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:ZmMzNmJlMDZkZmFiZWQwZTc4NzBmMzM5OTA3YjhhZjIxNjM2ODFjODY3NzY4YmI2NTIxZDMyMGY0NWQyZDcwNClOKQE=: ]] 00:34:47.578 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmMzNmJlMDZkZmFiZWQwZTc4NzBmMzM5OTA3YjhhZjIxNjM2ODFjODY3NzY4YmI2NTIxZDMyMGY0NWQyZDcwNClOKQE=: 00:34:47.578 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:34:47.578 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:47.578 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:47.578 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:47.578 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:47.578 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:47.578 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:47.578 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.578 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.578 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.578 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:47.578 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:47.578 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:47.578 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:47.578 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:47.578 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:47.578 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:47.578 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:47.578 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:47.578 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:47.578 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:47.578 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:47.578 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.578 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.578 nvme0n1 00:34:47.578 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.578 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:47.578 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:47.578 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.578 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:47.578 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.578 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:47.578 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:47.578 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.578 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.579 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.579 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:47.579 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:34:47.579 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:47.579 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:47.579 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:47.579 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:47.579 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTg1M2ZjMTcwOWUyMmU5OGNiMjJkMWE2ODFhYTAyMzliNmZjZDEzZjM3NTU2MWU1YdCrRw==: 00:34:47.579 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDNmMDQyYWI3ZDI3ODcxMWFjMjIxNGUxOTA4NmRiOGIxNTIyM2MxNjVmYTM3MTUzWUs3Gw==: 00:34:47.579 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:47.579 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:47.579 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTg1M2ZjMTcwOWUyMmU5OGNiMjJkMWE2ODFhYTAyMzliNmZjZDEzZjM3NTU2MWU1YdCrRw==: 00:34:47.579 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDNmMDQyYWI3ZDI3ODcxMWFjMjIxNGUxOTA4NmRiOGIxNTIyM2MxNjVmYTM3MTUzWUs3Gw==: ]] 00:34:47.579 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDNmMDQyYWI3ZDI3ODcxMWFjMjIxNGUxOTA4NmRiOGIxNTIyM2MxNjVmYTM3MTUzWUs3Gw==: 00:34:47.579 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:34:47.579 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:47.579 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:47.579 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:47.579 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:47.579 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:47.579 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:47.579 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.579 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.837 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.837 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:34:47.837 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:47.837 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:47.837 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:47.837 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:47.837 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:47.837 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:47.837 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:47.837 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:47.837 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:47.837 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:47.837 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:47.837 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.837 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.837 nvme0n1 00:34:47.837 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.837 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:47.837 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:47.837 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.837 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.837 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.837 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:47.837 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:47.837 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.837 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.837 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.837 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:47.837 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:34:47.837 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:47.837 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:47.837 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:47.837 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:47.837 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWE4N2UyODUwMGRjYzQ4YThmNDkwYjQxM2ViYjI4NGNi8TS4: 00:34:47.837 09:34:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjUzM2NhMTg1ZjhhZWJjMzYyZmExMDU1ZWUyMmJhM2UFu8bR: 00:34:47.837 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:47.837 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:47.837 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWE4N2UyODUwMGRjYzQ4YThmNDkwYjQxM2ViYjI4NGNi8TS4: 00:34:47.837 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjUzM2NhMTg1ZjhhZWJjMzYyZmExMDU1ZWUyMmJhM2UFu8bR: ]] 00:34:47.837 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjUzM2NhMTg1ZjhhZWJjMzYyZmExMDU1ZWUyMmJhM2UFu8bR: 00:34:47.837 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:34:47.837 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:47.837 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:47.837 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:47.837 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:47.837 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:47.837 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:47.837 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.837 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.837 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.837 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:47.837 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:47.837 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:47.837 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:47.837 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:47.837 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:47.837 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:47.837 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:47.837 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:47.837 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:47.837 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:47.837 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:47.837 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.837 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.095 nvme0n1 00:34:48.095 09:34:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.095 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:48.095 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.095 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.095 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:48.095 09:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.095 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:48.095 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:48.095 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.095 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.095 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.095 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:48.095 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:34:48.095 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:48.095 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:48.095 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:48.095 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:48.095 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NThjZmExYjczOGYxNGRiMGVjN2ZjY2UyMzViMzI1YTU2MWVjMjE0YzAyNDhkMGEymDelkg==: 00:34:48.095 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2YxMDQzNzU3ZGNkY2E0ZDI3ZDBlOWZlZGIwZWYyOWF+2YFm: 00:34:48.095 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:48.095 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:48.095 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NThjZmExYjczOGYxNGRiMGVjN2ZjY2UyMzViMzI1YTU2MWVjMjE0YzAyNDhkMGEymDelkg==: 00:34:48.095 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2YxMDQzNzU3ZGNkY2E0ZDI3ZDBlOWZlZGIwZWYyOWF+2YFm: ]] 00:34:48.095 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2YxMDQzNzU3ZGNkY2E0ZDI3ZDBlOWZlZGIwZWYyOWF+2YFm: 00:34:48.095 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:34:48.095 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:48.095 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:48.095 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:48.095 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:48.095 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:48.095 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:34:48.095 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.095 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.095 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.095 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:48.095 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:48.095 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:48.095 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:48.095 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:48.095 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:48.095 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:48.095 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:48.095 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:48.095 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:48.095 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:48.095 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:48.095 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.095 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.352 nvme0n1 00:34:48.352 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.352 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:48.352 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.352 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:48.352 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.352 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.352 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:48.352 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:48.352 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.352 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.352 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.352 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:48.352 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:34:48.352 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:48.352 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:34:48.352 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:48.352 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:48.352 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmU0Zjc4OTY4Mjk0NjQzZTYxYTcwYjkyYjEzMTVmNDFlZjFiMmIxY2I2NjFkMzNiNWZiOGQ2ZmVhZGNiMzJjM7z1+uM=: 00:34:48.352 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:48.352 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:48.352 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:48.353 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmU0Zjc4OTY4Mjk0NjQzZTYxYTcwYjkyYjEzMTVmNDFlZjFiMmIxY2I2NjFkMzNiNWZiOGQ2ZmVhZGNiMzJjM7z1+uM=: 00:34:48.353 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:48.353 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:34:48.353 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:48.353 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:48.353 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:48.353 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:48.353 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:48.353 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:48.353 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.353 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.353 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.353 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:48.353 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:48.353 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:48.353 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:48.353 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:48.353 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:48.353 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:48.353 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:48.353 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:48.353 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:48.353 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:48.353 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:48.353 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.353 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.611 nvme0n1 00:34:48.611 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.611 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:48.611 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:48.611 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.611 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.611 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.611 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:48.611 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:48.611 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.611 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.611 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.611 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:48.611 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:48.611 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:34:48.611 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:48.611 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:48.611 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:48.611 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:48.611 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTY3NDEwYTJiOTM3NjdiODQ0OWQzZmJjYzA5NjAwODOCVOl1: 00:34:48.611 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmMzNmJlMDZkZmFiZWQwZTc4NzBmMzM5OTA3YjhhZjIxNjM2ODFjODY3NzY4YmI2NTIxZDMyMGY0NWQyZDcwNClOKQE=: 00:34:48.611 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:48.611 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:48.611 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTY3NDEwYTJiOTM3NjdiODQ0OWQzZmJjYzA5NjAwODOCVOl1: 00:34:48.611 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmMzNmJlMDZkZmFiZWQwZTc4NzBmMzM5OTA3YjhhZjIxNjM2ODFjODY3NzY4YmI2NTIxZDMyMGY0NWQyZDcwNClOKQE=: ]] 00:34:48.611 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmMzNmJlMDZkZmFiZWQwZTc4NzBmMzM5OTA3YjhhZjIxNjM2ODFjODY3NzY4YmI2NTIxZDMyMGY0NWQyZDcwNClOKQE=: 00:34:48.611 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:34:48.611 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:48.611 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:48.611 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:34:48.611 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:48.611 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:48.611 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:48.611 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.611 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.611 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.611 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:48.611 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:48.611 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:48.611 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:48.611 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:48.611 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:48.611 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:48.611 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:48.611 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:48.611 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:48.611 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:48.611 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:48.611 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.611 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.870 nvme0n1 00:34:48.870 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.870 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:48.870 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.870 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:48.870 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.870 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.870 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:48.870 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:48.870 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.870 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.870 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.870 
09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:48.870 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:34:48.870 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:48.870 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:48.870 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:48.870 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:48.870 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTg1M2ZjMTcwOWUyMmU5OGNiMjJkMWE2ODFhYTAyMzliNmZjZDEzZjM3NTU2MWU1YdCrRw==: 00:34:48.870 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDNmMDQyYWI3ZDI3ODcxMWFjMjIxNGUxOTA4NmRiOGIxNTIyM2MxNjVmYTM3MTUzWUs3Gw==: 00:34:48.870 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:48.870 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:48.870 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTg1M2ZjMTcwOWUyMmU5OGNiMjJkMWE2ODFhYTAyMzliNmZjZDEzZjM3NTU2MWU1YdCrRw==: 00:34:48.870 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDNmMDQyYWI3ZDI3ODcxMWFjMjIxNGUxOTA4NmRiOGIxNTIyM2MxNjVmYTM3MTUzWUs3Gw==: ]] 00:34:48.870 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDNmMDQyYWI3ZDI3ODcxMWFjMjIxNGUxOTA4NmRiOGIxNTIyM2MxNjVmYTM3MTUzWUs3Gw==: 00:34:48.870 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:34:48.870 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:48.870 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:48.870 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:48.870 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:48.870 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:48.870 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:48.871 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.871 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.871 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.871 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:48.871 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:48.871 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:48.871 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:48.871 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:48.871 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:48.871 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:48.871 09:34:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:48.871 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:48.871 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:48.871 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:48.871 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:48.871 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.871 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.129 nvme0n1 00:34:49.129 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.129 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:49.129 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.129 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:49.129 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.129 09:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.129 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:49.129 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:49.129 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.129 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.129 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.129 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:49.129 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:34:49.129 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:49.129 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:49.129 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:49.129 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:49.129 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWE4N2UyODUwMGRjYzQ4YThmNDkwYjQxM2ViYjI4NGNi8TS4: 00:34:49.129 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjUzM2NhMTg1ZjhhZWJjMzYyZmExMDU1ZWUyMmJhM2UFu8bR: 00:34:49.129 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:49.129 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:49.129 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWE4N2UyODUwMGRjYzQ4YThmNDkwYjQxM2ViYjI4NGNi8TS4: 00:34:49.129 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjUzM2NhMTg1ZjhhZWJjMzYyZmExMDU1ZWUyMmJhM2UFu8bR: ]] 00:34:49.129 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:NjUzM2NhMTg1ZjhhZWJjMzYyZmExMDU1ZWUyMmJhM2UFu8bR: 00:34:49.129 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:34:49.129 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:49.129 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:49.129 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:49.129 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:49.129 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:49.129 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:49.129 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.129 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.129 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.129 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:49.129 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:49.129 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:49.129 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:49.129 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:49.129 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:49.129 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:49.129 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:49.129 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:49.129 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:49.129 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:49.129 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:49.129 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.129 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.394 nvme0n1 00:34:49.394 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.394 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:49.394 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:49.394 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.394 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.394 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.394 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:34:49.394 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:49.394 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.394 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.394 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.394 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:49.394 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:34:49.394 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:49.394 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:49.394 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:49.394 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:49.394 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NThjZmExYjczOGYxNGRiMGVjN2ZjY2UyMzViMzI1YTU2MWVjMjE0YzAyNDhkMGEymDelkg==: 00:34:49.394 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2YxMDQzNzU3ZGNkY2E0ZDI3ZDBlOWZlZGIwZWYyOWF+2YFm: 00:34:49.395 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:49.395 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:49.395 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NThjZmExYjczOGYxNGRiMGVjN2ZjY2UyMzViMzI1YTU2MWVjMjE0YzAyNDhkMGEymDelkg==: 00:34:49.395 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2YxMDQzNzU3ZGNkY2E0ZDI3ZDBlOWZlZGIwZWYyOWF+2YFm: ]] 00:34:49.395 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2YxMDQzNzU3ZGNkY2E0ZDI3ZDBlOWZlZGIwZWYyOWF+2YFm: 00:34:49.395 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:34:49.395 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:49.395 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:49.395 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:49.395 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:49.395 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:49.395 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:49.395 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.395 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.395 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.395 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:49.395 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:49.395 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:49.395 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:34:49.395 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:49.395 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:49.395 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:49.395 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:49.395 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:49.395 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:49.395 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:49.395 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:49.395 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.395 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.655 nvme0n1 00:34:49.655 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.655 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:49.655 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:49.655 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.655 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.655 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.655 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:49.655 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:49.655 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.655 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.655 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.655 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:49.655 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:34:49.656 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:49.656 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:49.656 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:49.656 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:49.656 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmU0Zjc4OTY4Mjk0NjQzZTYxYTcwYjkyYjEzMTVmNDFlZjFiMmIxY2I2NjFkMzNiNWZiOGQ2ZmVhZGNiMzJjM7z1+uM=: 00:34:49.656 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:49.656 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:49.656 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:49.656 
09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmU0Zjc4OTY4Mjk0NjQzZTYxYTcwYjkyYjEzMTVmNDFlZjFiMmIxY2I2NjFkMzNiNWZiOGQ2ZmVhZGNiMzJjM7z1+uM=: 00:34:49.656 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:49.656 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:34:49.656 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:49.656 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:49.656 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:49.656 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:49.656 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:49.656 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:49.656 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.656 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.656 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.656 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:49.656 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:49.656 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:49.656 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:49.656 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:49.656 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:49.656 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:49.656 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:49.656 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:49.656 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:49.656 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:49.656 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:49.656 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.656 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.915 nvme0n1 00:34:49.915 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.915 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:49.915 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:49.915 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.915 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.915 
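From this point the outer loop advances to the next DH group: the same for-dhgroup / for-keyid iteration seen above now runs with ffdhe4096, still under hmac(sha384), again walking key IDs 0 through 4. The shape of the test, reconstructed only from the function names and loop lines visible in the trace (host/auth.sh@101-104), is roughly the following sketch; the actual dhgroups and keys arrays are defined earlier in host/auth.sh, outside this excerpt, and only the values that appear in this log are assumed here:

    for dhgroup in "${dhgroups[@]}"; do            # ffdhe3072, ffdhe4096, ffdhe6144, ... as seen in the trace
        for keyid in "${!keys[@]}"; do             # 0..4 in this run
            nvmet_auth_set_key sha384 "$dhgroup" "$keyid"      # program the target (nvmet) side
            connect_authenticate sha384 "$dhgroup" "$keyid"    # attach, verify and detach on the host side
        done
    done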
09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.915 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:49.915 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:49.915 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.915 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.915 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.915 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:49.915 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:49.915 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:34:49.915 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:49.915 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:49.915 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:49.915 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:49.915 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTY3NDEwYTJiOTM3NjdiODQ0OWQzZmJjYzA5NjAwODOCVOl1: 00:34:49.915 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmMzNmJlMDZkZmFiZWQwZTc4NzBmMzM5OTA3YjhhZjIxNjM2ODFjODY3NzY4YmI2NTIxZDMyMGY0NWQyZDcwNClOKQE=: 00:34:49.915 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:49.915 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:49.915 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTY3NDEwYTJiOTM3NjdiODQ0OWQzZmJjYzA5NjAwODOCVOl1: 00:34:49.915 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmMzNmJlMDZkZmFiZWQwZTc4NzBmMzM5OTA3YjhhZjIxNjM2ODFjODY3NzY4YmI2NTIxZDMyMGY0NWQyZDcwNClOKQE=: ]] 00:34:49.915 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmMzNmJlMDZkZmFiZWQwZTc4NzBmMzM5OTA3YjhhZjIxNjM2ODFjODY3NzY4YmI2NTIxZDMyMGY0NWQyZDcwNClOKQE=: 00:34:49.915 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:34:49.915 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:49.915 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:49.915 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:49.915 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:49.915 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:49.915 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:49.915 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.915 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.915 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:34:49.915 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:49.915 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:49.915 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:49.915 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:49.915 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:49.915 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:49.915 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:49.915 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:49.915 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:49.915 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:49.915 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:49.915 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:49.915 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.915 09:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.174 nvme0n1 00:34:50.174 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.174 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:50.174 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.174 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.174 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:50.174 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.174 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:50.174 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:50.174 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.174 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.174 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.174 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:50.174 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:34:50.174 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:50.174 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:50.174 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:50.174 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:50.174 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZTg1M2ZjMTcwOWUyMmU5OGNiMjJkMWE2ODFhYTAyMzliNmZjZDEzZjM3NTU2MWU1YdCrRw==: 00:34:50.174 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDNmMDQyYWI3ZDI3ODcxMWFjMjIxNGUxOTA4NmRiOGIxNTIyM2MxNjVmYTM3MTUzWUs3Gw==: 00:34:50.174 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:50.174 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:50.174 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTg1M2ZjMTcwOWUyMmU5OGNiMjJkMWE2ODFhYTAyMzliNmZjZDEzZjM3NTU2MWU1YdCrRw==: 00:34:50.174 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDNmMDQyYWI3ZDI3ODcxMWFjMjIxNGUxOTA4NmRiOGIxNTIyM2MxNjVmYTM3MTUzWUs3Gw==: ]] 00:34:50.174 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDNmMDQyYWI3ZDI3ODcxMWFjMjIxNGUxOTA4NmRiOGIxNTIyM2MxNjVmYTM3MTUzWUs3Gw==: 00:34:50.174 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:34:50.432 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:50.432 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:50.432 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:50.432 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:50.432 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:50.432 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:50.432 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.432 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.432 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.432 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:50.432 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:50.432 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:50.432 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:50.432 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:50.432 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:50.432 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:50.432 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:50.432 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:50.432 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:50.432 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:50.432 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:50.432 09:34:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.432 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.690 nvme0n1 00:34:50.690 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.690 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:50.690 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.690 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:50.690 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.690 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.690 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:50.690 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:50.690 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.690 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.690 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.690 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:50.690 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:34:50.690 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:50.690 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:50.690 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:50.690 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:50.690 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWE4N2UyODUwMGRjYzQ4YThmNDkwYjQxM2ViYjI4NGNi8TS4: 00:34:50.690 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjUzM2NhMTg1ZjhhZWJjMzYyZmExMDU1ZWUyMmJhM2UFu8bR: 00:34:50.690 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:50.690 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:50.690 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWE4N2UyODUwMGRjYzQ4YThmNDkwYjQxM2ViYjI4NGNi8TS4: 00:34:50.690 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjUzM2NhMTg1ZjhhZWJjMzYyZmExMDU1ZWUyMmJhM2UFu8bR: ]] 00:34:50.690 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjUzM2NhMTg1ZjhhZWJjMzYyZmExMDU1ZWUyMmJhM2UFu8bR: 00:34:50.690 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:34:50.690 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:50.690 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:50.690 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:50.690 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:50.690 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:50.690 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:50.690 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.690 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.690 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.690 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:50.690 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:50.690 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:50.690 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:50.690 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:50.690 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:50.690 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:50.690 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:50.690 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:50.690 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:50.691 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:50.691 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:50.691 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.691 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.949 nvme0n1 00:34:50.949 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.949 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:50.949 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.949 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.949 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:50.949 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.949 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:50.949 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:50.949 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.949 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.949 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.949 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:50.949 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:34:50.949 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:50.949 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:50.949 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:50.949 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:50.949 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NThjZmExYjczOGYxNGRiMGVjN2ZjY2UyMzViMzI1YTU2MWVjMjE0YzAyNDhkMGEymDelkg==: 00:34:50.949 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2YxMDQzNzU3ZGNkY2E0ZDI3ZDBlOWZlZGIwZWYyOWF+2YFm: 00:34:50.949 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:50.949 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:50.949 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NThjZmExYjczOGYxNGRiMGVjN2ZjY2UyMzViMzI1YTU2MWVjMjE0YzAyNDhkMGEymDelkg==: 00:34:50.949 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2YxMDQzNzU3ZGNkY2E0ZDI3ZDBlOWZlZGIwZWYyOWF+2YFm: ]] 00:34:50.949 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2YxMDQzNzU3ZGNkY2E0ZDI3ZDBlOWZlZGIwZWYyOWF+2YFm: 00:34:50.949 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:34:50.949 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:50.949 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:50.949 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:50.949 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:50.949 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:50.949 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:50.949 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.949 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.949 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.949 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:50.949 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:50.949 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:50.949 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:50.949 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:50.949 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:50.949 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:50.949 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:50.949 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:50.949 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:50.949 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:50.949 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:50.949 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.949 09:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.516 nvme0n1 00:34:51.516 09:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.516 09:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:51.516 09:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:51.516 09:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.516 09:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.516 09:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.516 09:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:51.516 09:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:51.516 09:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.516 09:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.516 09:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.516 09:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:51.516 09:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:34:51.516 09:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:51.516 09:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:51.516 09:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:51.516 09:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:51.516 09:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmU0Zjc4OTY4Mjk0NjQzZTYxYTcwYjkyYjEzMTVmNDFlZjFiMmIxY2I2NjFkMzNiNWZiOGQ2ZmVhZGNiMzJjM7z1+uM=: 00:34:51.516 09:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:51.516 09:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:51.516 09:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:51.516 09:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmU0Zjc4OTY4Mjk0NjQzZTYxYTcwYjkyYjEzMTVmNDFlZjFiMmIxY2I2NjFkMzNiNWZiOGQ2ZmVhZGNiMzJjM7z1+uM=: 00:34:51.516 09:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:51.516 09:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:34:51.516 09:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:51.516 09:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:51.516 09:34:56 
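The recurring get_main_ns_ip block in the trace picks which address the host dials based on the transport: it keeps a small map of candidates (NVMF_FIRST_TARGET_IP for rdma, NVMF_INITIATOR_IP for tcp) and, since this run is tcp, ends up echoing 10.0.0.1 every time. Roughly, as a sketch of what the expanded trace shows; the name of the transport variable is an assumption (only its expanded value, tcp, is visible in the log), and the surrounding nvmf/common.sh logic is larger than this excerpt:

    local ip
    local -A ip_candidates=( [rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP )
    ip=${ip_candidates[$TEST_TRANSPORT]}    # tcp -> NVMF_INITIATOR_IP (variable name assumed)
    echo "${!ip}"                           # indirect expansion; resolves to 10.0.0.1 in this run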
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:51.516 09:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:51.516 09:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:51.516 09:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:51.516 09:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.516 09:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.516 09:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.516 09:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:51.516 09:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:51.516 09:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:51.516 09:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:51.516 09:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:51.516 09:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:51.516 09:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:51.516 09:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:51.516 09:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:51.516 09:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:51.516 09:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:51.516 09:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:51.516 09:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.516 09:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.775 nvme0n1 00:34:51.775 09:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.775 09:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:51.775 09:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:51.775 09:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.775 09:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.775 09:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.775 09:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:51.775 09:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:51.775 09:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.775 09:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.775 09:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.775 09:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:51.775 09:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:51.775 09:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:34:51.775 09:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:51.775 09:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:51.775 09:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:51.775 09:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:51.775 09:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTY3NDEwYTJiOTM3NjdiODQ0OWQzZmJjYzA5NjAwODOCVOl1: 00:34:51.775 09:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmMzNmJlMDZkZmFiZWQwZTc4NzBmMzM5OTA3YjhhZjIxNjM2ODFjODY3NzY4YmI2NTIxZDMyMGY0NWQyZDcwNClOKQE=: 00:34:51.775 09:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:51.775 09:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:51.775 09:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTY3NDEwYTJiOTM3NjdiODQ0OWQzZmJjYzA5NjAwODOCVOl1: 00:34:51.775 09:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmMzNmJlMDZkZmFiZWQwZTc4NzBmMzM5OTA3YjhhZjIxNjM2ODFjODY3NzY4YmI2NTIxZDMyMGY0NWQyZDcwNClOKQE=: ]] 00:34:51.775 09:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmMzNmJlMDZkZmFiZWQwZTc4NzBmMzM5OTA3YjhhZjIxNjM2ODFjODY3NzY4YmI2NTIxZDMyMGY0NWQyZDcwNClOKQE=: 00:34:51.775 09:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:34:51.775 09:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:51.775 09:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:51.775 09:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:51.775 09:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:51.775 09:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:51.775 09:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:51.775 09:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.775 09:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.775 09:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.775 09:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:51.775 09:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:51.775 09:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:51.775 09:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:51.775 09:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:51.775 09:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:51.775 09:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:51.775 09:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:51.775 09:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:51.775 09:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:51.775 09:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:51.775 09:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:51.775 09:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.775 09:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.342 nvme0n1 00:34:52.342 09:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.342 09:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:52.342 09:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:52.342 09:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.342 09:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.342 09:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.342 09:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:52.342 09:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:52.342 09:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.342 09:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.342 09:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.342 09:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:52.342 09:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:34:52.342 09:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:52.342 09:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:52.342 09:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:52.342 09:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:52.342 09:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTg1M2ZjMTcwOWUyMmU5OGNiMjJkMWE2ODFhYTAyMzliNmZjZDEzZjM3NTU2MWU1YdCrRw==: 00:34:52.342 09:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDNmMDQyYWI3ZDI3ODcxMWFjMjIxNGUxOTA4NmRiOGIxNTIyM2MxNjVmYTM3MTUzWUs3Gw==: 00:34:52.342 09:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:52.342 09:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:52.342 09:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZTg1M2ZjMTcwOWUyMmU5OGNiMjJkMWE2ODFhYTAyMzliNmZjZDEzZjM3NTU2MWU1YdCrRw==: 00:34:52.342 09:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDNmMDQyYWI3ZDI3ODcxMWFjMjIxNGUxOTA4NmRiOGIxNTIyM2MxNjVmYTM3MTUzWUs3Gw==: ]] 00:34:52.342 09:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDNmMDQyYWI3ZDI3ODcxMWFjMjIxNGUxOTA4NmRiOGIxNTIyM2MxNjVmYTM3MTUzWUs3Gw==: 00:34:52.342 09:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:34:52.342 09:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:52.342 09:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:52.342 09:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:52.342 09:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:52.342 09:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:52.342 09:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:52.342 09:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.342 09:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.343 09:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.343 09:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:52.343 09:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:52.343 09:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:52.343 09:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:52.343 09:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:52.343 09:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:52.343 09:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:52.343 09:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:52.343 09:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:52.343 09:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:52.343 09:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:52.343 09:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:52.343 09:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.343 09:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.909 nvme0n1 00:34:52.909 09:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.909 09:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:52.909 09:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.909 09:34:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:52.909 09:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.909 09:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.909 09:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:52.909 09:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:52.909 09:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.909 09:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.909 09:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.909 09:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:52.909 09:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:34:52.909 09:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:52.909 09:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:52.909 09:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:52.909 09:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:52.909 09:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWE4N2UyODUwMGRjYzQ4YThmNDkwYjQxM2ViYjI4NGNi8TS4: 00:34:52.909 09:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjUzM2NhMTg1ZjhhZWJjMzYyZmExMDU1ZWUyMmJhM2UFu8bR: 00:34:52.909 09:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:52.909 09:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:52.909 09:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWE4N2UyODUwMGRjYzQ4YThmNDkwYjQxM2ViYjI4NGNi8TS4: 00:34:52.909 09:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjUzM2NhMTg1ZjhhZWJjMzYyZmExMDU1ZWUyMmJhM2UFu8bR: ]] 00:34:52.909 09:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjUzM2NhMTg1ZjhhZWJjMzYyZmExMDU1ZWUyMmJhM2UFu8bR: 00:34:52.909 09:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:34:52.909 09:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:52.909 09:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:52.909 09:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:52.909 09:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:52.909 09:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:52.909 09:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:52.909 09:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.909 09:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.909 09:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.909 09:34:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:52.909 09:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:52.909 09:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:52.909 09:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:52.909 09:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:52.909 09:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:52.909 09:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:52.909 09:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:52.909 09:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:52.909 09:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:52.909 09:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:52.909 09:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:52.909 09:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.909 09:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.475 nvme0n1 00:34:53.475 09:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.475 09:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:53.475 09:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.475 09:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:53.475 09:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.475 09:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.475 09:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:53.475 09:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:53.475 09:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.475 09:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.475 09:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.475 09:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:53.475 09:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:34:53.475 09:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:53.475 09:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:53.475 09:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:53.475 09:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:53.475 09:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NThjZmExYjczOGYxNGRiMGVjN2ZjY2UyMzViMzI1YTU2MWVjMjE0YzAyNDhkMGEymDelkg==: 00:34:53.475 09:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2YxMDQzNzU3ZGNkY2E0ZDI3ZDBlOWZlZGIwZWYyOWF+2YFm: 00:34:53.475 09:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:53.475 09:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:53.475 09:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NThjZmExYjczOGYxNGRiMGVjN2ZjY2UyMzViMzI1YTU2MWVjMjE0YzAyNDhkMGEymDelkg==: 00:34:53.475 09:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2YxMDQzNzU3ZGNkY2E0ZDI3ZDBlOWZlZGIwZWYyOWF+2YFm: ]] 00:34:53.475 09:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2YxMDQzNzU3ZGNkY2E0ZDI3ZDBlOWZlZGIwZWYyOWF+2YFm: 00:34:53.475 09:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:34:53.475 09:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:53.475 09:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:53.475 09:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:53.475 09:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:53.475 09:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:53.475 09:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:53.475 09:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.475 09:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.475 09:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.475 09:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:53.475 09:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:53.475 09:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:53.475 09:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:53.475 09:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:53.475 09:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:53.475 09:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:53.475 09:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:53.475 09:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:53.475 09:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:53.475 09:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:53.475 09:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:53.475 09:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.475 
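Every secret echoed in the trace uses the NVMe in-band authentication representation DHHC-1:<hh>:<base64>:, where, per the NVMe secret-representation format, <hh> names the transformation applied to the secret (00 = none, 01/02/03 = SHA-256/384/512) and the base64 payload is the key material followed by a 4-byte CRC-32; between them the five slots in this run cover all four transform values. A purely illustrative way to pick one of the traced keys apart (the key literal is copied from the trace; nothing below is part of the suite):

    key='DHHC-1:00:ZTg1M2ZjMTcwOWUyMmU5OGNiMjJkMWE2ODFhYTAyMzliNmZjZDEzZjM3NTU2MWU1YdCrRw==:'
    IFS=: read -r tag transform blob _ <<< "$key"
    echo "format=$tag transform=$transform"       # 00 -> secret is used as-is
    echo -n "$blob" | base64 -d | wc -c           # 52 bytes = 48-byte secret + 4-byte CRC-32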
09:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.041 nvme0n1 00:34:54.041 09:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.041 09:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:54.041 09:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:54.041 09:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.041 09:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.041 09:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.041 09:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:54.041 09:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:54.041 09:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.041 09:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.299 09:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.299 09:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:54.299 09:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:34:54.299 09:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:54.299 09:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:54.299 09:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:54.299 09:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:54.299 09:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmU0Zjc4OTY4Mjk0NjQzZTYxYTcwYjkyYjEzMTVmNDFlZjFiMmIxY2I2NjFkMzNiNWZiOGQ2ZmVhZGNiMzJjM7z1+uM=: 00:34:54.299 09:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:54.299 09:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:54.300 09:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:54.300 09:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmU0Zjc4OTY4Mjk0NjQzZTYxYTcwYjkyYjEzMTVmNDFlZjFiMmIxY2I2NjFkMzNiNWZiOGQ2ZmVhZGNiMzJjM7z1+uM=: 00:34:54.300 09:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:54.300 09:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:34:54.300 09:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:54.300 09:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:54.300 09:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:54.300 09:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:54.300 09:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:54.300 09:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:54.300 09:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.300 09:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.300 09:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.300 09:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:54.300 09:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:54.300 09:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:54.300 09:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:54.300 09:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:54.300 09:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:54.300 09:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:54.300 09:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:54.300 09:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:54.300 09:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:54.300 09:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:54.300 09:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:54.300 09:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.300 09:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.865 nvme0n1 00:34:54.865 09:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.865 09:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:54.865 09:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.865 09:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.865 09:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:54.865 09:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.865 09:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:54.865 09:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:54.865 09:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.865 09:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.865 09:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.865 09:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:54.865 09:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:54.865 09:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:34:54.865 09:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:54.865 09:34:59 
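Key slot 4, traced just above, has an empty controller key (ckey=''), so that pass exercises unidirectional authentication: only the host proves its identity. The script handles this with bash's ':+' expansion at host/auth.sh@58, so the --dhchap-ctrlr-key argument is only produced when a controller key exists. The same idiom in isolation (the ckeys contents here are placeholders, not the suite's real secrets):

    ckeys=([1]="ckey1" [4]="")                                       # slot 4: no controller key
    for keyid in 1 4; do
        args=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "slot $keyid extra args: ${args[*]:-(none)}"            # slot 4 -> (none), i.e. one-way auth
    done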
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:54.865 09:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:54.865 09:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:54.865 09:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTY3NDEwYTJiOTM3NjdiODQ0OWQzZmJjYzA5NjAwODOCVOl1: 00:34:54.865 09:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmMzNmJlMDZkZmFiZWQwZTc4NzBmMzM5OTA3YjhhZjIxNjM2ODFjODY3NzY4YmI2NTIxZDMyMGY0NWQyZDcwNClOKQE=: 00:34:54.865 09:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:54.865 09:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:54.865 09:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTY3NDEwYTJiOTM3NjdiODQ0OWQzZmJjYzA5NjAwODOCVOl1: 00:34:54.865 09:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmMzNmJlMDZkZmFiZWQwZTc4NzBmMzM5OTA3YjhhZjIxNjM2ODFjODY3NzY4YmI2NTIxZDMyMGY0NWQyZDcwNClOKQE=: ]] 00:34:54.865 09:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmMzNmJlMDZkZmFiZWQwZTc4NzBmMzM5OTA3YjhhZjIxNjM2ODFjODY3NzY4YmI2NTIxZDMyMGY0NWQyZDcwNClOKQE=: 00:34:54.865 09:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:34:54.865 09:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:54.865 09:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:54.865 09:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:54.865 09:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:54.865 09:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:54.865 09:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:54.865 09:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.865 09:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.865 09:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.865 09:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:54.865 09:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:54.865 09:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:54.865 09:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:54.865 09:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:54.865 09:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:54.865 09:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:54.865 09:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:54.865 09:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:54.865 09:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:54.865 09:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:54.865 09:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:54.865 09:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.865 09:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.798 nvme0n1 00:34:55.799 09:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.799 09:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:55.799 09:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:55.799 09:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.799 09:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.799 09:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.799 09:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:55.799 09:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:55.799 09:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.799 09:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.799 09:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.799 09:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:55.799 09:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:34:55.799 09:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:55.799 09:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:55.799 09:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:55.799 09:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:55.799 09:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTg1M2ZjMTcwOWUyMmU5OGNiMjJkMWE2ODFhYTAyMzliNmZjZDEzZjM3NTU2MWU1YdCrRw==: 00:34:55.799 09:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDNmMDQyYWI3ZDI3ODcxMWFjMjIxNGUxOTA4NmRiOGIxNTIyM2MxNjVmYTM3MTUzWUs3Gw==: 00:34:55.799 09:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:55.799 09:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:55.799 09:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTg1M2ZjMTcwOWUyMmU5OGNiMjJkMWE2ODFhYTAyMzliNmZjZDEzZjM3NTU2MWU1YdCrRw==: 00:34:55.799 09:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDNmMDQyYWI3ZDI3ODcxMWFjMjIxNGUxOTA4NmRiOGIxNTIyM2MxNjVmYTM3MTUzWUs3Gw==: ]] 00:34:55.799 09:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDNmMDQyYWI3ZDI3ODcxMWFjMjIxNGUxOTA4NmRiOGIxNTIyM2MxNjVmYTM3MTUzWUs3Gw==: 00:34:55.799 09:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:34:55.799 09:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:55.799 09:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:55.799 09:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:55.799 09:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:55.799 09:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:55.799 09:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:55.799 09:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.799 09:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.799 09:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.799 09:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:55.799 09:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:55.799 09:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:55.799 09:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:55.799 09:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:55.799 09:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:55.799 09:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:55.799 09:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:55.799 09:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:55.799 09:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:55.799 09:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:55.799 09:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:55.799 09:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.799 09:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.761 nvme0n1 00:34:56.761 09:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.761 09:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:56.761 09:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.761 09:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.761 09:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:56.761 09:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.761 09:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:56.762 09:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:56.762 09:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:34:56.762 09:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.762 09:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.762 09:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:56.762 09:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:34:56.762 09:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:56.762 09:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:56.762 09:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:56.762 09:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:56.762 09:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWE4N2UyODUwMGRjYzQ4YThmNDkwYjQxM2ViYjI4NGNi8TS4: 00:34:56.762 09:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjUzM2NhMTg1ZjhhZWJjMzYyZmExMDU1ZWUyMmJhM2UFu8bR: 00:34:56.762 09:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:56.762 09:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:56.762 09:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWE4N2UyODUwMGRjYzQ4YThmNDkwYjQxM2ViYjI4NGNi8TS4: 00:34:56.762 09:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjUzM2NhMTg1ZjhhZWJjMzYyZmExMDU1ZWUyMmJhM2UFu8bR: ]] 00:34:56.762 09:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjUzM2NhMTg1ZjhhZWJjMzYyZmExMDU1ZWUyMmJhM2UFu8bR: 00:34:56.762 09:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:34:56.762 09:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:56.762 09:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:56.762 09:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:56.762 09:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:56.762 09:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:56.762 09:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:56.762 09:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.762 09:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.762 09:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.762 09:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:56.762 09:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:56.762 09:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:56.762 09:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:56.762 09:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:56.762 09:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:56.762 
09:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:56.762 09:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:56.762 09:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:56.762 09:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:56.762 09:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:56.762 09:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:56.762 09:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.762 09:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.137 nvme0n1 00:34:58.137 09:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.137 09:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:58.137 09:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:58.137 09:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.137 09:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.137 09:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.137 09:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:58.137 09:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:58.137 09:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.137 09:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.137 09:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.137 09:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:58.137 09:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:34:58.137 09:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:58.137 09:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:58.137 09:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:58.137 09:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:58.137 09:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NThjZmExYjczOGYxNGRiMGVjN2ZjY2UyMzViMzI1YTU2MWVjMjE0YzAyNDhkMGEymDelkg==: 00:34:58.137 09:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2YxMDQzNzU3ZGNkY2E0ZDI3ZDBlOWZlZGIwZWYyOWF+2YFm: 00:34:58.137 09:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:58.137 09:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:58.137 09:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NThjZmExYjczOGYxNGRiMGVjN2ZjY2UyMzViMzI1YTU2MWVjMjE0YzAyNDhkMGEymDelkg==: 00:34:58.137 09:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:M2YxMDQzNzU3ZGNkY2E0ZDI3ZDBlOWZlZGIwZWYyOWF+2YFm: ]] 00:34:58.137 09:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2YxMDQzNzU3ZGNkY2E0ZDI3ZDBlOWZlZGIwZWYyOWF+2YFm: 00:34:58.137 09:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:34:58.137 09:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:58.137 09:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:58.137 09:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:58.137 09:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:58.137 09:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:58.137 09:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:58.137 09:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.138 09:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.138 09:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.138 09:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:58.138 09:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:58.138 09:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:58.138 09:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:58.138 09:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:58.138 09:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:58.138 09:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:58.138 09:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:58.138 09:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:58.138 09:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:58.138 09:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:58.138 09:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:58.138 09:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.138 09:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.073 nvme0n1 00:34:59.073 09:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.073 09:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:59.073 09:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:59.073 09:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.073 09:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.073 09:35:03 
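The block of ip_candidates lines repeated before every attach is nvmf/common.sh's get_main_ns_ip choosing the address to dial: it maps the transport (expanded to tcp in this trace; called TEST_TRANSPORT in the sketch below) to the name of the variable holding the address, then expands that name indirectly, which is why the helper resolves to 10.0.0.1 here. A condensed sketch of that logic; the real helper, per the @775-783 branches traced above, also has fallbacks for unset variables, which are omitted:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
        ip=${ip_candidates[$TEST_TRANSPORT]}     # e.g. NVMF_INITIATOR_IP for tcp
        echo "${!ip}"                            # indirect expansion -> 10.0.0.1 in this run
    }
    TEST_TRANSPORT=tcp NVMF_INITIATOR_IP=10.0.0.1 get_main_ns_ip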
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.073 09:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:59.073 09:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:59.073 09:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.073 09:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.073 09:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.073 09:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:59.073 09:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:34:59.073 09:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:59.073 09:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:59.073 09:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:59.073 09:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:59.073 09:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmU0Zjc4OTY4Mjk0NjQzZTYxYTcwYjkyYjEzMTVmNDFlZjFiMmIxY2I2NjFkMzNiNWZiOGQ2ZmVhZGNiMzJjM7z1+uM=: 00:34:59.073 09:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:59.073 09:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:59.073 09:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:59.073 09:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmU0Zjc4OTY4Mjk0NjQzZTYxYTcwYjkyYjEzMTVmNDFlZjFiMmIxY2I2NjFkMzNiNWZiOGQ2ZmVhZGNiMzJjM7z1+uM=: 00:34:59.073 09:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:59.073 09:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:34:59.073 09:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:59.073 09:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:59.073 09:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:59.073 09:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:59.073 09:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:59.073 09:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:59.073 09:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.073 09:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.073 09:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.073 09:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:59.073 09:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:59.073 09:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:59.073 09:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:59.073 09:35:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:59.073 09:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:59.073 09:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:59.073 09:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:59.073 09:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:59.073 09:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:59.073 09:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:59.073 09:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:59.073 09:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.073 09:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.096 nvme0n1 00:35:00.096 09:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.096 09:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:00.096 09:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.096 09:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.096 09:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:00.096 09:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.096 09:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:00.096 09:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:00.096 09:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.096 09:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.096 09:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.096 09:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:35:00.096 09:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:00.096 09:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:00.097 09:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:35:00.097 09:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:00.097 09:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:00.097 09:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:00.097 09:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:00.097 09:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTY3NDEwYTJiOTM3NjdiODQ0OWQzZmJjYzA5NjAwODOCVOl1: 00:35:00.097 09:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZmMzNmJlMDZkZmFiZWQwZTc4NzBmMzM5OTA3YjhhZjIxNjM2ODFjODY3NzY4YmI2NTIxZDMyMGY0NWQyZDcwNClOKQE=: 00:35:00.097 09:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:00.097 09:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:00.097 09:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTY3NDEwYTJiOTM3NjdiODQ0OWQzZmJjYzA5NjAwODOCVOl1: 00:35:00.097 09:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmMzNmJlMDZkZmFiZWQwZTc4NzBmMzM5OTA3YjhhZjIxNjM2ODFjODY3NzY4YmI2NTIxZDMyMGY0NWQyZDcwNClOKQE=: ]] 00:35:00.097 09:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmMzNmJlMDZkZmFiZWQwZTc4NzBmMzM5OTA3YjhhZjIxNjM2ODFjODY3NzY4YmI2NTIxZDMyMGY0NWQyZDcwNClOKQE=: 00:35:00.097 09:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:35:00.097 09:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:00.097 09:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:00.097 09:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:00.097 09:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:00.097 09:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:00.097 09:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:00.097 09:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.097 09:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.097 09:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.097 09:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:00.097 09:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:00.097 09:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:00.097 09:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:00.097 09:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:00.097 09:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:00.097 09:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:00.097 09:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:00.097 09:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:00.097 09:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:00.097 09:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:00.097 09:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:00.097 09:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.097 09:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:35:00.097 nvme0n1 00:35:00.097 09:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.097 09:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:00.097 09:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.097 09:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:00.097 09:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.097 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.097 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:00.097 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:00.097 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.097 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.097 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.097 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:00.097 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:35:00.097 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:00.097 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:00.097 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:00.097 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:00.097 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTg1M2ZjMTcwOWUyMmU5OGNiMjJkMWE2ODFhYTAyMzliNmZjZDEzZjM3NTU2MWU1YdCrRw==: 00:35:00.097 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDNmMDQyYWI3ZDI3ODcxMWFjMjIxNGUxOTA4NmRiOGIxNTIyM2MxNjVmYTM3MTUzWUs3Gw==: 00:35:00.097 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:00.097 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:00.097 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTg1M2ZjMTcwOWUyMmU5OGNiMjJkMWE2ODFhYTAyMzliNmZjZDEzZjM3NTU2MWU1YdCrRw==: 00:35:00.097 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDNmMDQyYWI3ZDI3ODcxMWFjMjIxNGUxOTA4NmRiOGIxNTIyM2MxNjVmYTM3MTUzWUs3Gw==: ]] 00:35:00.097 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDNmMDQyYWI3ZDI3ODcxMWFjMjIxNGUxOTA4NmRiOGIxNTIyM2MxNjVmYTM3MTUzWUs3Gw==: 00:35:00.097 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:35:00.097 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:00.097 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:00.097 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:00.097 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:00.097 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:35:00.097 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:00.097 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.097 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.097 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.097 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:00.097 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:00.097 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:00.097 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:00.097 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:00.097 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:00.097 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:00.097 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:00.097 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:00.097 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:00.097 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:00.097 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:00.097 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.097 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.373 nvme0n1 00:35:00.373 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.373 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:00.373 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:00.373 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.373 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.373 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.373 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:00.373 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:00.373 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.373 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.373 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.373 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:00.373 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:35:00.373 
09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:00.373 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:00.373 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:00.373 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:00.373 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWE4N2UyODUwMGRjYzQ4YThmNDkwYjQxM2ViYjI4NGNi8TS4: 00:35:00.373 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjUzM2NhMTg1ZjhhZWJjMzYyZmExMDU1ZWUyMmJhM2UFu8bR: 00:35:00.373 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:00.373 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:00.373 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWE4N2UyODUwMGRjYzQ4YThmNDkwYjQxM2ViYjI4NGNi8TS4: 00:35:00.373 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjUzM2NhMTg1ZjhhZWJjMzYyZmExMDU1ZWUyMmJhM2UFu8bR: ]] 00:35:00.373 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjUzM2NhMTg1ZjhhZWJjMzYyZmExMDU1ZWUyMmJhM2UFu8bR: 00:35:00.374 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:35:00.374 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:00.374 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:00.374 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:00.374 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:00.374 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:00.374 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:00.374 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.374 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.374 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.374 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:00.374 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:00.374 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:00.374 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:00.374 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:00.374 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:00.374 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:00.374 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:00.374 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:00.374 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:00.374 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:00.374 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:00.374 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.374 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.632 nvme0n1 00:35:00.632 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.632 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:00.632 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:00.632 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.632 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.632 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.632 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:00.632 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:00.632 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.632 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.632 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.632 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:00.632 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:35:00.632 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:00.632 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:00.632 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:00.632 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:00.632 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NThjZmExYjczOGYxNGRiMGVjN2ZjY2UyMzViMzI1YTU2MWVjMjE0YzAyNDhkMGEymDelkg==: 00:35:00.632 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2YxMDQzNzU3ZGNkY2E0ZDI3ZDBlOWZlZGIwZWYyOWF+2YFm: 00:35:00.632 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:00.632 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:00.632 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NThjZmExYjczOGYxNGRiMGVjN2ZjY2UyMzViMzI1YTU2MWVjMjE0YzAyNDhkMGEymDelkg==: 00:35:00.632 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2YxMDQzNzU3ZGNkY2E0ZDI3ZDBlOWZlZGIwZWYyOWF+2YFm: ]] 00:35:00.632 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2YxMDQzNzU3ZGNkY2E0ZDI3ZDBlOWZlZGIwZWYyOWF+2YFm: 00:35:00.632 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:35:00.632 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:00.632 
09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:00.632 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:00.632 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:00.632 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:00.632 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:00.632 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.632 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.632 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.632 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:00.632 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:00.632 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:00.632 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:00.632 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:00.632 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:00.632 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:00.632 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:00.632 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:00.632 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:00.632 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:00.632 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:00.632 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.632 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.891 nvme0n1 00:35:00.891 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.891 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:00.891 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:00.891 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.891 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.891 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.891 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:00.891 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:00.891 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.891 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:35:00.891 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.891 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:00.891 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:35:00.891 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:00.891 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:00.891 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:00.891 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:00.891 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmU0Zjc4OTY4Mjk0NjQzZTYxYTcwYjkyYjEzMTVmNDFlZjFiMmIxY2I2NjFkMzNiNWZiOGQ2ZmVhZGNiMzJjM7z1+uM=: 00:35:00.891 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:00.891 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:00.891 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:00.891 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmU0Zjc4OTY4Mjk0NjQzZTYxYTcwYjkyYjEzMTVmNDFlZjFiMmIxY2I2NjFkMzNiNWZiOGQ2ZmVhZGNiMzJjM7z1+uM=: 00:35:00.891 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:00.891 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:35:00.891 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:00.891 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:00.891 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:00.891 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:00.891 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:00.891 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:00.891 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.891 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.891 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.891 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:00.891 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:00.891 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:00.891 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:00.891 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:00.891 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:00.891 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:00.891 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:00.891 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:00.891 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:00.891 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:00.891 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:00.891 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.891 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.150 nvme0n1 00:35:01.150 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.150 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:01.150 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.150 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:01.150 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.150 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.150 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:01.150 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:01.150 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.150 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.150 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.150 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:01.150 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:01.150 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:35:01.150 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:01.150 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:01.150 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:01.150 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:01.150 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTY3NDEwYTJiOTM3NjdiODQ0OWQzZmJjYzA5NjAwODOCVOl1: 00:35:01.150 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmMzNmJlMDZkZmFiZWQwZTc4NzBmMzM5OTA3YjhhZjIxNjM2ODFjODY3NzY4YmI2NTIxZDMyMGY0NWQyZDcwNClOKQE=: 00:35:01.150 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:01.150 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:01.150 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTY3NDEwYTJiOTM3NjdiODQ0OWQzZmJjYzA5NjAwODOCVOl1: 00:35:01.150 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmMzNmJlMDZkZmFiZWQwZTc4NzBmMzM5OTA3YjhhZjIxNjM2ODFjODY3NzY4YmI2NTIxZDMyMGY0NWQyZDcwNClOKQE=: ]] 00:35:01.150 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZmMzNmJlMDZkZmFiZWQwZTc4NzBmMzM5OTA3YjhhZjIxNjM2ODFjODY3NzY4YmI2NTIxZDMyMGY0NWQyZDcwNClOKQE=: 00:35:01.150 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:35:01.150 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:01.150 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:01.150 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:01.150 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:01.150 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:01.150 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:01.150 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.150 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.150 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.150 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:01.150 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:01.150 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:01.150 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:01.150 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:01.150 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:01.150 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:01.150 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:01.150 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:01.150 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:01.150 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:01.150 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:01.150 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.150 09:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.408 nvme0n1 00:35:01.408 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.408 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:01.408 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:01.408 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.408 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.408 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.408 
09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:01.408 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:01.408 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.408 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.408 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.408 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:01.408 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:35:01.408 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:01.408 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:01.408 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:01.408 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:01.408 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTg1M2ZjMTcwOWUyMmU5OGNiMjJkMWE2ODFhYTAyMzliNmZjZDEzZjM3NTU2MWU1YdCrRw==: 00:35:01.408 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDNmMDQyYWI3ZDI3ODcxMWFjMjIxNGUxOTA4NmRiOGIxNTIyM2MxNjVmYTM3MTUzWUs3Gw==: 00:35:01.408 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:01.408 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:01.408 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTg1M2ZjMTcwOWUyMmU5OGNiMjJkMWE2ODFhYTAyMzliNmZjZDEzZjM3NTU2MWU1YdCrRw==: 00:35:01.408 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDNmMDQyYWI3ZDI3ODcxMWFjMjIxNGUxOTA4NmRiOGIxNTIyM2MxNjVmYTM3MTUzWUs3Gw==: ]] 00:35:01.408 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDNmMDQyYWI3ZDI3ODcxMWFjMjIxNGUxOTA4NmRiOGIxNTIyM2MxNjVmYTM3MTUzWUs3Gw==: 00:35:01.409 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:35:01.409 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:01.409 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:01.409 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:01.409 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:01.409 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:01.409 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:01.409 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.409 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.409 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.409 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:01.409 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:01.409 09:35:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:01.409 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:01.409 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:01.409 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:01.409 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:01.409 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:01.409 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:01.409 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:01.409 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:01.409 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:01.409 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.409 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.667 nvme0n1 00:35:01.667 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.667 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:01.667 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.667 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.667 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:01.667 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.667 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:01.667 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:01.667 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.667 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.667 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.667 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:01.667 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:35:01.667 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:01.667 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:01.667 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:01.667 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:01.667 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWE4N2UyODUwMGRjYzQ4YThmNDkwYjQxM2ViYjI4NGNi8TS4: 00:35:01.667 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjUzM2NhMTg1ZjhhZWJjMzYyZmExMDU1ZWUyMmJhM2UFu8bR: 00:35:01.667 09:35:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:01.667 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:01.667 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWE4N2UyODUwMGRjYzQ4YThmNDkwYjQxM2ViYjI4NGNi8TS4: 00:35:01.667 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjUzM2NhMTg1ZjhhZWJjMzYyZmExMDU1ZWUyMmJhM2UFu8bR: ]] 00:35:01.667 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjUzM2NhMTg1ZjhhZWJjMzYyZmExMDU1ZWUyMmJhM2UFu8bR: 00:35:01.667 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:35:01.667 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:01.667 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:01.667 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:01.667 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:01.667 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:01.667 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:01.667 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.667 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.667 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.667 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:01.667 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:01.667 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:01.667 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:01.667 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:01.667 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:01.667 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:01.667 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:01.667 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:01.667 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:01.667 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:01.667 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:01.667 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.667 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.926 nvme0n1 00:35:01.926 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.926 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:01.926 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.926 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.926 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:01.926 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.926 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:01.926 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:01.926 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.926 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.926 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.926 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:01.926 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:35:01.926 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:01.926 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:01.926 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:01.926 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:01.926 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NThjZmExYjczOGYxNGRiMGVjN2ZjY2UyMzViMzI1YTU2MWVjMjE0YzAyNDhkMGEymDelkg==: 00:35:01.926 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2YxMDQzNzU3ZGNkY2E0ZDI3ZDBlOWZlZGIwZWYyOWF+2YFm: 00:35:01.926 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:01.926 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:01.926 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NThjZmExYjczOGYxNGRiMGVjN2ZjY2UyMzViMzI1YTU2MWVjMjE0YzAyNDhkMGEymDelkg==: 00:35:01.926 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2YxMDQzNzU3ZGNkY2E0ZDI3ZDBlOWZlZGIwZWYyOWF+2YFm: ]] 00:35:01.926 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2YxMDQzNzU3ZGNkY2E0ZDI3ZDBlOWZlZGIwZWYyOWF+2YFm: 00:35:01.926 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:35:01.926 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:01.926 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:01.926 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:01.926 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:01.926 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:01.926 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:01.926 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.926 09:35:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.926 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.926 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:01.926 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:01.926 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:01.926 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:01.926 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:01.926 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:01.926 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:01.926 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:01.926 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:01.926 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:01.926 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:01.926 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:01.926 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.926 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.185 nvme0n1 00:35:02.185 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.185 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:02.185 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.185 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.185 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:02.185 09:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.185 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:02.185 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:02.185 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.185 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.185 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.185 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:02.185 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:35:02.185 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:02.185 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:02.185 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:02.185 
09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:02.185 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmU0Zjc4OTY4Mjk0NjQzZTYxYTcwYjkyYjEzMTVmNDFlZjFiMmIxY2I2NjFkMzNiNWZiOGQ2ZmVhZGNiMzJjM7z1+uM=: 00:35:02.185 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:02.185 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:02.185 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:02.185 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmU0Zjc4OTY4Mjk0NjQzZTYxYTcwYjkyYjEzMTVmNDFlZjFiMmIxY2I2NjFkMzNiNWZiOGQ2ZmVhZGNiMzJjM7z1+uM=: 00:35:02.185 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:02.185 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:35:02.185 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:02.185 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:02.185 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:02.185 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:02.185 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:02.185 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:02.185 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.185 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.185 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.185 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:02.185 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:02.185 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:02.185 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:02.185 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:02.185 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:02.185 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:02.185 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:02.185 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:02.185 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:02.185 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:02.185 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:02.185 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.185 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
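
The trace above and below is the sha512 leg of the nvmf_auth_host key matrix: for each DH group (ffdhe2048, ffdhe3072, ffdhe4096) and each key index 0 through 4, nvmet_auth_set_key provisions the DHHC-1 secret for the host NQN on the kernel nvmet target, connect_authenticate restricts the SPDK initiator to that digest and DH group via bdev_nvme_set_options, attaches a controller with the matching --dhchap-key/--dhchap-ctrlr-key pair, checks that nvme0 shows up in bdev_nvme_get_controllers, and detaches it before the next combination. A minimal stand-alone sketch of one such pass follows; the scripts/rpc.py path, the nvmet configfs attribute names, and the placeholder secrets are assumptions for illustration, and the key2/ckey2 names are assumed to be already registered with SPDK.

#!/usr/bin/env bash
# Minimal sketch of one sha512 pass of the auth loop traced in this log.
# Assumptions: scripts/rpc.py is the SPDK RPC client, the nvmet configfs
# attribute names below match the kernel nvmet auth support, the DHHC-1
# secrets are placeholders, and key2/ckey2 are already registered with SPDK.
set -e

rpc=scripts/rpc.py                        # assumed path to the SPDK RPC client
hostnqn=nqn.2024-02.io.spdk:host0
subnqn=nqn.2024-02.io.spdk:cnode0
digest=sha512
dhgroup=ffdhe2048
keyid=2

# Target side (what nvmet_auth_set_key does): publish the hash, DH group and
# DHHC-1 secrets for this host through nvmet configfs (attribute names assumed).
host_cfs=/sys/kernel/config/nvmet/hosts/${hostnqn}
echo "hmac(${digest})"                  > "${host_cfs}/dhchap_hash"
echo "${dhgroup}"                       > "${host_cfs}/dhchap_dhgroup"
echo "DHHC-1:01:<key${keyid}-secret>:"  > "${host_cfs}/dhchap_key"
echo "DHHC-1:01:<ckey${keyid}-secret>:" > "${host_cfs}/dhchap_ctrl_key"

# Host side (what connect_authenticate does): restrict bdev_nvme to the
# digest/dhgroup under test, then attach with the matching key pair.
$rpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
     -q "$hostnqn" -n "$subnqn" \
     --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

# Verify the controller authenticated and is visible, then detach before the
# next dhgroup/keyid combination.
[[ $($rpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
$rpc bdev_nvme_detach_controller nvme0

In the trace itself the same RPCs run through the suite's rpc_cmd wrapper, and get_main_ns_ip is the helper that resolves the 10.0.0.1 address passed to -a on the attach call.
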
00:35:02.444 nvme0n1 00:35:02.444 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.444 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:02.444 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.444 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:02.444 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.444 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.444 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:02.444 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:02.444 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.444 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.444 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.444 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:02.444 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:02.444 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:35:02.444 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:02.444 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:02.444 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:02.444 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:02.444 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTY3NDEwYTJiOTM3NjdiODQ0OWQzZmJjYzA5NjAwODOCVOl1: 00:35:02.444 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmMzNmJlMDZkZmFiZWQwZTc4NzBmMzM5OTA3YjhhZjIxNjM2ODFjODY3NzY4YmI2NTIxZDMyMGY0NWQyZDcwNClOKQE=: 00:35:02.444 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:02.444 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:02.444 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTY3NDEwYTJiOTM3NjdiODQ0OWQzZmJjYzA5NjAwODOCVOl1: 00:35:02.444 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmMzNmJlMDZkZmFiZWQwZTc4NzBmMzM5OTA3YjhhZjIxNjM2ODFjODY3NzY4YmI2NTIxZDMyMGY0NWQyZDcwNClOKQE=: ]] 00:35:02.444 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmMzNmJlMDZkZmFiZWQwZTc4NzBmMzM5OTA3YjhhZjIxNjM2ODFjODY3NzY4YmI2NTIxZDMyMGY0NWQyZDcwNClOKQE=: 00:35:02.444 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:35:02.444 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:02.444 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:02.444 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:02.444 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:02.444 09:35:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:02.444 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:02.444 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.444 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.444 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.444 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:02.444 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:02.444 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:02.444 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:02.444 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:02.444 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:02.444 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:02.444 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:02.444 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:02.444 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:02.444 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:02.444 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:02.444 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.444 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.701 nvme0n1 00:35:02.701 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.701 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:02.701 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:02.701 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.701 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.701 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.701 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:02.701 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:02.701 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.701 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.701 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.701 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:02.701 09:35:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:35:02.701 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:02.701 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:02.701 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:02.701 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:02.701 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTg1M2ZjMTcwOWUyMmU5OGNiMjJkMWE2ODFhYTAyMzliNmZjZDEzZjM3NTU2MWU1YdCrRw==: 00:35:02.701 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDNmMDQyYWI3ZDI3ODcxMWFjMjIxNGUxOTA4NmRiOGIxNTIyM2MxNjVmYTM3MTUzWUs3Gw==: 00:35:02.701 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:02.701 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:02.701 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTg1M2ZjMTcwOWUyMmU5OGNiMjJkMWE2ODFhYTAyMzliNmZjZDEzZjM3NTU2MWU1YdCrRw==: 00:35:02.701 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDNmMDQyYWI3ZDI3ODcxMWFjMjIxNGUxOTA4NmRiOGIxNTIyM2MxNjVmYTM3MTUzWUs3Gw==: ]] 00:35:02.701 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDNmMDQyYWI3ZDI3ODcxMWFjMjIxNGUxOTA4NmRiOGIxNTIyM2MxNjVmYTM3MTUzWUs3Gw==: 00:35:02.701 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:35:02.701 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:02.701 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:02.701 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:02.701 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:02.701 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:02.701 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:02.701 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.701 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.701 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.701 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:02.701 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:02.701 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:02.701 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:02.701 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:02.701 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:02.701 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:02.701 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:02.701 09:35:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:02.701 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:02.701 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:02.702 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:02.702 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.702 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.959 nvme0n1 00:35:02.959 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.217 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:03.217 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:03.217 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.217 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.217 09:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.217 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:03.217 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:03.217 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.217 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.217 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.217 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:03.217 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:35:03.217 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:03.217 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:03.217 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:03.217 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:03.217 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWE4N2UyODUwMGRjYzQ4YThmNDkwYjQxM2ViYjI4NGNi8TS4: 00:35:03.217 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjUzM2NhMTg1ZjhhZWJjMzYyZmExMDU1ZWUyMmJhM2UFu8bR: 00:35:03.217 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:03.217 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:03.217 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWE4N2UyODUwMGRjYzQ4YThmNDkwYjQxM2ViYjI4NGNi8TS4: 00:35:03.217 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjUzM2NhMTg1ZjhhZWJjMzYyZmExMDU1ZWUyMmJhM2UFu8bR: ]] 00:35:03.217 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjUzM2NhMTg1ZjhhZWJjMzYyZmExMDU1ZWUyMmJhM2UFu8bR: 00:35:03.217 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:35:03.217 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:03.217 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:03.217 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:03.217 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:03.217 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:03.217 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:03.217 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.217 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.217 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.217 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:03.217 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:03.217 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:03.217 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:03.217 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:03.217 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:03.217 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:03.217 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:03.217 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:03.217 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:03.217 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:03.217 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:03.217 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.217 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.475 nvme0n1 00:35:03.475 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.475 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:03.475 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:03.475 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.475 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.475 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.475 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:03.475 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:35:03.475 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.475 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.475 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.475 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:03.475 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:35:03.475 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:03.475 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:03.475 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:03.475 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:03.475 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NThjZmExYjczOGYxNGRiMGVjN2ZjY2UyMzViMzI1YTU2MWVjMjE0YzAyNDhkMGEymDelkg==: 00:35:03.475 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2YxMDQzNzU3ZGNkY2E0ZDI3ZDBlOWZlZGIwZWYyOWF+2YFm: 00:35:03.475 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:03.475 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:03.475 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NThjZmExYjczOGYxNGRiMGVjN2ZjY2UyMzViMzI1YTU2MWVjMjE0YzAyNDhkMGEymDelkg==: 00:35:03.475 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2YxMDQzNzU3ZGNkY2E0ZDI3ZDBlOWZlZGIwZWYyOWF+2YFm: ]] 00:35:03.475 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2YxMDQzNzU3ZGNkY2E0ZDI3ZDBlOWZlZGIwZWYyOWF+2YFm: 00:35:03.475 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:35:03.475 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:03.475 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:03.475 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:03.475 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:03.475 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:03.475 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:03.475 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.475 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.475 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.475 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:03.475 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:03.475 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:03.475 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:03.475 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:03.475 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:03.475 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:03.475 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:03.475 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:03.475 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:03.475 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:03.475 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:03.475 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.475 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.733 nvme0n1 00:35:03.733 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.733 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:03.733 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.733 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:03.733 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.733 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.991 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:03.991 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:03.991 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.991 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.991 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.991 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:03.991 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:35:03.991 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:03.991 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:03.991 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:03.991 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:03.991 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmU0Zjc4OTY4Mjk0NjQzZTYxYTcwYjkyYjEzMTVmNDFlZjFiMmIxY2I2NjFkMzNiNWZiOGQ2ZmVhZGNiMzJjM7z1+uM=: 00:35:03.991 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:03.991 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:03.991 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:03.991 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZmU0Zjc4OTY4Mjk0NjQzZTYxYTcwYjkyYjEzMTVmNDFlZjFiMmIxY2I2NjFkMzNiNWZiOGQ2ZmVhZGNiMzJjM7z1+uM=: 00:35:03.991 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:03.991 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:35:03.991 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:03.991 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:03.991 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:03.991 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:03.991 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:03.991 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:03.991 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.991 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.991 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.991 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:03.991 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:03.991 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:03.991 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:03.991 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:03.991 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:03.991 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:03.991 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:03.991 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:03.991 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:03.992 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:03.992 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:03.992 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.992 09:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.251 nvme0n1 00:35:04.251 09:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.251 09:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:04.251 09:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.251 09:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.251 09:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:04.251 09:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.251 09:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:04.251 09:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:04.251 09:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.251 09:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.251 09:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.251 09:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:04.251 09:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:04.251 09:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:35:04.251 09:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:04.251 09:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:04.251 09:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:04.251 09:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:04.251 09:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTY3NDEwYTJiOTM3NjdiODQ0OWQzZmJjYzA5NjAwODOCVOl1: 00:35:04.251 09:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmMzNmJlMDZkZmFiZWQwZTc4NzBmMzM5OTA3YjhhZjIxNjM2ODFjODY3NzY4YmI2NTIxZDMyMGY0NWQyZDcwNClOKQE=: 00:35:04.251 09:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:04.251 09:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:04.251 09:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTY3NDEwYTJiOTM3NjdiODQ0OWQzZmJjYzA5NjAwODOCVOl1: 00:35:04.251 09:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmMzNmJlMDZkZmFiZWQwZTc4NzBmMzM5OTA3YjhhZjIxNjM2ODFjODY3NzY4YmI2NTIxZDMyMGY0NWQyZDcwNClOKQE=: ]] 00:35:04.251 09:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmMzNmJlMDZkZmFiZWQwZTc4NzBmMzM5OTA3YjhhZjIxNjM2ODFjODY3NzY4YmI2NTIxZDMyMGY0NWQyZDcwNClOKQE=: 00:35:04.251 09:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:35:04.251 09:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:04.251 09:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:04.251 09:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:04.251 09:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:04.251 09:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:04.251 09:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:04.251 09:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.251 09:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.251 09:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.251 09:35:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:04.251 09:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:04.251 09:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:04.251 09:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:04.251 09:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:04.251 09:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:04.251 09:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:04.251 09:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:04.251 09:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:04.251 09:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:04.251 09:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:04.251 09:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:04.251 09:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.251 09:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.816 nvme0n1 00:35:04.816 09:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.816 09:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:04.816 09:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:04.816 09:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.816 09:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.816 09:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.816 09:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:04.816 09:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:04.816 09:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.816 09:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.816 09:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.816 09:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:04.816 09:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:35:04.816 09:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:04.816 09:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:04.816 09:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:04.816 09:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:04.816 09:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZTg1M2ZjMTcwOWUyMmU5OGNiMjJkMWE2ODFhYTAyMzliNmZjZDEzZjM3NTU2MWU1YdCrRw==: 00:35:04.816 09:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDNmMDQyYWI3ZDI3ODcxMWFjMjIxNGUxOTA4NmRiOGIxNTIyM2MxNjVmYTM3MTUzWUs3Gw==: 00:35:04.816 09:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:04.816 09:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:04.816 09:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTg1M2ZjMTcwOWUyMmU5OGNiMjJkMWE2ODFhYTAyMzliNmZjZDEzZjM3NTU2MWU1YdCrRw==: 00:35:04.816 09:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDNmMDQyYWI3ZDI3ODcxMWFjMjIxNGUxOTA4NmRiOGIxNTIyM2MxNjVmYTM3MTUzWUs3Gw==: ]] 00:35:04.816 09:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDNmMDQyYWI3ZDI3ODcxMWFjMjIxNGUxOTA4NmRiOGIxNTIyM2MxNjVmYTM3MTUzWUs3Gw==: 00:35:04.816 09:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:35:04.816 09:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:04.816 09:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:04.816 09:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:04.816 09:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:04.816 09:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:04.816 09:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:04.816 09:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.816 09:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.816 09:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.816 09:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:04.816 09:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:04.816 09:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:04.816 09:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:04.816 09:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:04.816 09:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:04.816 09:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:04.816 09:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:04.816 09:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:04.816 09:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:04.816 09:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:04.816 09:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:04.816 09:35:09 
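The iterations above all exercise the same host-side sequence: pin the initiator to a single DH-HMAC-CHAP digest and DH group with bdev_nvme_set_options, attach over TCP while presenting the matching key pair, check that the controller shows up in bdev_nvme_get_controllers, then detach before the next keyid. A minimal standalone sketch of one loop body, assuming SPDK's scripts/rpc.py stands in for the harness's rpc_cmd wrapper and that the key material was registered earlier under the key1/ckey1 names shown in the trace:

    # Host side: advertise only the digest/dhgroup under test (values from this run).
    scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
    # Connect to the authenticated subsystem with the host key and controller (bidirectional) key.
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # Authentication succeeded if the controller appears; tear it down for the next iteration.
    [[ "$(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]
    scripts/rpc.py bdev_nvme_detach_controller nvme0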
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.816 09:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.383 nvme0n1 00:35:05.383 09:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.383 09:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:05.383 09:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:05.383 09:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.383 09:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.383 09:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.383 09:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:05.383 09:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:05.383 09:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.383 09:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.383 09:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.383 09:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:05.383 09:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:35:05.383 09:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:05.383 09:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:05.383 09:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:05.383 09:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:05.383 09:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWE4N2UyODUwMGRjYzQ4YThmNDkwYjQxM2ViYjI4NGNi8TS4: 00:35:05.383 09:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjUzM2NhMTg1ZjhhZWJjMzYyZmExMDU1ZWUyMmJhM2UFu8bR: 00:35:05.383 09:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:05.383 09:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:05.383 09:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWE4N2UyODUwMGRjYzQ4YThmNDkwYjQxM2ViYjI4NGNi8TS4: 00:35:05.383 09:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjUzM2NhMTg1ZjhhZWJjMzYyZmExMDU1ZWUyMmJhM2UFu8bR: ]] 00:35:05.383 09:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjUzM2NhMTg1ZjhhZWJjMzYyZmExMDU1ZWUyMmJhM2UFu8bR: 00:35:05.383 09:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:35:05.383 09:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:05.383 09:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:05.383 09:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:05.383 09:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:05.383 09:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:05.383 09:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:05.383 09:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.383 09:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.383 09:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.383 09:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:05.383 09:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:05.383 09:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:05.383 09:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:05.383 09:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:05.383 09:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:05.383 09:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:05.383 09:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:05.383 09:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:05.383 09:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:05.383 09:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:05.383 09:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:05.383 09:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.383 09:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.950 nvme0n1 00:35:05.950 09:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.950 09:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:05.950 09:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.950 09:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.950 09:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:05.950 09:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.950 09:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:05.950 09:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:05.950 09:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.950 09:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.950 09:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.950 09:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:05.950 09:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:35:05.950 09:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:05.950 09:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:05.950 09:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:05.950 09:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:05.950 09:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NThjZmExYjczOGYxNGRiMGVjN2ZjY2UyMzViMzI1YTU2MWVjMjE0YzAyNDhkMGEymDelkg==: 00:35:05.950 09:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2YxMDQzNzU3ZGNkY2E0ZDI3ZDBlOWZlZGIwZWYyOWF+2YFm: 00:35:05.950 09:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:05.950 09:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:05.950 09:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NThjZmExYjczOGYxNGRiMGVjN2ZjY2UyMzViMzI1YTU2MWVjMjE0YzAyNDhkMGEymDelkg==: 00:35:05.950 09:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2YxMDQzNzU3ZGNkY2E0ZDI3ZDBlOWZlZGIwZWYyOWF+2YFm: ]] 00:35:05.950 09:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2YxMDQzNzU3ZGNkY2E0ZDI3ZDBlOWZlZGIwZWYyOWF+2YFm: 00:35:05.950 09:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:35:05.950 09:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:05.950 09:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:05.950 09:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:05.950 09:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:05.950 09:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:05.950 09:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:05.950 09:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.950 09:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.950 09:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.950 09:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:05.950 09:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:05.950 09:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:05.950 09:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:05.950 09:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:05.950 09:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:05.950 09:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:05.950 09:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:05.950 09:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:05.950 09:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:05.950 09:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:05.950 09:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:05.950 09:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.950 09:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.516 nvme0n1 00:35:06.516 09:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.516 09:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:06.516 09:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.516 09:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.516 09:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:06.516 09:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.774 09:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:06.774 09:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:06.774 09:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.774 09:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.774 09:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.774 09:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:06.774 09:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:35:06.774 09:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:06.774 09:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:06.774 09:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:06.774 09:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:06.774 09:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmU0Zjc4OTY4Mjk0NjQzZTYxYTcwYjkyYjEzMTVmNDFlZjFiMmIxY2I2NjFkMzNiNWZiOGQ2ZmVhZGNiMzJjM7z1+uM=: 00:35:06.774 09:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:06.774 09:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:06.774 09:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:06.774 09:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmU0Zjc4OTY4Mjk0NjQzZTYxYTcwYjkyYjEzMTVmNDFlZjFiMmIxY2I2NjFkMzNiNWZiOGQ2ZmVhZGNiMzJjM7z1+uM=: 00:35:06.774 09:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:06.774 09:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:35:06.774 09:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:06.774 09:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:06.774 09:35:11 
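The get_main_ns_ip trace that precedes every attach is just selecting which environment variable supplies the initiator address for the transport in use; for tcp it dereferences NVMF_INITIATOR_IP, which is 10.0.0.1 in this run. A rough paraphrase of that helper, with the variable names taken from the trace and the tcp transport assumed:

    # Map transport -> env var name, then dereference it to get the address.
    NVMF_INITIATOR_IP=10.0.0.1            # value echoed throughout this run
    declare -A ip_candidates=( ["rdma"]=NVMF_FIRST_TARGET_IP ["tcp"]=NVMF_INITIATOR_IP )
    transport=tcp
    var=${ip_candidates[$transport]}
    ip=${!var}                            # indirect expansion -> 10.0.0.1
    echo "$ip"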
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:06.774 09:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:06.774 09:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:06.774 09:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:06.774 09:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.774 09:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.774 09:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.774 09:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:06.774 09:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:06.774 09:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:06.774 09:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:06.774 09:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:06.774 09:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:06.774 09:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:06.774 09:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:06.774 09:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:06.774 09:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:06.774 09:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:06.774 09:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:06.774 09:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.774 09:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.341 nvme0n1 00:35:07.341 09:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.341 09:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:07.341 09:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:07.341 09:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.341 09:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.341 09:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.341 09:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:07.341 09:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:07.341 09:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.341 09:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.341 09:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.341 09:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:07.341 09:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:07.341 09:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:35:07.341 09:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:07.341 09:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:07.341 09:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:07.341 09:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:07.341 09:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTY3NDEwYTJiOTM3NjdiODQ0OWQzZmJjYzA5NjAwODOCVOl1: 00:35:07.341 09:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmMzNmJlMDZkZmFiZWQwZTc4NzBmMzM5OTA3YjhhZjIxNjM2ODFjODY3NzY4YmI2NTIxZDMyMGY0NWQyZDcwNClOKQE=: 00:35:07.341 09:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:07.341 09:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:07.341 09:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTY3NDEwYTJiOTM3NjdiODQ0OWQzZmJjYzA5NjAwODOCVOl1: 00:35:07.341 09:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmMzNmJlMDZkZmFiZWQwZTc4NzBmMzM5OTA3YjhhZjIxNjM2ODFjODY3NzY4YmI2NTIxZDMyMGY0NWQyZDcwNClOKQE=: ]] 00:35:07.341 09:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmMzNmJlMDZkZmFiZWQwZTc4NzBmMzM5OTA3YjhhZjIxNjM2ODFjODY3NzY4YmI2NTIxZDMyMGY0NWQyZDcwNClOKQE=: 00:35:07.341 09:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:35:07.341 09:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:07.341 09:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:07.341 09:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:07.341 09:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:07.341 09:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:07.341 09:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:07.341 09:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.341 09:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.341 09:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.341 09:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:07.341 09:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:07.341 09:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:07.341 09:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:07.341 09:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:07.341 09:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:07.341 09:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:07.341 09:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:07.341 09:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:07.341 09:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:07.341 09:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:07.341 09:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:07.341 09:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.341 09:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.275 nvme0n1 00:35:08.275 09:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.275 09:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:08.275 09:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.275 09:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.275 09:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:08.275 09:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.275 09:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:08.275 09:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:08.275 09:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.275 09:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.275 09:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.275 09:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:08.275 09:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:35:08.275 09:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:08.275 09:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:08.275 09:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:08.275 09:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:08.275 09:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTg1M2ZjMTcwOWUyMmU5OGNiMjJkMWE2ODFhYTAyMzliNmZjZDEzZjM3NTU2MWU1YdCrRw==: 00:35:08.275 09:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDNmMDQyYWI3ZDI3ODcxMWFjMjIxNGUxOTA4NmRiOGIxNTIyM2MxNjVmYTM3MTUzWUs3Gw==: 00:35:08.275 09:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:08.275 09:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:08.275 09:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZTg1M2ZjMTcwOWUyMmU5OGNiMjJkMWE2ODFhYTAyMzliNmZjZDEzZjM3NTU2MWU1YdCrRw==: 00:35:08.275 09:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDNmMDQyYWI3ZDI3ODcxMWFjMjIxNGUxOTA4NmRiOGIxNTIyM2MxNjVmYTM3MTUzWUs3Gw==: ]] 00:35:08.275 09:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDNmMDQyYWI3ZDI3ODcxMWFjMjIxNGUxOTA4NmRiOGIxNTIyM2MxNjVmYTM3MTUzWUs3Gw==: 00:35:08.275 09:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:35:08.275 09:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:08.275 09:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:08.275 09:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:08.275 09:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:08.275 09:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:08.275 09:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:08.275 09:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.275 09:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.275 09:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.275 09:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:08.275 09:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:08.275 09:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:08.276 09:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:08.276 09:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:08.276 09:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:08.276 09:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:08.276 09:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:08.276 09:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:08.276 09:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:08.276 09:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:08.276 09:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:08.276 09:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.276 09:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.210 nvme0n1 00:35:09.210 09:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.210 09:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:09.210 09:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:09.210 09:35:14 
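Before each attach, nvmet_auth_set_key programs the kernel nvmet target with the digest, DH group and DHHC-1 key strings for the keyid under test; the trace shows only the values being echoed, not where they land. A rough sketch of that target-side half, where the configfs host-entry path and attribute names are an assumption (the key values are copied from the ffdhe8192/keyid=1 iteration above):

    key='DHHC-1:00:ZTg1M2ZjMTcwOWUyMmU5OGNiMjJkMWE2ODFhYTAyMzliNmZjZDEzZjM3NTU2MWU1YdCrRw==:'
    ckey='DHHC-1:02:NDNmMDQyYWI3ZDI3ODcxMWFjMjIxNGUxOTA4NmRiOGIxNTIyM2MxNjVmYTM3MTUzWUs3Gw==:'
    # Assumed kernel nvmet configfs layout for the host entry used by this test.
    host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha512)' > "$host_dir/dhchap_hash"       # digest under test
    echo ffdhe8192      > "$host_dir/dhchap_dhgroup"    # DH group under test
    echo "$key"         > "$host_dir/dhchap_key"        # host key
    echo "$ckey"        > "$host_dir/dhchap_ctrl_key"   # controller key, when a ckey is set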
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.210 09:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.210 09:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.210 09:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:09.210 09:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:09.210 09:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.210 09:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.210 09:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.210 09:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:09.210 09:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:35:09.210 09:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:09.210 09:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:09.210 09:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:09.210 09:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:09.210 09:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWE4N2UyODUwMGRjYzQ4YThmNDkwYjQxM2ViYjI4NGNi8TS4: 00:35:09.210 09:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjUzM2NhMTg1ZjhhZWJjMzYyZmExMDU1ZWUyMmJhM2UFu8bR: 00:35:09.210 09:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:09.210 09:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:09.210 09:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWE4N2UyODUwMGRjYzQ4YThmNDkwYjQxM2ViYjI4NGNi8TS4: 00:35:09.210 09:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjUzM2NhMTg1ZjhhZWJjMzYyZmExMDU1ZWUyMmJhM2UFu8bR: ]] 00:35:09.210 09:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjUzM2NhMTg1ZjhhZWJjMzYyZmExMDU1ZWUyMmJhM2UFu8bR: 00:35:09.210 09:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:35:09.210 09:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:09.210 09:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:09.210 09:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:09.210 09:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:09.210 09:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:09.210 09:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:09.210 09:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.210 09:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.210 09:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.210 09:35:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:09.210 09:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:09.210 09:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:09.210 09:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:09.210 09:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:09.210 09:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:09.210 09:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:09.210 09:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:09.210 09:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:09.210 09:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:09.210 09:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:09.210 09:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:09.210 09:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.210 09:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.143 nvme0n1 00:35:10.143 09:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.143 09:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:10.143 09:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:10.143 09:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.144 09:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.144 09:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.144 09:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:10.144 09:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:10.144 09:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.144 09:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.144 09:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.144 09:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:10.144 09:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:35:10.144 09:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:10.144 09:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:10.144 09:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:10.144 09:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:10.144 09:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NThjZmExYjczOGYxNGRiMGVjN2ZjY2UyMzViMzI1YTU2MWVjMjE0YzAyNDhkMGEymDelkg==: 00:35:10.144 09:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2YxMDQzNzU3ZGNkY2E0ZDI3ZDBlOWZlZGIwZWYyOWF+2YFm: 00:35:10.144 09:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:10.144 09:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:10.144 09:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NThjZmExYjczOGYxNGRiMGVjN2ZjY2UyMzViMzI1YTU2MWVjMjE0YzAyNDhkMGEymDelkg==: 00:35:10.144 09:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2YxMDQzNzU3ZGNkY2E0ZDI3ZDBlOWZlZGIwZWYyOWF+2YFm: ]] 00:35:10.144 09:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2YxMDQzNzU3ZGNkY2E0ZDI3ZDBlOWZlZGIwZWYyOWF+2YFm: 00:35:10.144 09:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:35:10.144 09:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:10.144 09:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:10.144 09:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:10.144 09:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:10.144 09:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:10.144 09:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:10.144 09:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.144 09:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.144 09:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.144 09:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:10.144 09:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:10.144 09:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:10.144 09:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:10.144 09:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:10.144 09:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:10.144 09:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:10.144 09:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:10.144 09:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:10.144 09:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:10.144 09:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:10.144 09:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:10.144 09:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.144 
09:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.518 nvme0n1 00:35:11.518 09:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.518 09:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:11.518 09:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:11.518 09:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.518 09:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.518 09:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.518 09:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:11.518 09:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:11.518 09:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.518 09:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.518 09:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.518 09:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:11.518 09:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:35:11.518 09:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:11.518 09:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:11.518 09:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:11.518 09:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:11.518 09:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmU0Zjc4OTY4Mjk0NjQzZTYxYTcwYjkyYjEzMTVmNDFlZjFiMmIxY2I2NjFkMzNiNWZiOGQ2ZmVhZGNiMzJjM7z1+uM=: 00:35:11.518 09:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:11.518 09:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:11.518 09:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:11.518 09:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmU0Zjc4OTY4Mjk0NjQzZTYxYTcwYjkyYjEzMTVmNDFlZjFiMmIxY2I2NjFkMzNiNWZiOGQ2ZmVhZGNiMzJjM7z1+uM=: 00:35:11.518 09:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:11.518 09:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:35:11.518 09:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:11.518 09:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:11.518 09:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:11.518 09:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:11.518 09:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:11.518 09:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:11.518 09:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.518 09:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.518 09:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.518 09:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:11.518 09:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:11.518 09:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:11.518 09:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:11.518 09:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:11.518 09:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:11.518 09:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:11.518 09:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:11.518 09:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:11.518 09:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:11.518 09:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:11.518 09:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:11.518 09:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.518 09:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.451 nvme0n1 00:35:12.451 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.451 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:12.451 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.451 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:12.451 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.451 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.451 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:12.451 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:12.451 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.451 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.451 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.451 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:35:12.451 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:12.451 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:12.451 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:12.451 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:35:12.451 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTg1M2ZjMTcwOWUyMmU5OGNiMjJkMWE2ODFhYTAyMzliNmZjZDEzZjM3NTU2MWU1YdCrRw==: 00:35:12.451 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDNmMDQyYWI3ZDI3ODcxMWFjMjIxNGUxOTA4NmRiOGIxNTIyM2MxNjVmYTM3MTUzWUs3Gw==: 00:35:12.451 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:12.451 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:12.451 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTg1M2ZjMTcwOWUyMmU5OGNiMjJkMWE2ODFhYTAyMzliNmZjZDEzZjM3NTU2MWU1YdCrRw==: 00:35:12.451 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDNmMDQyYWI3ZDI3ODcxMWFjMjIxNGUxOTA4NmRiOGIxNTIyM2MxNjVmYTM3MTUzWUs3Gw==: ]] 00:35:12.451 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDNmMDQyYWI3ZDI3ODcxMWFjMjIxNGUxOTA4NmRiOGIxNTIyM2MxNjVmYTM3MTUzWUs3Gw==: 00:35:12.451 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:12.451 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.451 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.451 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.451 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:35:12.451 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:12.451 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:12.451 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:12.451 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:12.451 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:12.451 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:12.451 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:12.451 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:12.451 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:12.451 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:12.451 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:35:12.451 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:35:12.451 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:35:12.451 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:12.451 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:12.451 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:12.451 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:12.452 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:35:12.452 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.452 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.452 request: 00:35:12.452 { 00:35:12.452 "name": "nvme0", 00:35:12.452 "trtype": "tcp", 00:35:12.452 "traddr": "10.0.0.1", 00:35:12.452 "adrfam": "ipv4", 00:35:12.452 "trsvcid": "4420", 00:35:12.452 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:35:12.452 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:35:12.452 "prchk_reftag": false, 00:35:12.452 "prchk_guard": false, 00:35:12.452 "hdgst": false, 00:35:12.452 "ddgst": false, 00:35:12.452 "allow_unrecognized_csi": false, 00:35:12.452 "method": "bdev_nvme_attach_controller", 00:35:12.452 "req_id": 1 00:35:12.452 } 00:35:12.452 Got JSON-RPC error response 00:35:12.452 response: 00:35:12.452 { 00:35:12.452 "code": -5, 00:35:12.452 "message": "Input/output error" 00:35:12.452 } 00:35:12.452 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:12.452 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:35:12.452 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:12.452 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:12.452 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:12.452 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:35:12.452 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:35:12.452 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.452 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.452 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.452 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:35:12.452 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:35:12.452 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:12.452 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:12.452 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:12.452 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:12.452 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:12.452 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:12.452 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:12.452 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:12.452 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 
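The entries above walk the DH-HMAC-CHAP happy path for key ids 3, 4 and 1 and then begin the negative cases: the host narrows its offer with bdev_nvme_set_options, attaches with the per-keyid host key (and, where one was configured, the controller key), and an attach with no key at all is expected to fail with the JSON-RPC -5 Input/output error shown in the request/response dump. A minimal sketch of the positive sequence, assuming rpc_cmd is the harness's RPC wrapper exactly as it appears in the trace, and reusing the address, NQNs and key names printed above:

    # Offer only the digest/dhgroup pair the target side was configured with.
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192

    # Attach with the host key for this keyid and the bidirectional controller key.
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key3 --dhchap-ctrlr-key ckey3

    # Confirm the controller came up, then detach before the next keyid is tried.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0

Every flag in the sketch is taken verbatim from the trace; only the ordering into a compact block is editorial.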
00:35:12.452 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:12.452 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:35:12.452 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:35:12.452 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:35:12.452 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:12.452 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:12.452 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:12.452 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:12.452 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:35:12.452 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.452 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.452 request: 00:35:12.452 { 00:35:12.452 "name": "nvme0", 00:35:12.452 "trtype": "tcp", 00:35:12.452 "traddr": "10.0.0.1", 00:35:12.452 "adrfam": "ipv4", 00:35:12.452 "trsvcid": "4420", 00:35:12.452 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:35:12.452 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:35:12.452 "prchk_reftag": false, 00:35:12.452 "prchk_guard": false, 00:35:12.452 "hdgst": false, 00:35:12.452 "ddgst": false, 00:35:12.452 "dhchap_key": "key2", 00:35:12.452 "allow_unrecognized_csi": false, 00:35:12.452 "method": "bdev_nvme_attach_controller", 00:35:12.452 "req_id": 1 00:35:12.452 } 00:35:12.452 Got JSON-RPC error response 00:35:12.452 response: 00:35:12.452 { 00:35:12.452 "code": -5, 00:35:12.452 "message": "Input/output error" 00:35:12.452 } 00:35:12.452 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:12.452 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:35:12.452 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:12.452 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:12.452 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:12.452 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:35:12.452 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.452 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:35:12.452 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.710 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.710 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
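Both failing attach attempts (no key above, key2 alone just below) are driven through the harness's NOT/valid_exec_arg pair, so the step passes only when the RPC call itself fails. A rough, simplified sketch of that inversion idiom; this hypothetical NOT omits the es > 128 signal handling that the real autotest_common.sh trace performs:

    # Succeed only when the wrapped command fails.
    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))
    }

    # Example: an attach that names only key2 must be rejected by the target.
    NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2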
00:35:12.710 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:35:12.710 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:12.710 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:12.710 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:12.710 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:12.710 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:12.710 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:12.710 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:12.710 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:12.710 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:12.710 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:12.710 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:12.710 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:35:12.710 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:12.710 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:12.710 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:12.710 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:12.710 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:12.710 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:12.710 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.710 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.710 request: 00:35:12.710 { 00:35:12.710 "name": "nvme0", 00:35:12.710 "trtype": "tcp", 00:35:12.710 "traddr": "10.0.0.1", 00:35:12.710 "adrfam": "ipv4", 00:35:12.710 "trsvcid": "4420", 00:35:12.710 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:35:12.710 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:35:12.710 "prchk_reftag": false, 00:35:12.710 "prchk_guard": false, 00:35:12.710 "hdgst": false, 00:35:12.710 "ddgst": false, 00:35:12.710 "dhchap_key": "key1", 00:35:12.710 "dhchap_ctrlr_key": "ckey2", 00:35:12.710 "allow_unrecognized_csi": false, 00:35:12.710 "method": "bdev_nvme_attach_controller", 00:35:12.710 "req_id": 1 00:35:12.710 } 00:35:12.710 Got JSON-RPC error response 00:35:12.710 response: 00:35:12.710 { 00:35:12.710 "code": -5, 00:35:12.710 "message": "Input/output 
error" 00:35:12.710 } 00:35:12.710 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:12.710 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:35:12.710 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:12.710 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:12.710 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:12.710 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:35:12.710 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:12.710 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:12.710 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:12.710 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:12.710 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:12.710 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:12.710 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:12.710 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:12.710 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:12.710 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:12.710 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:35:12.710 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.710 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.969 nvme0n1 00:35:12.969 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.969 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:35:12.969 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:12.969 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:12.969 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:12.969 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:12.969 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWE4N2UyODUwMGRjYzQ4YThmNDkwYjQxM2ViYjI4NGNi8TS4: 00:35:12.969 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjUzM2NhMTg1ZjhhZWJjMzYyZmExMDU1ZWUyMmJhM2UFu8bR: 00:35:12.969 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:12.969 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:12.969 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWE4N2UyODUwMGRjYzQ4YThmNDkwYjQxM2ViYjI4NGNi8TS4: 00:35:12.969 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjUzM2NhMTg1ZjhhZWJjMzYyZmExMDU1ZWUyMmJhM2UFu8bR: ]] 00:35:12.969 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjUzM2NhMTg1ZjhhZWJjMzYyZmExMDU1ZWUyMmJhM2UFu8bR: 00:35:12.969 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:12.969 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.969 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.969 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.969 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:35:12.969 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.969 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.969 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:35:12.969 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.969 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:12.969 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:12.969 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:35:12.969 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:12.969 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:12.969 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:12.969 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:12.969 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:12.970 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:12.970 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.970 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.970 request: 00:35:12.970 { 00:35:12.970 "name": "nvme0", 00:35:12.970 "dhchap_key": "key1", 00:35:12.970 "dhchap_ctrlr_key": "ckey2", 00:35:12.970 "method": "bdev_nvme_set_keys", 00:35:12.970 "req_id": 1 00:35:12.970 } 00:35:12.970 Got JSON-RPC error response 00:35:12.970 response: 00:35:12.970 { 00:35:12.970 "code": -13, 00:35:12.970 "message": "Permission denied" 00:35:12.970 } 00:35:12.970 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:12.970 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:35:12.970 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:12.970 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:12.970 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:35:12.970 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:35:12.970 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.970 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:35:12.970 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.970 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.970 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:35:12.970 09:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:35:14.352 09:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:35:14.352 09:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:35:14.352 09:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.352 09:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.352 09:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.352 09:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:35:14.352 09:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:35:15.289 09:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:35:15.289 09:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:35:15.289 09:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.289 09:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.289 09:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:15.289 09:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:35:15.289 09:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:35:15.289 09:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:15.289 09:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:15.289 09:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:15.289 09:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:15.289 09:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTg1M2ZjMTcwOWUyMmU5OGNiMjJkMWE2ODFhYTAyMzliNmZjZDEzZjM3NTU2MWU1YdCrRw==: 00:35:15.289 09:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDNmMDQyYWI3ZDI3ODcxMWFjMjIxNGUxOTA4NmRiOGIxNTIyM2MxNjVmYTM3MTUzWUs3Gw==: 00:35:15.289 09:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:15.289 09:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:15.289 09:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTg1M2ZjMTcwOWUyMmU5OGNiMjJkMWE2ODFhYTAyMzliNmZjZDEzZjM3NTU2MWU1YdCrRw==: 00:35:15.289 09:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDNmMDQyYWI3ZDI3ODcxMWFjMjIxNGUxOTA4NmRiOGIxNTIyM2MxNjVmYTM3MTUzWUs3Gw==: ]] 00:35:15.289 09:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:NDNmMDQyYWI3ZDI3ODcxMWFjMjIxNGUxOTA4NmRiOGIxNTIyM2MxNjVmYTM3MTUzWUs3Gw==: 00:35:15.289 09:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:35:15.289 09:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:15.289 09:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:15.289 09:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:15.289 09:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:15.289 09:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:15.289 09:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:15.289 09:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:15.289 09:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:15.289 09:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:15.289 09:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:15.289 09:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:35:15.289 09:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.289 09:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.289 nvme0n1 00:35:15.289 09:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:15.289 09:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:35:15.289 09:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:15.289 09:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:15.289 09:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:15.289 09:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:15.289 09:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWE4N2UyODUwMGRjYzQ4YThmNDkwYjQxM2ViYjI4NGNi8TS4: 00:35:15.289 09:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjUzM2NhMTg1ZjhhZWJjMzYyZmExMDU1ZWUyMmJhM2UFu8bR: 00:35:15.289 09:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:15.289 09:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:15.289 09:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWE4N2UyODUwMGRjYzQ4YThmNDkwYjQxM2ViYjI4NGNi8TS4: 00:35:15.289 09:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjUzM2NhMTg1ZjhhZWJjMzYyZmExMDU1ZWUyMmJhM2UFu8bR: ]] 00:35:15.289 09:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjUzM2NhMTg1ZjhhZWJjMzYyZmExMDU1ZWUyMmJhM2UFu8bR: 00:35:15.289 09:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:35:15.289 09:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@652 -- # local es=0 00:35:15.289 09:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:35:15.289 09:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:15.289 09:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:15.289 09:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:15.289 09:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:15.289 09:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:35:15.289 09:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.289 09:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.289 request: 00:35:15.289 { 00:35:15.289 "name": "nvme0", 00:35:15.289 "dhchap_key": "key2", 00:35:15.289 "dhchap_ctrlr_key": "ckey1", 00:35:15.289 "method": "bdev_nvme_set_keys", 00:35:15.289 "req_id": 1 00:35:15.289 } 00:35:15.289 Got JSON-RPC error response 00:35:15.289 response: 00:35:15.289 { 00:35:15.289 "code": -13, 00:35:15.289 "message": "Permission denied" 00:35:15.289 } 00:35:15.289 09:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:15.290 09:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:35:15.290 09:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:15.290 09:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:15.290 09:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:15.290 09:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:35:15.290 09:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.290 09:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:35:15.290 09:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.290 09:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:15.548 09:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:35:15.548 09:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:35:16.482 09:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:35:16.482 09:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.482 09:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:35:16.482 09:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:16.482 09:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.482 09:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:35:16.482 09:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:35:16.482 09:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:35:16.482 09:35:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:35:16.482 09:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:16.482 09:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:35:16.482 09:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:16.482 09:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:35:16.482 09:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:16.482 09:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:16.482 rmmod nvme_tcp 00:35:16.482 rmmod nvme_fabrics 00:35:16.482 09:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:16.482 09:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:35:16.482 09:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:35:16.482 09:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 3118516 ']' 00:35:16.482 09:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 3118516 00:35:16.482 09:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 3118516 ']' 00:35:16.482 09:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 3118516 00:35:16.482 09:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:35:16.482 09:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:16.482 09:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3118516 00:35:16.482 09:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:16.482 09:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:16.482 09:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3118516' 00:35:16.482 killing process with pid 3118516 00:35:16.482 09:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 3118516 00:35:16.482 09:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 3118516 00:35:17.418 09:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:17.418 09:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:17.418 09:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:17.419 09:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:35:17.419 09:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:35:17.419 09:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:17.419 09:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:35:17.419 09:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:17.419 09:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:17.419 09:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:17.419 09:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:35:17.419 09:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:19.952 09:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:19.953 09:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:35:19.953 09:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:35:19.953 09:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:35:19.953 09:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:35:19.953 09:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:35:19.953 09:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:19.953 09:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:35:19.953 09:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:35:19.953 09:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:19.953 09:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:35:19.953 09:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:35:19.953 09:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:20.887 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:20.887 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:20.887 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:20.887 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:20.887 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:20.887 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:20.887 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:20.887 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:20.887 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:20.887 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:20.887 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:20.887 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:20.887 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:20.887 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:20.887 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:20.887 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:21.825 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:35:21.825 09:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.iA2 /tmp/spdk.key-null.PKj /tmp/spdk.key-sha256.4Rt /tmp/spdk.key-sha384.ZSJ /tmp/spdk.key-sha512.1x8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:35:21.825 09:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:23.201 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:35:23.201 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:35:23.201 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 
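The cleanup above first shuts down the host side (modprobe -r of nvme-tcp/nvme-fabrics, killprocess of the target pid, iptables restore, namespace and address flush) and then dismantles the kernel nvmet target through configfs, children before parents. A condensed sketch of that teardown using the paths printed in the trace; the redirect target of the "echo 0" step is not visible in the xtrace and is assumed here to be the namespace enable attribute:

    # Drop the host ACL entry and the host definition.
    rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
    rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

    # Disable the namespace (assumed path), unlink the subsystem from the port,
    # then remove namespace, port and subsystem in that order.
    echo 0 > /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1/enable
    rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
    rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
    rmdir /sys/kernel/config/nvmet/ports/1
    rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0

    # Finally unload the kernel target modules.
    modprobe -r nvmet_tcp nvmet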
00:35:23.201 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:35:23.201 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:35:23.201 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:35:23.201 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:35:23.201 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:35:23.201 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:35:23.201 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:35:23.201 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:35:23.201 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:35:23.201 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:35:23.201 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:35:23.201 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:35:23.201 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:35:23.201 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:35:23.201 00:35:23.201 real 0m55.392s 00:35:23.201 user 0m52.944s 00:35:23.201 sys 0m6.232s 00:35:23.201 09:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:23.201 09:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.201 ************************************ 00:35:23.201 END TEST nvmf_auth_host 00:35:23.201 ************************************ 00:35:23.201 09:35:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:35:23.201 09:35:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:35:23.201 09:35:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:23.201 09:35:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:23.201 09:35:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.201 ************************************ 00:35:23.201 START TEST nvmf_digest 00:35:23.201 ************************************ 00:35:23.201 09:35:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:35:23.201 * Looking for test storage... 
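The real/user/sys summary and the END TEST / START TEST banners just above come from the harness's run_test wrapper, which checks that it received a test name plus a command, prints the banner, and times the body. A hypothetical, much-reduced sketch of such a wrapper (the real autotest_common.sh version also handles xtrace toggling and failure bookkeeping):

    run_test() {
        local name=$1; shift
        # Need the test name and at least one word of command to run.
        (( $# >= 1 )) || return 1
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }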
00:35:23.201 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:23.201 09:35:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:23.201 09:35:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:35:23.201 09:35:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:23.460 09:35:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:23.460 09:35:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:23.460 09:35:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:23.460 09:35:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:23.460 09:35:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:35:23.460 09:35:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:35:23.460 09:35:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:35:23.460 09:35:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:35:23.460 09:35:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:35:23.460 09:35:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:35:23.460 09:35:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:35:23.460 09:35:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:23.460 09:35:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:35:23.460 09:35:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:35:23.460 09:35:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:23.460 09:35:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:23.460 09:35:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:35:23.460 09:35:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:35:23.460 09:35:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:23.460 09:35:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:35:23.460 09:35:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:35:23.460 09:35:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:35:23.460 09:35:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:35:23.460 09:35:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:23.460 09:35:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:35:23.460 09:35:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:35:23.460 09:35:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:23.460 09:35:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:23.460 09:35:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:35:23.460 09:35:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:23.460 09:35:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:23.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:23.460 --rc genhtml_branch_coverage=1 00:35:23.460 --rc genhtml_function_coverage=1 00:35:23.460 --rc genhtml_legend=1 00:35:23.460 --rc geninfo_all_blocks=1 00:35:23.460 --rc geninfo_unexecuted_blocks=1 00:35:23.460 00:35:23.460 ' 00:35:23.460 09:35:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:23.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:23.460 --rc genhtml_branch_coverage=1 00:35:23.460 --rc genhtml_function_coverage=1 00:35:23.460 --rc genhtml_legend=1 00:35:23.460 --rc geninfo_all_blocks=1 00:35:23.460 --rc geninfo_unexecuted_blocks=1 00:35:23.460 00:35:23.460 ' 00:35:23.460 09:35:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:23.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:23.460 --rc genhtml_branch_coverage=1 00:35:23.460 --rc genhtml_function_coverage=1 00:35:23.460 --rc genhtml_legend=1 00:35:23.460 --rc geninfo_all_blocks=1 00:35:23.460 --rc geninfo_unexecuted_blocks=1 00:35:23.460 00:35:23.460 ' 00:35:23.460 09:35:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:23.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:23.461 --rc genhtml_branch_coverage=1 00:35:23.461 --rc genhtml_function_coverage=1 00:35:23.461 --rc genhtml_legend=1 00:35:23.461 --rc geninfo_all_blocks=1 00:35:23.461 --rc geninfo_unexecuted_blocks=1 00:35:23.461 00:35:23.461 ' 00:35:23.461 09:35:28 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:23.461 09:35:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:35:23.461 09:35:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:23.461 09:35:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:23.461 
09:35:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:23.461 09:35:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:23.461 09:35:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:23.461 09:35:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:23.461 09:35:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:23.461 09:35:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:23.461 09:35:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:23.461 09:35:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:23.461 09:35:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:23.461 09:35:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:23.461 09:35:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:23.461 09:35:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:23.461 09:35:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:23.461 09:35:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:23.461 09:35:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:23.461 09:35:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:35:23.461 09:35:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:23.461 09:35:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:23.461 09:35:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:23.461 09:35:28 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:23.461 09:35:28 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:23.461 09:35:28 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:23.461 09:35:28 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:35:23.461 09:35:28 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:23.461 09:35:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:35:23.461 09:35:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:23.461 09:35:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:23.461 09:35:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:23.461 09:35:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:23.461 09:35:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:23.461 09:35:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:23.461 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:23.461 09:35:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:23.461 09:35:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:23.461 09:35:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:23.461 09:35:28 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:35:23.461 09:35:28 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:35:23.461 09:35:28 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:35:23.461 09:35:28 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:35:23.461 09:35:28 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:35:23.461 09:35:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:23.461 09:35:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:23.461 09:35:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:23.461 09:35:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:23.461 09:35:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:23.461 09:35:28 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:23.461 09:35:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:23.461 09:35:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:23.461 09:35:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:23.461 09:35:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:23.461 09:35:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:35:23.461 09:35:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:25.361 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:25.361 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:35:25.361 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:25.361 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:25.361 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:25.361 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:25.361 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:25.361 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:35:25.361 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:25.361 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:35:25.361 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:35:25.361 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:35:25.361 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:35:25.361 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:35:25.361 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:35:25.361 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:25.361 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:25.361 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:25.362 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:25.362 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:25.362 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:25.362 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:25.362 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:25.362 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:25.362 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:25.362 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:25.362 
09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:25.362 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:25.362 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:25.362 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:25.362 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:25.362 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:25.362 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:25.362 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:25.362 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:25.362 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:25.362 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:25.362 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:25.362 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:25.362 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:25.362 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:25.362 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:25.362 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:25.362 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:25.362 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:25.362 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:25.362 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:25.362 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:25.362 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:25.362 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:25.362 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:25.362 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:25.362 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:25.362 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:25.362 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:25.362 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:25.362 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:25.362 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:25.362 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:25.362 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:25.362 Found net devices under 0000:0a:00.0: cvl_0_0 
00:35:25.362 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:25.362 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:25.362 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:25.362 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:25.362 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:25.362 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:25.362 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:25.362 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:25.362 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:25.362 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:25.362 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:25.362 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:25.362 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:35:25.362 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:25.362 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:25.362 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:25.362 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:25.362 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:25.362 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:25.362 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:25.362 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:25.362 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:25.362 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:25.362 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:25.362 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:25.362 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:25.362 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:25.362 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:25.362 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:25.362 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:25.362 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:25.362 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:25.621 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:35:25.621 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:25.621 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:25.621 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:25.621 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:25.621 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:25.621 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:25.621 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:25.621 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.225 ms 00:35:25.621 00:35:25.621 --- 10.0.0.2 ping statistics --- 00:35:25.621 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:25.621 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:35:25.621 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:25.621 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:25.621 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:35:25.621 00:35:25.621 --- 10.0.0.1 ping statistics --- 00:35:25.621 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:25.621 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:35:25.621 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:25.621 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:35:25.621 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:25.621 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:25.621 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:25.621 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:25.621 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:25.621 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:25.621 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:25.621 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:35:25.621 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:35:25.621 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:35:25.621 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:25.621 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:25.621 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:25.621 ************************************ 00:35:25.621 START TEST nvmf_digest_clean 00:35:25.621 ************************************ 00:35:25.621 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:35:25.621 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:35:25.621 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:35:25.621 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:35:25.621 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:35:25.621 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:35:25.621 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:25.621 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:25.621 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:25.621 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=3128642 00:35:25.621 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:35:25.621 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 3128642 00:35:25.621 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3128642 ']' 00:35:25.621 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:25.621 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:25.621 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:25.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:25.621 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:25.621 09:35:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:25.621 [2024-11-17 09:35:30.573284] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:35:25.621 [2024-11-17 09:35:30.573449] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:25.880 [2024-11-17 09:35:30.755768] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:26.138 [2024-11-17 09:35:30.895267] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:26.138 [2024-11-17 09:35:30.895361] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:26.138 [2024-11-17 09:35:30.895392] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:26.138 [2024-11-17 09:35:30.895413] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:26.138 [2024-11-17 09:35:30.895430] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:26.138 [2024-11-17 09:35:30.896843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:26.703 09:35:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:26.703 09:35:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:35:26.703 09:35:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:26.703 09:35:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:26.703 09:35:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:26.703 09:35:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:26.703 09:35:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:35:26.703 09:35:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:35:26.703 09:35:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:35:26.703 09:35:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:26.703 09:35:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:27.269 null0 00:35:27.269 [2024-11-17 09:35:32.047855] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:27.269 [2024-11-17 09:35:32.072167] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:27.269 09:35:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.269 09:35:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:35:27.269 09:35:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:27.269 09:35:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:27.269 09:35:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:35:27.269 09:35:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:35:27.269 09:35:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:35:27.269 09:35:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:27.269 09:35:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3128801 00:35:27.269 09:35:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:35:27.269 09:35:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3128801 /var/tmp/bperf.sock 00:35:27.269 09:35:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3128801 ']' 00:35:27.269 09:35:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:27.269 09:35:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:35:27.269 09:35:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:27.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:27.269 09:35:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:27.269 09:35:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:27.269 [2024-11-17 09:35:32.160576] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:35:27.269 [2024-11-17 09:35:32.160740] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3128801 ] 00:35:27.527 [2024-11-17 09:35:32.303122] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:27.527 [2024-11-17 09:35:32.439598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:28.461 09:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:28.461 09:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:35:28.461 09:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:28.461 09:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:28.461 09:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:29.026 09:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:29.026 09:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:29.283 nvme0n1 00:35:29.283 09:35:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:29.283 09:35:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:29.541 Running I/O for 2 seconds... 
00:35:31.410 13652.00 IOPS, 53.33 MiB/s [2024-11-17T08:35:36.423Z] 13887.50 IOPS, 54.25 MiB/s 00:35:31.410 Latency(us) 00:35:31.410 [2024-11-17T08:35:36.423Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:31.410 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:35:31.410 nvme0n1 : 2.01 13905.78 54.32 0.00 0.00 9191.01 4684.61 22622.06 00:35:31.410 [2024-11-17T08:35:36.423Z] =================================================================================================================== 00:35:31.410 [2024-11-17T08:35:36.423Z] Total : 13905.78 54.32 0.00 0.00 9191.01 4684.61 22622.06 00:35:31.410 { 00:35:31.410 "results": [ 00:35:31.410 { 00:35:31.410 "job": "nvme0n1", 00:35:31.410 "core_mask": "0x2", 00:35:31.410 "workload": "randread", 00:35:31.410 "status": "finished", 00:35:31.410 "queue_depth": 128, 00:35:31.410 "io_size": 4096, 00:35:31.410 "runtime": 2.006576, 00:35:31.410 "iops": 13905.7778025851, 00:35:31.410 "mibps": 54.31944454134805, 00:35:31.410 "io_failed": 0, 00:35:31.410 "io_timeout": 0, 00:35:31.410 "avg_latency_us": 9191.005125321717, 00:35:31.410 "min_latency_us": 4684.61037037037, 00:35:31.410 "max_latency_us": 22622.056296296298 00:35:31.410 } 00:35:31.410 ], 00:35:31.410 "core_count": 1 00:35:31.410 } 00:35:31.410 09:35:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:31.410 09:35:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:31.410 09:35:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:31.410 09:35:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:31.410 | select(.opcode=="crc32c") 00:35:31.410 | "\(.module_name) \(.executed)"' 00:35:31.410 09:35:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:31.668 09:35:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:31.668 09:35:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:31.668 09:35:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:31.668 09:35:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:31.668 09:35:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3128801 00:35:31.668 09:35:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3128801 ']' 00:35:31.668 09:35:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3128801 00:35:31.668 09:35:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:35:31.668 09:35:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:31.668 09:35:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3128801 00:35:31.989 09:35:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:31.989 09:35:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:35:31.989 09:35:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3128801' 00:35:31.989 killing process with pid 3128801 00:35:31.989 09:35:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3128801 00:35:31.989 Received shutdown signal, test time was about 2.000000 seconds 00:35:31.989 00:35:31.989 Latency(us) 00:35:31.989 [2024-11-17T08:35:37.002Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:31.989 [2024-11-17T08:35:37.002Z] =================================================================================================================== 00:35:31.989 [2024-11-17T08:35:37.002Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:31.989 09:35:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3128801 00:35:32.573 09:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:35:32.573 09:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:32.573 09:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:32.573 09:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:35:32.573 09:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:35:32.573 09:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:35:32.573 09:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:32.573 09:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3129468 00:35:32.573 09:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:35:32.573 09:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3129468 /var/tmp/bperf.sock 00:35:32.573 09:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3129468 ']' 00:35:32.573 09:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:32.573 09:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:32.573 09:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:32.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:32.573 09:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:32.573 09:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:32.831 [2024-11-17 09:35:37.663600] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:35:32.831 [2024-11-17 09:35:37.663744] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3129468 ] 00:35:32.831 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:32.831 Zero copy mechanism will not be used. 00:35:32.831 [2024-11-17 09:35:37.806357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:33.088 [2024-11-17 09:35:37.940965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:33.653 09:35:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:33.653 09:35:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:35:33.653 09:35:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:33.653 09:35:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:33.653 09:35:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:34.587 09:35:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:34.587 09:35:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:34.844 nvme0n1 00:35:34.844 09:35:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:34.844 09:35:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:34.844 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:34.844 Zero copy mechanism will not be used. 00:35:34.844 Running I/O for 2 seconds... 
00:35:37.150 4239.00 IOPS, 529.88 MiB/s [2024-11-17T08:35:42.163Z] 4338.50 IOPS, 542.31 MiB/s 00:35:37.150 Latency(us) 00:35:37.150 [2024-11-17T08:35:42.163Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:37.150 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:35:37.150 nvme0n1 : 2.00 4337.16 542.15 0.00 0.00 3682.59 940.56 8058.50 00:35:37.150 [2024-11-17T08:35:42.163Z] =================================================================================================================== 00:35:37.150 [2024-11-17T08:35:42.163Z] Total : 4337.16 542.15 0.00 0.00 3682.59 940.56 8058.50 00:35:37.150 { 00:35:37.150 "results": [ 00:35:37.150 { 00:35:37.150 "job": "nvme0n1", 00:35:37.150 "core_mask": "0x2", 00:35:37.150 "workload": "randread", 00:35:37.150 "status": "finished", 00:35:37.150 "queue_depth": 16, 00:35:37.150 "io_size": 131072, 00:35:37.150 "runtime": 2.004536, 00:35:37.150 "iops": 4337.163313604745, 00:35:37.150 "mibps": 542.1454142005931, 00:35:37.150 "io_failed": 0, 00:35:37.150 "io_timeout": 0, 00:35:37.150 "avg_latency_us": 3682.5866428102822, 00:35:37.150 "min_latency_us": 940.562962962963, 00:35:37.150 "max_latency_us": 8058.500740740741 00:35:37.150 } 00:35:37.150 ], 00:35:37.150 "core_count": 1 00:35:37.150 } 00:35:37.150 09:35:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:37.150 09:35:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:37.150 09:35:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:37.150 09:35:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:37.150 09:35:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:37.150 | select(.opcode=="crc32c") 00:35:37.150 | "\(.module_name) \(.executed)"' 00:35:37.150 09:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:37.150 09:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:37.150 09:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:37.150 09:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:37.150 09:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3129468 00:35:37.150 09:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3129468 ']' 00:35:37.150 09:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3129468 00:35:37.150 09:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:35:37.150 09:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:37.150 09:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3129468 00:35:37.150 09:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:37.150 09:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:35:37.150 09:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3129468' 00:35:37.150 killing process with pid 3129468 00:35:37.150 09:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3129468 00:35:37.150 Received shutdown signal, test time was about 2.000000 seconds 00:35:37.150 00:35:37.150 Latency(us) 00:35:37.150 [2024-11-17T08:35:42.163Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:37.150 [2024-11-17T08:35:42.163Z] =================================================================================================================== 00:35:37.150 [2024-11-17T08:35:42.163Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:37.150 09:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3129468 00:35:38.084 09:35:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:35:38.084 09:35:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:38.084 09:35:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:38.084 09:35:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:35:38.084 09:35:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:35:38.084 09:35:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:35:38.084 09:35:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:38.084 09:35:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3130127 00:35:38.084 09:35:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:35:38.084 09:35:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3130127 /var/tmp/bperf.sock 00:35:38.084 09:35:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3130127 ']' 00:35:38.084 09:35:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:38.084 09:35:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:38.084 09:35:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:38.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:38.084 09:35:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:38.084 09:35:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:38.342 [2024-11-17 09:35:43.148875] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:35:38.342 [2024-11-17 09:35:43.149017] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3130127 ] 00:35:38.342 [2024-11-17 09:35:43.291723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:38.599 [2024-11-17 09:35:43.434317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:39.532 09:35:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:39.532 09:35:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:35:39.532 09:35:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:39.532 09:35:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:39.532 09:35:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:39.790 09:35:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:39.790 09:35:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:40.356 nvme0n1 00:35:40.356 09:35:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:40.356 09:35:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:40.356 Running I/O for 2 seconds... 
00:35:42.663 14931.00 IOPS, 58.32 MiB/s [2024-11-17T08:35:47.676Z] 14525.50 IOPS, 56.74 MiB/s 00:35:42.663 Latency(us) 00:35:42.663 [2024-11-17T08:35:47.676Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:42.663 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:42.663 nvme0n1 : 2.01 14522.56 56.73 0.00 0.00 8788.26 3592.34 16019.91 00:35:42.663 [2024-11-17T08:35:47.676Z] =================================================================================================================== 00:35:42.663 [2024-11-17T08:35:47.676Z] Total : 14522.56 56.73 0.00 0.00 8788.26 3592.34 16019.91 00:35:42.663 { 00:35:42.663 "results": [ 00:35:42.663 { 00:35:42.663 "job": "nvme0n1", 00:35:42.663 "core_mask": "0x2", 00:35:42.663 "workload": "randwrite", 00:35:42.663 "status": "finished", 00:35:42.663 "queue_depth": 128, 00:35:42.663 "io_size": 4096, 00:35:42.663 "runtime": 2.010872, 00:35:42.663 "iops": 14522.555388905908, 00:35:42.663 "mibps": 56.728731987913704, 00:35:42.663 "io_failed": 0, 00:35:42.663 "io_timeout": 0, 00:35:42.663 "avg_latency_us": 8788.258984046539, 00:35:42.663 "min_latency_us": 3592.343703703704, 00:35:42.663 "max_latency_us": 16019.91111111111 00:35:42.663 } 00:35:42.663 ], 00:35:42.663 "core_count": 1 00:35:42.663 } 00:35:42.663 09:35:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:42.663 09:35:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:42.663 09:35:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:42.663 09:35:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:42.663 09:35:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:42.663 | select(.opcode=="crc32c") 00:35:42.663 | "\(.module_name) \(.executed)"' 00:35:42.663 09:35:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:42.663 09:35:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:42.663 09:35:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:42.663 09:35:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:42.663 09:35:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3130127 00:35:42.663 09:35:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3130127 ']' 00:35:42.663 09:35:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3130127 00:35:42.663 09:35:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:35:42.663 09:35:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:42.663 09:35:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3130127 00:35:42.664 09:35:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:42.664 09:35:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # 
'[' reactor_1 = sudo ']' 00:35:42.664 09:35:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3130127' 00:35:42.664 killing process with pid 3130127 00:35:42.664 09:35:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3130127 00:35:42.664 Received shutdown signal, test time was about 2.000000 seconds 00:35:42.664 00:35:42.664 Latency(us) 00:35:42.664 [2024-11-17T08:35:47.677Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:42.664 [2024-11-17T08:35:47.677Z] =================================================================================================================== 00:35:42.664 [2024-11-17T08:35:47.677Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:42.664 09:35:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3130127 00:35:43.598 09:35:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:35:43.598 09:35:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:43.598 09:35:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:43.598 09:35:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:35:43.598 09:35:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:35:43.598 09:35:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:35:43.598 09:35:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:43.598 09:35:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3130796 00:35:43.598 09:35:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:35:43.598 09:35:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3130796 /var/tmp/bperf.sock 00:35:43.598 09:35:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3130796 ']' 00:35:43.598 09:35:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:43.598 09:35:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:43.598 09:35:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:43.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:43.598 09:35:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:43.598 09:35:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:43.856 [2024-11-17 09:35:48.629271] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:35:43.856 [2024-11-17 09:35:48.629432] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3130796 ] 00:35:43.856 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:43.856 Zero copy mechanism will not be used. 00:35:43.856 [2024-11-17 09:35:48.762383] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:44.116 [2024-11-17 09:35:48.891781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:44.682 09:35:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:44.682 09:35:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:35:44.682 09:35:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:44.682 09:35:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:44.682 09:35:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:45.248 09:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:45.248 09:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:45.814 nvme0n1 00:35:45.814 09:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:45.814 09:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:46.072 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:46.072 Zero copy mechanism will not be used. 00:35:46.072 Running I/O for 2 seconds... 
00:35:47.941 4327.00 IOPS, 540.88 MiB/s [2024-11-17T08:35:52.954Z] 4344.00 IOPS, 543.00 MiB/s 00:35:47.941 Latency(us) 00:35:47.941 [2024-11-17T08:35:52.954Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:47.941 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:35:47.941 nvme0n1 : 2.00 4342.85 542.86 0.00 0.00 3673.77 2839.89 7475.96 00:35:47.941 [2024-11-17T08:35:52.954Z] =================================================================================================================== 00:35:47.941 [2024-11-17T08:35:52.954Z] Total : 4342.85 542.86 0.00 0.00 3673.77 2839.89 7475.96 00:35:47.941 { 00:35:47.941 "results": [ 00:35:47.941 { 00:35:47.941 "job": "nvme0n1", 00:35:47.941 "core_mask": "0x2", 00:35:47.941 "workload": "randwrite", 00:35:47.941 "status": "finished", 00:35:47.941 "queue_depth": 16, 00:35:47.941 "io_size": 131072, 00:35:47.941 "runtime": 2.004214, 00:35:47.941 "iops": 4342.849615859384, 00:35:47.941 "mibps": 542.856201982423, 00:35:47.941 "io_failed": 0, 00:35:47.941 "io_timeout": 0, 00:35:47.941 "avg_latency_us": 3673.7722004357297, 00:35:47.941 "min_latency_us": 2839.8933333333334, 00:35:47.941 "max_latency_us": 7475.958518518519 00:35:47.941 } 00:35:47.941 ], 00:35:47.941 "core_count": 1 00:35:47.941 } 00:35:47.941 09:35:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:47.941 09:35:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:47.941 09:35:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:47.941 09:35:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:47.941 | select(.opcode=="crc32c") 00:35:47.941 | "\(.module_name) \(.executed)"' 00:35:47.941 09:35:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:48.199 09:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:48.199 09:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:48.199 09:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:48.199 09:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:48.199 09:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3130796 00:35:48.199 09:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3130796 ']' 00:35:48.199 09:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3130796 00:35:48.199 09:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:35:48.199 09:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:48.199 09:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3130796 00:35:48.199 09:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:48.199 09:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:35:48.199 09:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3130796' 00:35:48.199 killing process with pid 3130796 00:35:48.199 09:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3130796 00:35:48.199 Received shutdown signal, test time was about 2.000000 seconds 00:35:48.199 00:35:48.199 Latency(us) 00:35:48.199 [2024-11-17T08:35:53.212Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:48.199 [2024-11-17T08:35:53.212Z] =================================================================================================================== 00:35:48.199 [2024-11-17T08:35:53.212Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:48.199 09:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3130796 00:35:49.132 09:35:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 3128642 00:35:49.132 09:35:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3128642 ']' 00:35:49.132 09:35:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3128642 00:35:49.132 09:35:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:35:49.132 09:35:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:49.132 09:35:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3128642 00:35:49.132 09:35:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:49.132 09:35:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:49.132 09:35:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3128642' 00:35:49.132 killing process with pid 3128642 00:35:49.132 09:35:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3128642 00:35:49.132 09:35:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3128642 00:35:50.505 00:35:50.505 real 0m24.797s 00:35:50.505 user 0m48.748s 00:35:50.505 sys 0m4.739s 00:35:50.505 09:35:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:50.505 09:35:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:50.505 ************************************ 00:35:50.505 END TEST nvmf_digest_clean 00:35:50.505 ************************************ 00:35:50.505 09:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:35:50.505 09:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:50.505 09:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:50.505 09:35:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:50.505 ************************************ 00:35:50.505 START TEST nvmf_digest_error 00:35:50.505 ************************************ 00:35:50.505 09:35:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # 
run_digest_error 00:35:50.505 09:35:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:35:50.505 09:35:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:50.505 09:35:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:50.505 09:35:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:50.505 09:35:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=3131620 00:35:50.505 09:35:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:35:50.505 09:35:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 3131620 00:35:50.505 09:35:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3131620 ']' 00:35:50.505 09:35:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:50.505 09:35:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:50.505 09:35:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:50.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:50.505 09:35:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:50.505 09:35:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:50.505 [2024-11-17 09:35:55.423567] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:35:50.505 [2024-11-17 09:35:55.423729] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:50.763 [2024-11-17 09:35:55.574850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:50.763 [2024-11-17 09:35:55.696225] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:50.763 [2024-11-17 09:35:55.696310] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:50.763 [2024-11-17 09:35:55.696331] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:50.763 [2024-11-17 09:35:55.696373] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:50.763 [2024-11-17 09:35:55.696404] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
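The target above is launched with --wait-for-rpc, so framework initialization is deferred until an RPC explicitly resumes it, and the waitforlisten step simply polls the RPC socket until the application answers. A minimal sketch of that pattern, reusing the binary and socket paths shown in the trace (the netns wrapper is omitted and the poll loop is illustrative, not the autotest_common.sh helper):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --wait-for-rpc &
    tgt_pid=$!
    # poll until the RPC server on /var/tmp/spdk.sock responds (stand-in for waitforlisten)
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$tgt_pid" || exit 1    # give up if the target already died
        sleep 0.5
    done
    # pre-init RPCs (e.g. accel_assign_opc below) go here, then resume startup
    "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock framework_start_init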
00:35:50.763 [2024-11-17 09:35:55.697820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:51.697 09:35:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:51.697 09:35:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:35:51.697 09:35:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:51.697 09:35:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:51.697 09:35:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:51.697 09:35:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:51.697 09:35:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:35:51.697 09:35:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:51.697 09:35:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:51.697 [2024-11-17 09:35:56.460622] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:35:51.697 09:35:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:51.697 09:35:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:35:51.697 09:35:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:35:51.697 09:35:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:51.697 09:35:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:51.955 null0 00:35:51.955 [2024-11-17 09:35:56.824445] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:51.955 [2024-11-17 09:35:56.848784] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:51.955 09:35:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:51.955 09:35:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:35:51.955 09:35:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:51.955 09:35:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:35:51.955 09:35:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:35:51.955 09:35:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:35:51.955 09:35:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3131773 00:35:51.955 09:35:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:35:51.955 09:35:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3131773 /var/tmp/bperf.sock 00:35:51.955 09:35:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3131773 ']' 
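The "TCP Transport Init" and listener notices above correspond to the usual target-side configuration RPCs. Roughly, as a sketch rather than the digest.sh source, assuming the cnode1 NQN used by the attach step below and an illustrative null-bdev geometry:

    rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
    $rpc bdev_null_create null0 100 4096                        # size/block size illustrative
    $rpc nvmf_create_transport -t tcp
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a    # -a: allow any host
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420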
00:35:51.955 09:35:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:51.955 09:35:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:51.955 09:35:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:51.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:51.955 09:35:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:51.955 09:35:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:51.955 [2024-11-17 09:35:56.935462] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:35:51.955 [2024-11-17 09:35:56.935613] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3131773 ] 00:35:52.213 [2024-11-17 09:35:57.076146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:52.213 [2024-11-17 09:35:57.211585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:53.144 09:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:53.144 09:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:35:53.144 09:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:53.144 09:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:53.401 09:35:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:53.401 09:35:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.401 09:35:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:53.401 09:35:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.401 09:35:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:53.401 09:35:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:53.659 nvme0n1 00:35:53.659 09:35:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:35:53.659 09:35:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.659 09:35:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 
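Condensed, the initiator-side sequence that produces the digest-error flood below is the following (binary paths, socket, and RPC arguments exactly as traced above; a sketch rather than the digest.sh functions, with the intermediate "-t disable" injection step omitted):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    BPERF=/var/tmp/bperf.sock
    "$SPDK/build/examples/bdevperf" -m 2 -r $BPERF -w randread -o 4096 -t 2 -q 128 -z &
    "$SPDK/scripts/rpc.py" -s $BPERF bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # data digest enabled, so every TCP data PDU carries a CRC32C the error module can corrupt
    "$SPDK/scripts/rpc.py" -s $BPERF bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # corrupt the next 256 crc32c operations; each bad digest surfaces as a transient
    # transport error (00/22) completion, retried thanks to --bdev-retry-count -1
    "$SPDK/scripts/rpc.py" -s $BPERF accel_error_inject_error -o crc32c -t corrupt -i 256
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s $BPERF perform_tests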
00:35:53.659 09:35:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.659 09:35:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:53.659 09:35:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:53.917 Running I/O for 2 seconds... 00:35:53.917 [2024-11-17 09:35:58.776349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:53.917 [2024-11-17 09:35:58.776484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.917 [2024-11-17 09:35:58.776518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.917 [2024-11-17 09:35:58.793555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:53.917 [2024-11-17 09:35:58.793613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:17856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.917 [2024-11-17 09:35:58.793641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.917 [2024-11-17 09:35:58.813232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:53.917 [2024-11-17 09:35:58.813282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:11477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.917 [2024-11-17 09:35:58.813310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.917 [2024-11-17 09:35:58.834910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:53.917 [2024-11-17 09:35:58.834960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.917 [2024-11-17 09:35:58.834990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.917 [2024-11-17 09:35:58.853664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:53.917 [2024-11-17 09:35:58.853709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:8205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.917 [2024-11-17 09:35:58.853736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.918 [2024-11-17 09:35:58.870547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:53.918 [2024-11-17 09:35:58.870605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:6042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.918 [2024-11-17 09:35:58.870633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.918 [2024-11-17 09:35:58.887145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:53.918 [2024-11-17 09:35:58.887185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:3916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.918 [2024-11-17 09:35:58.887228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.918 [2024-11-17 09:35:58.907695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:53.918 [2024-11-17 09:35:58.907755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:1454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.918 [2024-11-17 09:35:58.907785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.918 [2024-11-17 09:35:58.925469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:53.918 [2024-11-17 09:35:58.925527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:9515 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.918 [2024-11-17 09:35:58.925554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:54.176 [2024-11-17 09:35:58.942263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:54.176 [2024-11-17 09:35:58.942311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:1076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.176 [2024-11-17 09:35:58.942357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:54.176 [2024-11-17 09:35:58.959752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:54.176 [2024-11-17 09:35:58.959800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:2508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.176 [2024-11-17 09:35:58.959829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:54.176 [2024-11-17 09:35:58.978927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:54.176 [2024-11-17 09:35:58.978975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:5342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.176 [2024-11-17 09:35:58.979005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:54.176 [2024-11-17 09:35:58.999144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:54.176 [2024-11-17 09:35:58.999192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.176 [2024-11-17 
09:35:58.999222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:54.176 [2024-11-17 09:35:59.015286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:54.176 [2024-11-17 09:35:59.015333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.176 [2024-11-17 09:35:59.015363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:54.176 [2024-11-17 09:35:59.036030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:54.176 [2024-11-17 09:35:59.036078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:52 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.176 [2024-11-17 09:35:59.036108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:54.176 [2024-11-17 09:35:59.050810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:54.176 [2024-11-17 09:35:59.050858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:11832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.176 [2024-11-17 09:35:59.050895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:54.176 [2024-11-17 09:35:59.071600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:54.176 [2024-11-17 09:35:59.071641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:17768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.176 [2024-11-17 09:35:59.071666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:54.176 [2024-11-17 09:35:59.090470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:54.176 [2024-11-17 09:35:59.090527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.176 [2024-11-17 09:35:59.090553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:54.176 [2024-11-17 09:35:59.110745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:54.176 [2024-11-17 09:35:59.110793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:16660 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.176 [2024-11-17 09:35:59.110823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:54.176 [2024-11-17 09:35:59.128842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:54.176 [2024-11-17 09:35:59.128889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 
nsid:1 lba:18341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.176 [2024-11-17 09:35:59.128918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:54.176 [2024-11-17 09:35:59.146987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:54.176 [2024-11-17 09:35:59.147035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.176 [2024-11-17 09:35:59.147064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:54.176 [2024-11-17 09:35:59.164577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:54.176 [2024-11-17 09:35:59.164632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:2907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.176 [2024-11-17 09:35:59.164658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:54.176 [2024-11-17 09:35:59.183538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:54.176 [2024-11-17 09:35:59.183581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:25516 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.176 [2024-11-17 09:35:59.183608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:54.433 [2024-11-17 09:35:59.197983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:54.433 [2024-11-17 09:35:59.198031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:21824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.433 [2024-11-17 09:35:59.198061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:54.433 [2024-11-17 09:35:59.219453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:54.433 [2024-11-17 09:35:59.219509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.433 [2024-11-17 09:35:59.219536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:54.433 [2024-11-17 09:35:59.233746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:54.433 [2024-11-17 09:35:59.233794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:20923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.433 [2024-11-17 09:35:59.233824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:54.433 [2024-11-17 09:35:59.253936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:54.433 [2024-11-17 
09:35:59.253985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.433 [2024-11-17 09:35:59.254014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:54.433 [2024-11-17 09:35:59.274883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:54.433 [2024-11-17 09:35:59.274932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:12118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.433 [2024-11-17 09:35:59.274962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:54.433 [2024-11-17 09:35:59.292946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:54.433 [2024-11-17 09:35:59.292994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:9234 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.433 [2024-11-17 09:35:59.293023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:54.433 [2024-11-17 09:35:59.310695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:54.433 [2024-11-17 09:35:59.310744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:20269 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.433 [2024-11-17 09:35:59.310774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:54.433 [2024-11-17 09:35:59.329868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:54.433 [2024-11-17 09:35:59.329916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19056 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.433 [2024-11-17 09:35:59.329945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:54.433 [2024-11-17 09:35:59.344561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:54.433 [2024-11-17 09:35:59.344616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:21602 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.433 [2024-11-17 09:35:59.344644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:54.433 [2024-11-17 09:35:59.363781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:54.433 [2024-11-17 09:35:59.363828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.433 [2024-11-17 09:35:59.363866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:54.433 [2024-11-17 09:35:59.382388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:54.433 [2024-11-17 09:35:59.382458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:10885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.433 [2024-11-17 09:35:59.382498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:54.433 [2024-11-17 09:35:59.400090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:54.433 [2024-11-17 09:35:59.400139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.433 [2024-11-17 09:35:59.400168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:54.433 [2024-11-17 09:35:59.418083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:54.433 [2024-11-17 09:35:59.418131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:4879 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.433 [2024-11-17 09:35:59.418159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:54.433 [2024-11-17 09:35:59.439616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:54.433 [2024-11-17 09:35:59.439657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.433 [2024-11-17 09:35:59.439699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:54.690 [2024-11-17 09:35:59.455147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:54.690 [2024-11-17 09:35:59.455194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.690 [2024-11-17 09:35:59.455224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:54.690 [2024-11-17 09:35:59.473334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:54.690 [2024-11-17 09:35:59.473390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.690 [2024-11-17 09:35:59.473437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:54.690 [2024-11-17 09:35:59.491654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:54.690 [2024-11-17 09:35:59.491701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:6201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.690 [2024-11-17 09:35:59.491730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:54.690 
[2024-11-17 09:35:59.511036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:54.690 [2024-11-17 09:35:59.511084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:20567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.690 [2024-11-17 09:35:59.511114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:54.690 [2024-11-17 09:35:59.531520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:54.690 [2024-11-17 09:35:59.531562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.690 [2024-11-17 09:35:59.531588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:54.690 [2024-11-17 09:35:59.548327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:54.690 [2024-11-17 09:35:59.548382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:18483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.690 [2024-11-17 09:35:59.548413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:54.690 [2024-11-17 09:35:59.566915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:54.690 [2024-11-17 09:35:59.566962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:11500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.690 [2024-11-17 09:35:59.566992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:54.690 [2024-11-17 09:35:59.583980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:54.690 [2024-11-17 09:35:59.584027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:15772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.690 [2024-11-17 09:35:59.584056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:54.690 [2024-11-17 09:35:59.602165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:54.690 [2024-11-17 09:35:59.602213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:22531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.690 [2024-11-17 09:35:59.602242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:54.690 [2024-11-17 09:35:59.620716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:54.690 [2024-11-17 09:35:59.620758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:24525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.690 [2024-11-17 09:35:59.620784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:54.690 [2024-11-17 09:35:59.637328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:54.690 [2024-11-17 09:35:59.637384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:9314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.690 [2024-11-17 09:35:59.637430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:54.690 [2024-11-17 09:35:59.655297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:54.690 [2024-11-17 09:35:59.655344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:19032 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.690 [2024-11-17 09:35:59.655384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:54.690 [2024-11-17 09:35:59.674118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:54.690 [2024-11-17 09:35:59.674166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:15604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.690 [2024-11-17 09:35:59.674203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:54.690 [2024-11-17 09:35:59.693338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:54.690 [2024-11-17 09:35:59.693394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:6939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.690 [2024-11-17 09:35:59.693438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:54.948 [2024-11-17 09:35:59.711893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:54.948 [2024-11-17 09:35:59.711941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:11784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.948 [2024-11-17 09:35:59.711970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:54.948 [2024-11-17 09:35:59.728888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:54.948 [2024-11-17 09:35:59.728951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:17201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.948 [2024-11-17 09:35:59.728981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:54.948 [2024-11-17 09:35:59.744907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:54.948 [2024-11-17 09:35:59.744955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:2493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.948 
[2024-11-17 09:35:59.744984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:54.948 13783.00 IOPS, 53.84 MiB/s [2024-11-17T08:35:59.961Z] [2024-11-17 09:35:59.765028] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:54.948 [2024-11-17 09:35:59.765078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:17118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.948 [2024-11-17 09:35:59.765107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:54.948 [2024-11-17 09:35:59.786375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:54.948 [2024-11-17 09:35:59.786446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:24441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.948 [2024-11-17 09:35:59.786472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:54.948 [2024-11-17 09:35:59.807578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:54.948 [2024-11-17 09:35:59.807634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.948 [2024-11-17 09:35:59.807686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:54.948 [2024-11-17 09:35:59.830300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:54.949 [2024-11-17 09:35:59.830349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:13905 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.949 [2024-11-17 09:35:59.830389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:54.949 [2024-11-17 09:35:59.852578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:54.949 [2024-11-17 09:35:59.852621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.949 [2024-11-17 09:35:59.852661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:54.949 [2024-11-17 09:35:59.868764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:54.949 [2024-11-17 09:35:59.868810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:7808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.949 [2024-11-17 09:35:59.868836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:54.949 [2024-11-17 09:35:59.890048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:54.949 [2024-11-17 09:35:59.890096] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:5196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.949 [2024-11-17 09:35:59.890125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:54.949 [2024-11-17 09:35:59.907076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:54.949 [2024-11-17 09:35:59.907125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:13779 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.949 [2024-11-17 09:35:59.907156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:54.949 [2024-11-17 09:35:59.926103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:54.949 [2024-11-17 09:35:59.926150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.949 [2024-11-17 09:35:59.926181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:54.949 [2024-11-17 09:35:59.944990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:54.949 [2024-11-17 09:35:59.945037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:18201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.949 [2024-11-17 09:35:59.945067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:55.207 [2024-11-17 09:35:59.961789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:55.207 [2024-11-17 09:35:59.961836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:7244 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.207 [2024-11-17 09:35:59.961865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:55.207 [2024-11-17 09:35:59.978892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:55.207 [2024-11-17 09:35:59.978940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:12439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.207 [2024-11-17 09:35:59.978969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:55.207 [2024-11-17 09:35:59.997593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:55.207 [2024-11-17 09:35:59.997648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.207 [2024-11-17 09:35:59.997685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:55.207 [2024-11-17 09:36:00.018908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6150001f2a00) 00:35:55.207 [2024-11-17 09:36:00.018996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:16752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.207 [2024-11-17 09:36:00.019027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:55.207 [2024-11-17 09:36:00.044701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:55.207 [2024-11-17 09:36:00.044781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:9191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.207 [2024-11-17 09:36:00.044812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:55.207 [2024-11-17 09:36:00.069037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:55.207 [2024-11-17 09:36:00.069089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:6687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.207 [2024-11-17 09:36:00.069119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:55.207 [2024-11-17 09:36:00.090188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:55.207 [2024-11-17 09:36:00.090238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:9583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.207 [2024-11-17 09:36:00.090268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:55.207 [2024-11-17 09:36:00.108569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:55.207 [2024-11-17 09:36:00.108613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.207 [2024-11-17 09:36:00.108641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:55.207 [2024-11-17 09:36:00.126405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:55.207 [2024-11-17 09:36:00.126463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:10806 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.207 [2024-11-17 09:36:00.126488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:55.207 [2024-11-17 09:36:00.144995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:55.207 [2024-11-17 09:36:00.145044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:21280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.207 [2024-11-17 09:36:00.145076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:55.207 [2024-11-17 09:36:00.164460] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:55.207 [2024-11-17 09:36:00.164504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:1724 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.207 [2024-11-17 09:36:00.164531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:55.207 [2024-11-17 09:36:00.183668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:55.207 [2024-11-17 09:36:00.183729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:22771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.207 [2024-11-17 09:36:00.183759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:55.207 [2024-11-17 09:36:00.199228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:55.207 [2024-11-17 09:36:00.199276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:19456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.207 [2024-11-17 09:36:00.199307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:55.465 [2024-11-17 09:36:00.218278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:55.465 [2024-11-17 09:36:00.218323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:10110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.465 [2024-11-17 09:36:00.218350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:55.465 [2024-11-17 09:36:00.238362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:55.465 [2024-11-17 09:36:00.238452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:23402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.465 [2024-11-17 09:36:00.238485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:55.465 [2024-11-17 09:36:00.254873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:55.465 [2024-11-17 09:36:00.254923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:3431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.465 [2024-11-17 09:36:00.254953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:55.465 [2024-11-17 09:36:00.275932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:55.465 [2024-11-17 09:36:00.275982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:21947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.465 [2024-11-17 09:36:00.276012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:55.465 [2024-11-17 09:36:00.294328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:55.465 [2024-11-17 09:36:00.294387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:17483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.465 [2024-11-17 09:36:00.294418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:55.465 [2024-11-17 09:36:00.311569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:55.465 [2024-11-17 09:36:00.311627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:17670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.465 [2024-11-17 09:36:00.311651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:55.465 [2024-11-17 09:36:00.330103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:55.465 [2024-11-17 09:36:00.330154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:5670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.466 [2024-11-17 09:36:00.330192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:55.466 [2024-11-17 09:36:00.350119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:55.466 [2024-11-17 09:36:00.350181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:13713 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.466 [2024-11-17 09:36:00.350210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:55.466 [2024-11-17 09:36:00.367671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:55.466 [2024-11-17 09:36:00.367738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:10699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.466 [2024-11-17 09:36:00.367770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:55.466 [2024-11-17 09:36:00.387318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:55.466 [2024-11-17 09:36:00.387385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:3539 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.466 [2024-11-17 09:36:00.387432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:55.466 [2024-11-17 09:36:00.405309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:55.466 [2024-11-17 09:36:00.405380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:14925 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.466 [2024-11-17 09:36:00.405427] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:55.466 [2024-11-17 09:36:00.426166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:55.466 [2024-11-17 09:36:00.426219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:13657 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.466 [2024-11-17 09:36:00.426247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:55.466 [2024-11-17 09:36:00.441468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:55.466 [2024-11-17 09:36:00.441514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:14706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.466 [2024-11-17 09:36:00.441542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:55.466 [2024-11-17 09:36:00.460780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:55.466 [2024-11-17 09:36:00.460842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:14014 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.466 [2024-11-17 09:36:00.460883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:55.724 [2024-11-17 09:36:00.480203] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:55.724 [2024-11-17 09:36:00.480274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:6438 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.724 [2024-11-17 09:36:00.480306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:55.724 [2024-11-17 09:36:00.497526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:55.724 [2024-11-17 09:36:00.497590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:18584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.724 [2024-11-17 09:36:00.497630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:55.724 [2024-11-17 09:36:00.515821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:55.724 [2024-11-17 09:36:00.515868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.724 [2024-11-17 09:36:00.515897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:55.724 [2024-11-17 09:36:00.534142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:55.724 [2024-11-17 09:36:00.534190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:8552 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.724 [2024-11-17 09:36:00.534219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:55.724 [2024-11-17 09:36:00.554177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:55.724 [2024-11-17 09:36:00.554225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:9260 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.724 [2024-11-17 09:36:00.554254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:55.724 [2024-11-17 09:36:00.574982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:55.724 [2024-11-17 09:36:00.575031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:19080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.724 [2024-11-17 09:36:00.575060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:55.724 [2024-11-17 09:36:00.590725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:55.724 [2024-11-17 09:36:00.590772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:17139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.724 [2024-11-17 09:36:00.590802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:55.724 [2024-11-17 09:36:00.608337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:55.724 [2024-11-17 09:36:00.608398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:15904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.724 [2024-11-17 09:36:00.608441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:55.724 [2024-11-17 09:36:00.627222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:55.724 [2024-11-17 09:36:00.627271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:9484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.724 [2024-11-17 09:36:00.627300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:55.724 [2024-11-17 09:36:00.648236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:55.724 [2024-11-17 09:36:00.648284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:4113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.724 [2024-11-17 09:36:00.648321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:55.724 [2024-11-17 09:36:00.668481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:55.724 [2024-11-17 09:36:00.668524] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:11170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.724 [2024-11-17 09:36:00.668551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:55.724 [2024-11-17 09:36:00.684129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:55.724 [2024-11-17 09:36:00.684176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:9702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.724 [2024-11-17 09:36:00.684206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:55.724 [2024-11-17 09:36:00.705280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:55.724 [2024-11-17 09:36:00.705328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:19361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.724 [2024-11-17 09:36:00.705357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:55.724 [2024-11-17 09:36:00.722299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:55.724 [2024-11-17 09:36:00.722347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:25033 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.724 [2024-11-17 09:36:00.722387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:55.982 [2024-11-17 09:36:00.739620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:55.982 [2024-11-17 09:36:00.739682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:10278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.982 [2024-11-17 09:36:00.739711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:55.982 13584.00 IOPS, 53.06 MiB/s [2024-11-17T08:36:00.995Z] [2024-11-17 09:36:00.755917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:55.982 [2024-11-17 09:36:00.755965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:11369 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.982 [2024-11-17 09:36:00.755995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:55.982 00:35:55.982 Latency(us) 00:35:55.982 [2024-11-17T08:36:00.995Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:55.983 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:35:55.983 nvme0n1 : 2.05 13345.14 52.13 0.00 0.00 9393.26 4878.79 49321.91 00:35:55.983 [2024-11-17T08:36:00.996Z] =================================================================================================================== 00:35:55.983 [2024-11-17T08:36:00.996Z] Total : 13345.14 52.13 0.00 0.00 9393.26 4878.79 49321.91 
00:35:55.983 { 00:35:55.983 "results": [ 00:35:55.983 { 00:35:55.983 "job": "nvme0n1", 00:35:55.983 "core_mask": "0x2", 00:35:55.983 "workload": "randread", 00:35:55.983 "status": "finished", 00:35:55.983 "queue_depth": 128, 00:35:55.983 "io_size": 4096, 00:35:55.983 "runtime": 2.045389, 00:35:55.983 "iops": 13345.138748668347, 00:35:55.983 "mibps": 52.12944823698573, 00:35:55.983 "io_failed": 0, 00:35:55.983 "io_timeout": 0, 00:35:55.983 "avg_latency_us": 9393.259464200426, 00:35:55.983 "min_latency_us": 4878.791111111111, 00:35:55.983 "max_latency_us": 49321.90814814815 00:35:55.983 } 00:35:55.983 ], 00:35:55.983 "core_count": 1 00:35:55.983 } 00:35:55.983 09:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:55.983 09:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:55.983 09:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:55.983 09:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:55.983 | .driver_specific 00:35:55.983 | .nvme_error 00:35:55.983 | .status_code 00:35:55.983 | .command_transient_transport_error' 00:35:56.241 09:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 107 > 0 )) 00:35:56.241 09:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3131773 00:35:56.241 09:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3131773 ']' 00:35:56.241 09:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3131773 00:35:56.241 09:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:35:56.241 09:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:56.241 09:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3131773 00:35:56.241 09:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:56.241 09:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:56.241 09:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3131773' 00:35:56.241 killing process with pid 3131773 00:35:56.241 09:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3131773 00:35:56.241 Received shutdown signal, test time was about 2.000000 seconds 00:35:56.241 00:35:56.241 Latency(us) 00:35:56.241 [2024-11-17T08:36:01.254Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:56.241 [2024-11-17T08:36:01.254Z] =================================================================================================================== 00:35:56.241 [2024-11-17T08:36:01.254Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:56.241 09:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3131773 00:35:57.173 09:36:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 
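The pass/fail check traced above derives its count from bdev_get_iostat plus the jq filter shown in the trace. A minimal standalone sketch of that check, using the rpc.py path and bperf socket from this run (editorial condensation, not captured output):

# Condensed from the trace above: read the per-bdev NVMe error counters and
# extract the transient transport error count for nvme0n1.
errcount=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
    bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
# The subtest passes when at least one such error was counted (107 in the run above).
(( errcount > 0 ))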
00:35:57.173 09:36:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:57.173 09:36:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:35:57.173 09:36:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:35:57.173 09:36:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:35:57.173 09:36:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3132430 00:35:57.173 09:36:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:35:57.173 09:36:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3132430 /var/tmp/bperf.sock 00:35:57.173 09:36:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3132430 ']' 00:35:57.173 09:36:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:57.173 09:36:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:57.173 09:36:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:57.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:57.173 09:36:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:57.173 09:36:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:57.173 [2024-11-17 09:36:02.117229] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:35:57.173 [2024-11-17 09:36:02.117377] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3132430 ] 00:35:57.173 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:57.173 Zero copy mechanism will not be used. 
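For reference, the bdevperf launch traced above condenses to the following sketch; paths and flags are taken verbatim from this run, and waitforlisten is the autotest_common.sh helper seen in the trace:

# Start bdevperf on core mask 0x2 with its RPC socket on /var/tmp/bperf.sock:
# 128 KiB random reads, queue depth 16, 2-second run, -z = wait for the
# perform_tests RPC instead of starting I/O immediately.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &
bperfpid=$!                                       # 3132430 in this run
waitforlisten "$bperfpid" /var/tmp/bperf.sock     # poll until the RPC socket accepts connections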
00:35:57.431 [2024-11-17 09:36:02.263792] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:57.431 [2024-11-17 09:36:02.399515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:58.365 09:36:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:58.365 09:36:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:35:58.365 09:36:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:58.365 09:36:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:58.365 09:36:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:58.365 09:36:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.365 09:36:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:58.365 09:36:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.365 09:36:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:58.365 09:36:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:58.931 nvme0n1 00:35:58.931 09:36:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:35:58.931 09:36:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.931 09:36:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:58.931 09:36:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.931 09:36:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:58.931 09:36:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:59.190 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:59.190 Zero copy mechanism will not be used. 00:35:59.190 Running I/O for 2 seconds... 
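The setup traced above, before the 2-second run starts, condenses to roughly the following sketch. Flags, address and NQN are taken verbatim from this run; rpc_cmd is the script's other RPC helper, whose socket is not shown in this excerpt.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Host (bdevperf) side: keep per-bdev NVMe error counters and retry failed I/O
# indefinitely so transient errors do not fail the job.
$rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Clear any previous crc32c error injection, then attach the TCP controller
# with data digest enabled (--ddgst).
rpc_cmd accel_error_inject_error -o crc32c -t disable
$rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
    -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0     # creates bdev nvme0n1

# Corrupt the crc32c accel operation for 32 operations so data digest
# verification fails and is counted as a transient transport error, then
# kick off the timed run in the already-waiting bdevperf instance.
rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bperf.sock perform_tests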
00:35:59.190 [2024-11-17 09:36:03.984545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:59.190 [2024-11-17 09:36:03.984620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:59.190 [2024-11-17 09:36:03.984677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:35:59.190 [2024-11-17 09:36:03.990992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:59.191 [2024-11-17 09:36:03.991037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:59.191 [2024-11-17 09:36:03.991065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:35:59.191 [2024-11-17 09:36:03.997995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:59.191 [2024-11-17 09:36:03.998056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:59.191 [2024-11-17 09:36:03.998087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:35:59.191 [2024-11-17 09:36:04.004503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:59.191 [2024-11-17 09:36:04.004546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:59.191 [2024-11-17 09:36:04.004573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:35:59.191 [2024-11-17 09:36:04.011110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:59.191 [2024-11-17 09:36:04.011153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:59.191 [2024-11-17 09:36:04.011180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:35:59.191 [2024-11-17 09:36:04.018406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:59.191 [2024-11-17 09:36:04.018452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:59.191 [2024-11-17 09:36:04.018480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:35:59.191 [2024-11-17 09:36:04.025765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:59.191 [2024-11-17 09:36:04.025809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:59.191 [2024-11-17 09:36:04.025852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:35:59.191 [2024-11-17 09:36:04.032922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:59.191 [2024-11-17 09:36:04.032971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:59.191 [2024-11-17 09:36:04.033001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:35:59.191 [2024-11-17 09:36:04.039557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:59.191 [2024-11-17 09:36:04.039601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:59.191 [2024-11-17 09:36:04.039628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:35:59.191 [2024-11-17 09:36:04.043759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:59.191 [2024-11-17 09:36:04.043803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:59.191 [2024-11-17 09:36:04.043870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:35:59.191 [2024-11-17 09:36:04.050243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:59.191 [2024-11-17 09:36:04.050292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:59.191 [2024-11-17 09:36:04.050323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:35:59.191 [2024-11-17 09:36:04.057328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:59.191 [2024-11-17 09:36:04.057393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:59.191 [2024-11-17 09:36:04.057422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:35:59.191 [2024-11-17 09:36:04.064624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:59.191 [2024-11-17 09:36:04.064685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:59.191 [2024-11-17 09:36:04.064716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:35:59.191 [2024-11-17 09:36:04.071265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:59.191 [2024-11-17 09:36:04.071313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:59.191 
[2024-11-17 09:36:04.071344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:35:59.191 [2024-11-17 09:36:04.077707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:59.191 [2024-11-17 09:36:04.077765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:59.191 [2024-11-17 09:36:04.077793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:35:59.191 [2024-11-17 09:36:04.085107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:59.191 [2024-11-17 09:36:04.085163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:59.191 [2024-11-17 09:36:04.085191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:35:59.191 [2024-11-17 09:36:04.092341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:59.191 [2024-11-17 09:36:04.092407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:59.191 [2024-11-17 09:36:04.092436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:35:59.191 [2024-11-17 09:36:04.099986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:59.191 [2024-11-17 09:36:04.100035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:59.191 [2024-11-17 09:36:04.100065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:35:59.191 [2024-11-17 09:36:04.109039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:59.191 [2024-11-17 09:36:04.109099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:59.191 [2024-11-17 09:36:04.109125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:35:59.191 [2024-11-17 09:36:04.117877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:59.191 [2024-11-17 09:36:04.117935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:59.191 [2024-11-17 09:36:04.117962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:35:59.191 [2024-11-17 09:36:04.126503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:59.191 [2024-11-17 09:36:04.126562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:5 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:59.191 [2024-11-17 09:36:04.126590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:35:59.191 [2024-11-17 09:36:04.135303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:59.191 [2024-11-17 09:36:04.135354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:59.191 [2024-11-17 09:36:04.135397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:35:59.191 [2024-11-17 09:36:04.144101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:59.191 [2024-11-17 09:36:04.144161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:59.191 [2024-11-17 09:36:04.144192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:35:59.191 [2024-11-17 09:36:04.152848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:59.191 [2024-11-17 09:36:04.152908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:59.191 [2024-11-17 09:36:04.152935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:35:59.191 [2024-11-17 09:36:04.161531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:59.191 [2024-11-17 09:36:04.161575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:59.191 [2024-11-17 09:36:04.161602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:35:59.191 [2024-11-17 09:36:04.170296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:59.191 [2024-11-17 09:36:04.170359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:59.191 [2024-11-17 09:36:04.170401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:35:59.191 [2024-11-17 09:36:04.178964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:59.191 [2024-11-17 09:36:04.179008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:59.191 [2024-11-17 09:36:04.179061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:35:59.191 [2024-11-17 09:36:04.187767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:59.192 [2024-11-17 
09:36:04.187816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:59.192 [2024-11-17 09:36:04.187846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:35:59.192 [2024-11-17 09:36:04.196346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:59.192 [2024-11-17 09:36:04.196418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:59.192 [2024-11-17 09:36:04.196445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:35:59.451 [2024-11-17 09:36:04.205032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:59.451 [2024-11-17 09:36:04.205095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:59.451 [2024-11-17 09:36:04.205126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:35:59.451 [2024-11-17 09:36:04.213748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:59.451 [2024-11-17 09:36:04.213810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:59.451 [2024-11-17 09:36:04.213840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:35:59.451 [2024-11-17 09:36:04.222499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:59.451 [2024-11-17 09:36:04.222544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:59.451 [2024-11-17 09:36:04.222571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:35:59.451 [2024-11-17 09:36:04.230294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:59.451 [2024-11-17 09:36:04.230351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:59.451 [2024-11-17 09:36:04.230404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:35:59.451 [2024-11-17 09:36:04.237474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:59.452 [2024-11-17 09:36:04.237517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:59.452 [2024-11-17 09:36:04.237544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:35:59.452 [2024-11-17 09:36:04.243591] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:59.452 [2024-11-17 09:36:04.243635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:59.452 [2024-11-17 09:36:04.243661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:35:59.452 [2024-11-17 09:36:04.249571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:59.452 [2024-11-17 09:36:04.249622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:59.452 [2024-11-17 09:36:04.249666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:35:59.452 [2024-11-17 09:36:04.255476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:59.452 [2024-11-17 09:36:04.255519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:59.452 [2024-11-17 09:36:04.255545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:35:59.452 [2024-11-17 09:36:04.261894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:59.452 [2024-11-17 09:36:04.261937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:59.452 [2024-11-17 09:36:04.261964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:35:59.452 [2024-11-17 09:36:04.268309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:59.452 [2024-11-17 09:36:04.268358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:59.452 [2024-11-17 09:36:04.268399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:35:59.452 [2024-11-17 09:36:04.274967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:59.452 [2024-11-17 09:36:04.275023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:59.452 [2024-11-17 09:36:04.275051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:35:59.452 [2024-11-17 09:36:04.283088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:59.452 [2024-11-17 09:36:04.283146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:59.452 [2024-11-17 09:36:04.283174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:10 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:35:59.452 [2024-11-17 09:36:04.291390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:59.452 [2024-11-17 09:36:04.291455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:59.452 [2024-11-17 09:36:04.291483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:35:59.452 [2024-11-17 09:36:04.296869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:59.452 [2024-11-17 09:36:04.296930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:59.452 [2024-11-17 09:36:04.296961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:35:59.452 [2024-11-17 09:36:04.303733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:59.452 [2024-11-17 09:36:04.303774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:59.452 [2024-11-17 09:36:04.303829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:35:59.452 [2024-11-17 09:36:04.310999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:59.452 [2024-11-17 09:36:04.311043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:59.452 [2024-11-17 09:36:04.311070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:35:59.452 [2024-11-17 09:36:04.315252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:59.452 [2024-11-17 09:36:04.315295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:59.452 [2024-11-17 09:36:04.315323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:35:59.452 [2024-11-17 09:36:04.319507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:59.452 [2024-11-17 09:36:04.319549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:59.452 [2024-11-17 09:36:04.319575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:35:59.452 [2024-11-17 09:36:04.324729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:59.452 [2024-11-17 09:36:04.324771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:59.452 [2024-11-17 09:36:04.324797] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:35:59.452 [2024-11-17 09:36:04.328489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:59.452 [2024-11-17 09:36:04.328531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:59.452 [2024-11-17 09:36:04.328556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:35:59.452 [2024-11-17 09:36:04.333177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:59.452 [2024-11-17 09:36:04.333220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:59.452 [2024-11-17 09:36:04.333246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:35:59.452 [2024-11-17 09:36:04.337811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:59.452 [2024-11-17 09:36:04.337853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:59.452 [2024-11-17 09:36:04.337880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:35:59.452 [2024-11-17 09:36:04.342838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:59.452 [2024-11-17 09:36:04.342880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:59.452 [2024-11-17 09:36:04.342914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:35:59.452 [2024-11-17 09:36:04.346941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:59.452 [2024-11-17 09:36:04.347004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:59.452 [2024-11-17 09:36:04.347031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:35:59.452 [2024-11-17 09:36:04.351823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:59.452 [2024-11-17 09:36:04.351865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:59.452 [2024-11-17 09:36:04.351892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:35:59.452 [2024-11-17 09:36:04.356187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:59.452 [2024-11-17 09:36:04.356231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4960 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:35:59.452 [2024-11-17 09:36:04.356258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:35:59.452 [2024-11-17 09:36:04.361727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:59.452 [2024-11-17 09:36:04.361780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:59.452 [2024-11-17 09:36:04.361808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:35:59.452 [2024-11-17 09:36:04.366928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:59.452 [2024-11-17 09:36:04.366975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:59.452 [2024-11-17 09:36:04.367021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:35:59.452 [2024-11-17 09:36:04.371633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:59.452 [2024-11-17 09:36:04.371705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:59.452 [2024-11-17 09:36:04.371734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:35:59.452 [2024-11-17 09:36:04.376810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:59.452 [2024-11-17 09:36:04.376874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:59.452 [2024-11-17 09:36:04.376902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:35:59.452 [2024-11-17 09:36:04.383082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:59.453 [2024-11-17 09:36:04.383132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:59.453 [2024-11-17 09:36:04.383161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:35:59.453 [2024-11-17 09:36:04.389760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:59.453 [2024-11-17 09:36:04.389822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:59.453 [2024-11-17 09:36:04.389852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:35:59.453 [2024-11-17 09:36:04.398572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:59.453 [2024-11-17 09:36:04.398618] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:59.453 [2024-11-17 09:36:04.398645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:35:59.453 [2024-11-17 09:36:04.407799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:59.453 [2024-11-17 09:36:04.407843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:59.453 [2024-11-17 09:36:04.407870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:35:59.453 [2024-11-17 09:36:04.416599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:59.453 [2024-11-17 09:36:04.416643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:59.453 [2024-11-17 09:36:04.416696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:35:59.453 [2024-11-17 09:36:04.425499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:59.453 [2024-11-17 09:36:04.425544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:59.453 [2024-11-17 09:36:04.425571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:35:59.453 [2024-11-17 09:36:04.433265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:59.453 [2024-11-17 09:36:04.433314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:59.453 [2024-11-17 09:36:04.433344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:35:59.453 [2024-11-17 09:36:04.440275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:59.453 [2024-11-17 09:36:04.440338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:59.453 [2024-11-17 09:36:04.440380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:35:59.453 [2024-11-17 09:36:04.444796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:35:59.453 [2024-11-17 09:36:04.444844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:59.453 [2024-11-17 09:36:04.444873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:35:59.453 [2024-11-17 09:36:04.452286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x6150001f2a00)
00:35:59.453 [2024-11-17 09:36:04.452338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:59.453 [2024-11-17 09:36:04.452387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
[... the same three-message sequence -- nvme_tcp.c:1365 "data digest error on tqpair=(0x6150001f2a00)", the affected READ command, and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion with dnr:0 -- repeats every few milliseconds from 09:36:04.459 through 09:36:04.973, differing only in cid and lba; only the first occurrence is kept above ...]
00:35:59.975 4484.00 IOPS, 560.50 MiB/s [2024-11-17T08:36:04.988Z]
[... the digest-error sequence resumes immediately after the throughput sample, from 09:36:04.981 through 09:36:05.013 ...]
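The repeated *ERROR* lines in this stretch of the log come from SPDK's NVMe/TCP receive path: as the callback name nvme_tcp_accel_seq_recv_compute_crc32_done suggests, the host recomputes a CRC32C over each incoming PDU's data and compares it with the DDGST the target appended; on a mismatch the I/O is completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22) and dnr:0, so it may be retried. The snippet below is only a minimal sketch of that digest check, assuming a hypothetical 512-byte payload and a plain bitwise CRC32C rather than SPDK's accel-framework implementation; it is not code from this repository, it just shows how one corrupted byte turns into a "data digest error".

/*
 * Illustrative sketch only: verify an NVMe/TCP-style data digest
 * (CRC32C over the PDU data) and report a mismatch the way the
 * log above does.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Bitwise CRC32C (Castagnoli), reflected polynomial 0x82F63B78. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
	uint32_t crc = 0xFFFFFFFFu;

	for (size_t i = 0; i < len; i++) {
		crc ^= buf[i];
		for (int k = 0; k < 8; k++) {
			crc = (crc >> 1) ^ (0x82F63B78u & (uint32_t)-(int32_t)(crc & 1));
		}
	}
	return crc ^ 0xFFFFFFFFu;
}

int main(void)
{
	uint8_t data[512];                            /* stand-in for the PDU DATA field        */

	memset(data, 0xA5, sizeof(data));
	uint32_t ddgst = crc32c(data, sizeof(data));  /* digest the sender would append (DDGST) */

	/* Intact payload: receiver's CRC32C matches the carried digest. */
	printf("clean payload:   %s\n",
	       crc32c(data, sizeof(data)) == ddgst ? "digest OK" : "data digest error");

	/* Flip one byte "in flight": the recomputed digest no longer matches. */
	data[100] ^= 0x01;
	printf("corrupt payload: %s\n",
	       crc32c(data, sizeof(data)) == ddgst ? "digest OK" : "data digest error");

	return 0;
}

Built with cc -std=c99, the first check prints "digest OK" and the second prints "data digest error", mirroring the error path exercised in the log.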
[... the data digest error / READ / TRANSIENT TRANSPORT ERROR (00/22) sequence continues on tqpair=(0x6150001f2a00) from 09:36:05.019 through 09:36:05.414, with varying cid and lba values; only the final occurrence is kept below ...]
00:36:00.497 [2024-11-17 09:36:05.421994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:00.497 [2024-11-17 09:36:05.422043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24576 len:32 SGL TRANSPORT
DATA BLOCK TRANSPORT 0x0 00:36:00.497 [2024-11-17 09:36:05.422073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:36:00.497 [2024-11-17 09:36:05.429191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:00.497 [2024-11-17 09:36:05.429241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.497 [2024-11-17 09:36:05.429271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:36:00.497 [2024-11-17 09:36:05.435837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:00.497 [2024-11-17 09:36:05.435886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.497 [2024-11-17 09:36:05.435917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:36:00.497 [2024-11-17 09:36:05.442839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:00.497 [2024-11-17 09:36:05.442893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.497 [2024-11-17 09:36:05.442936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:36:00.497 [2024-11-17 09:36:05.450314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:00.497 [2024-11-17 09:36:05.450359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.497 [2024-11-17 09:36:05.450397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:36:00.497 [2024-11-17 09:36:05.457698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:00.497 [2024-11-17 09:36:05.457743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.497 [2024-11-17 09:36:05.457770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:36:00.497 [2024-11-17 09:36:05.465555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:00.497 [2024-11-17 09:36:05.465601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.497 [2024-11-17 09:36:05.465629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:36:00.497 [2024-11-17 09:36:05.473181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:00.497 [2024-11-17 09:36:05.473231] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.497 [2024-11-17 09:36:05.473270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:36:00.497 [2024-11-17 09:36:05.481083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:00.497 [2024-11-17 09:36:05.481128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.497 [2024-11-17 09:36:05.481171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:36:00.497 [2024-11-17 09:36:05.488545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:00.497 [2024-11-17 09:36:05.488589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.497 [2024-11-17 09:36:05.488616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:36:00.497 [2024-11-17 09:36:05.496254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:00.497 [2024-11-17 09:36:05.496304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.497 [2024-11-17 09:36:05.496334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:36:00.497 [2024-11-17 09:36:05.503590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:00.497 [2024-11-17 09:36:05.503635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.497 [2024-11-17 09:36:05.503698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:36:00.756 [2024-11-17 09:36:05.511355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:00.756 [2024-11-17 09:36:05.511441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.756 [2024-11-17 09:36:05.511472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:36:00.756 [2024-11-17 09:36:05.517952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:00.756 [2024-11-17 09:36:05.517996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.756 [2024-11-17 09:36:05.518024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:36:00.756 [2024-11-17 09:36:05.522216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6150001f2a00) 00:36:00.756 [2024-11-17 09:36:05.522259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.756 [2024-11-17 09:36:05.522286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:36:00.756 [2024-11-17 09:36:05.527800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:00.756 [2024-11-17 09:36:05.527843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.756 [2024-11-17 09:36:05.527870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:36:00.756 [2024-11-17 09:36:05.532548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:00.756 [2024-11-17 09:36:05.532590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.756 [2024-11-17 09:36:05.532617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:36:00.756 [2024-11-17 09:36:05.537097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:00.756 [2024-11-17 09:36:05.537139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.756 [2024-11-17 09:36:05.537165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:36:00.756 [2024-11-17 09:36:05.543208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:00.756 [2024-11-17 09:36:05.543251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.756 [2024-11-17 09:36:05.543278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:36:00.756 [2024-11-17 09:36:05.549180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:00.756 [2024-11-17 09:36:05.549223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.756 [2024-11-17 09:36:05.549250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:36:00.756 [2024-11-17 09:36:05.556575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:00.756 [2024-11-17 09:36:05.556627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.756 [2024-11-17 09:36:05.556654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:36:00.756 [2024-11-17 09:36:05.562230] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:00.756 [2024-11-17 09:36:05.562274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.756 [2024-11-17 09:36:05.562301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:36:00.756 [2024-11-17 09:36:05.569247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:00.756 [2024-11-17 09:36:05.569296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.756 [2024-11-17 09:36:05.569325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:36:00.757 [2024-11-17 09:36:05.576273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:00.757 [2024-11-17 09:36:05.576314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.757 [2024-11-17 09:36:05.576339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:36:00.757 [2024-11-17 09:36:05.583451] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:00.757 [2024-11-17 09:36:05.583496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.757 [2024-11-17 09:36:05.583523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:36:00.757 [2024-11-17 09:36:05.590378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:00.757 [2024-11-17 09:36:05.590450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.757 [2024-11-17 09:36:05.590476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:36:00.757 [2024-11-17 09:36:05.597226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:00.757 [2024-11-17 09:36:05.597268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.757 [2024-11-17 09:36:05.597294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:36:00.757 [2024-11-17 09:36:05.603952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:00.757 [2024-11-17 09:36:05.603994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.757 [2024-11-17 09:36:05.604021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:11 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:36:00.757 [2024-11-17 09:36:05.611069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:00.757 [2024-11-17 09:36:05.611117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.757 [2024-11-17 09:36:05.611161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:36:00.757 [2024-11-17 09:36:05.618181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:00.757 [2024-11-17 09:36:05.618229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.757 [2024-11-17 09:36:05.618258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:36:00.757 [2024-11-17 09:36:05.625587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:00.757 [2024-11-17 09:36:05.625630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.757 [2024-11-17 09:36:05.625657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:36:00.757 [2024-11-17 09:36:05.632049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:00.757 [2024-11-17 09:36:05.632097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.757 [2024-11-17 09:36:05.632127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:36:00.757 [2024-11-17 09:36:05.639044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:00.757 [2024-11-17 09:36:05.639092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.757 [2024-11-17 09:36:05.639121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:36:00.757 [2024-11-17 09:36:05.647633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:00.757 [2024-11-17 09:36:05.647707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.757 [2024-11-17 09:36:05.647736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:36:00.757 [2024-11-17 09:36:05.655383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:00.757 [2024-11-17 09:36:05.655445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.757 [2024-11-17 09:36:05.655473] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:36:00.757 [2024-11-17 09:36:05.662543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:00.757 [2024-11-17 09:36:05.662586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.757 [2024-11-17 09:36:05.662613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:36:00.757 [2024-11-17 09:36:05.669057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:00.757 [2024-11-17 09:36:05.669100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.757 [2024-11-17 09:36:05.669127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:36:00.757 [2024-11-17 09:36:05.674800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:00.757 [2024-11-17 09:36:05.674849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.757 [2024-11-17 09:36:05.674876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:36:00.757 [2024-11-17 09:36:05.680692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:00.757 [2024-11-17 09:36:05.680733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.757 [2024-11-17 09:36:05.680760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:36:00.757 [2024-11-17 09:36:05.686496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:00.757 [2024-11-17 09:36:05.686537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.757 [2024-11-17 09:36:05.686562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:36:00.757 [2024-11-17 09:36:05.692324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:00.757 [2024-11-17 09:36:05.692382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.757 [2024-11-17 09:36:05.692429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:36:00.757 [2024-11-17 09:36:05.699340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:00.757 [2024-11-17 09:36:05.699408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:36:00.757 [2024-11-17 09:36:05.699435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:36:00.757 [2024-11-17 09:36:05.706703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:00.757 [2024-11-17 09:36:05.706752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.757 [2024-11-17 09:36:05.706782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:36:00.757 [2024-11-17 09:36:05.714338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:00.757 [2024-11-17 09:36:05.714412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.757 [2024-11-17 09:36:05.714440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:36:00.757 [2024-11-17 09:36:05.721614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:00.757 [2024-11-17 09:36:05.721676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.757 [2024-11-17 09:36:05.721707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:36:00.757 [2024-11-17 09:36:05.729048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:00.757 [2024-11-17 09:36:05.729098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.757 [2024-11-17 09:36:05.729129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:36:00.757 [2024-11-17 09:36:05.737548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:00.757 [2024-11-17 09:36:05.737592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.757 [2024-11-17 09:36:05.737619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:36:00.757 [2024-11-17 09:36:05.745166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:00.757 [2024-11-17 09:36:05.745215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.757 [2024-11-17 09:36:05.745244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:36:00.757 [2024-11-17 09:36:05.752469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:00.757 [2024-11-17 09:36:05.752512] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.757 [2024-11-17 09:36:05.752539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:36:00.757 [2024-11-17 09:36:05.760032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:00.757 [2024-11-17 09:36:05.760080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.758 [2024-11-17 09:36:05.760111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:36:01.017 [2024-11-17 09:36:05.767876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:01.017 [2024-11-17 09:36:05.767940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.017 [2024-11-17 09:36:05.767968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:36:01.017 [2024-11-17 09:36:05.774903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:01.017 [2024-11-17 09:36:05.774952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.017 [2024-11-17 09:36:05.774982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:36:01.017 [2024-11-17 09:36:05.782948] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:01.017 [2024-11-17 09:36:05.782997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.017 [2024-11-17 09:36:05.783027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:36:01.017 [2024-11-17 09:36:05.791787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:01.017 [2024-11-17 09:36:05.791836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.017 [2024-11-17 09:36:05.791866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:36:01.017 [2024-11-17 09:36:05.800323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:01.017 [2024-11-17 09:36:05.800384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.017 [2024-11-17 09:36:05.800432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:36:01.017 [2024-11-17 09:36:05.808807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6150001f2a00) 00:36:01.017 [2024-11-17 09:36:05.808856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.017 [2024-11-17 09:36:05.808886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:36:01.017 [2024-11-17 09:36:05.817594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:01.017 [2024-11-17 09:36:05.817639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.017 [2024-11-17 09:36:05.817681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:36:01.017 [2024-11-17 09:36:05.826971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:01.017 [2024-11-17 09:36:05.827015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.017 [2024-11-17 09:36:05.827043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:36:01.017 [2024-11-17 09:36:05.836144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:01.017 [2024-11-17 09:36:05.836187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.017 [2024-11-17 09:36:05.836215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:36:01.017 [2024-11-17 09:36:05.845138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:01.017 [2024-11-17 09:36:05.845187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.017 [2024-11-17 09:36:05.845216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:36:01.017 [2024-11-17 09:36:05.852744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:01.017 [2024-11-17 09:36:05.852788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.017 [2024-11-17 09:36:05.852817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:36:01.017 [2024-11-17 09:36:05.859276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:01.017 [2024-11-17 09:36:05.859339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.017 [2024-11-17 09:36:05.859377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:36:01.017 [2024-11-17 09:36:05.866621] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:01.017 [2024-11-17 09:36:05.866682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.017 [2024-11-17 09:36:05.866713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:36:01.017 [2024-11-17 09:36:05.874106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:01.017 [2024-11-17 09:36:05.874155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.017 [2024-11-17 09:36:05.874184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:36:01.017 [2024-11-17 09:36:05.881243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:01.017 [2024-11-17 09:36:05.881304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.017 [2024-11-17 09:36:05.881334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:36:01.017 [2024-11-17 09:36:05.888277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:01.017 [2024-11-17 09:36:05.888326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.017 [2024-11-17 09:36:05.888355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:36:01.017 [2024-11-17 09:36:05.894555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:01.017 [2024-11-17 09:36:05.894598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.017 [2024-11-17 09:36:05.894626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:36:01.017 [2024-11-17 09:36:05.898536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:01.017 [2024-11-17 09:36:05.898579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.017 [2024-11-17 09:36:05.898605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:36:01.017 [2024-11-17 09:36:05.903972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:01.017 [2024-11-17 09:36:05.904015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.018 [2024-11-17 09:36:05.904042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:11 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:36:01.018 [2024-11-17 09:36:05.908785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:01.018 [2024-11-17 09:36:05.908827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.018 [2024-11-17 09:36:05.908854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:36:01.018 [2024-11-17 09:36:05.912440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:01.018 [2024-11-17 09:36:05.912481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.018 [2024-11-17 09:36:05.912507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:36:01.018 [2024-11-17 09:36:05.917595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:01.018 [2024-11-17 09:36:05.917645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.018 [2024-11-17 09:36:05.917674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:36:01.018 [2024-11-17 09:36:05.922476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:01.018 [2024-11-17 09:36:05.922519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.018 [2024-11-17 09:36:05.922545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:36:01.018 [2024-11-17 09:36:05.927277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:01.018 [2024-11-17 09:36:05.927323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.018 [2024-11-17 09:36:05.927351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:36:01.018 [2024-11-17 09:36:05.933109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:01.018 [2024-11-17 09:36:05.933155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.018 [2024-11-17 09:36:05.933185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:36:01.018 [2024-11-17 09:36:05.938880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:01.018 [2024-11-17 09:36:05.938923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.018 [2024-11-17 09:36:05.938949] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:36:01.018 [2024-11-17 09:36:05.944614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:01.018 [2024-11-17 09:36:05.944656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.018 [2024-11-17 09:36:05.944698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:36:01.018 [2024-11-17 09:36:05.950690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:01.018 [2024-11-17 09:36:05.950732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.018 [2024-11-17 09:36:05.950758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:36:01.018 [2024-11-17 09:36:05.957219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:01.018 [2024-11-17 09:36:05.957262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.018 [2024-11-17 09:36:05.957289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:36:01.018 [2024-11-17 09:36:05.963569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:01.018 [2024-11-17 09:36:05.963611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.018 [2024-11-17 09:36:05.963639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:36:01.018 [2024-11-17 09:36:05.970234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:01.018 [2024-11-17 09:36:05.970276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.018 [2024-11-17 09:36:05.970304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:36:01.018 [2024-11-17 09:36:05.978961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:01.018 [2024-11-17 09:36:05.979010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.018 [2024-11-17 09:36:05.979039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:36:01.018 4535.50 IOPS, 566.94 MiB/s 00:36:01.018 Latency(us) 00:36:01.018 [2024-11-17T08:36:06.031Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:01.018 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:36:01.018 nvme0n1 : 2.00 4537.35 
567.17 0.00 0.00 3518.55 983.04 9514.86 00:36:01.018 [2024-11-17T08:36:06.031Z] =================================================================================================================== 00:36:01.018 [2024-11-17T08:36:06.031Z] Total : 4537.35 567.17 0.00 0.00 3518.55 983.04 9514.86 00:36:01.018 { 00:36:01.018 "results": [ 00:36:01.018 { 00:36:01.018 "job": "nvme0n1", 00:36:01.018 "core_mask": "0x2", 00:36:01.018 "workload": "randread", 00:36:01.018 "status": "finished", 00:36:01.018 "queue_depth": 16, 00:36:01.018 "io_size": 131072, 00:36:01.018 "runtime": 2.003373, 00:36:01.018 "iops": 4537.347762997704, 00:36:01.018 "mibps": 567.168470374713, 00:36:01.018 "io_failed": 0, 00:36:01.018 "io_timeout": 0, 00:36:01.018 "avg_latency_us": 3518.5452758016545, 00:36:01.018 "min_latency_us": 983.04, 00:36:01.018 "max_latency_us": 9514.856296296297 00:36:01.018 } 00:36:01.018 ], 00:36:01.018 "core_count": 1 00:36:01.018 } 00:36:01.018 09:36:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:36:01.018 09:36:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:36:01.018 09:36:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:36:01.018 09:36:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:36:01.018 | .driver_specific 00:36:01.018 | .nvme_error 00:36:01.018 | .status_code 00:36:01.018 | .command_transient_transport_error' 00:36:01.276 09:36:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 293 > 0 )) 00:36:01.276 09:36:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3132430 00:36:01.276 09:36:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3132430 ']' 00:36:01.276 09:36:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3132430 00:36:01.276 09:36:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:36:01.276 09:36:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:01.276 09:36:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3132430 00:36:01.534 09:36:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:01.534 09:36:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:01.534 09:36:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3132430' 00:36:01.534 killing process with pid 3132430 00:36:01.534 09:36:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3132430 00:36:01.534 Received shutdown signal, test time was about 2.000000 seconds 00:36:01.534 00:36:01.534 Latency(us) 00:36:01.534 [2024-11-17T08:36:06.547Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:01.534 [2024-11-17T08:36:06.547Z] =================================================================================================================== 00:36:01.534 [2024-11-17T08:36:06.547Z] Total : 0.00 0.00 
0.00 0.00 0.00 0.00 0.00 00:36:01.534 09:36:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3132430 00:36:02.518 09:36:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:36:02.518 09:36:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:36:02.518 09:36:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:36:02.518 09:36:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:36:02.518 09:36:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:36:02.518 09:36:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3133222 00:36:02.518 09:36:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:36:02.518 09:36:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3133222 /var/tmp/bperf.sock 00:36:02.518 09:36:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3133222 ']' 00:36:02.518 09:36:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:02.518 09:36:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:02.518 09:36:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:02.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:02.518 09:36:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:02.518 09:36:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:02.518 [2024-11-17 09:36:07.282296] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:36:02.518 [2024-11-17 09:36:07.282446] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3133222 ] 00:36:02.518 [2024-11-17 09:36:07.425510] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:02.776 [2024-11-17 09:36:07.557974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:03.342 09:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:03.342 09:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:36:03.342 09:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:03.342 09:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:03.600 09:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:36:03.600 09:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.600 09:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:03.600 09:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.600 09:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:03.600 09:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:04.167 nvme0n1 00:36:04.167 09:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:36:04.167 09:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.167 09:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:04.167 09:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.167 09:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:36:04.167 09:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:04.167 Running I/O for 2 seconds... 
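(Editor's note, a minimal sketch and not part of the captured log: the trace above sets up the write-path digest error case — crc32c corruption is injected into the accel layer with accel_error_inject_error -o crc32c -t corrupt -i 256, bdevperf drives randwrite I/O over a --ddgst TCP controller, and the test later reads the transient transport error counter back through bdev_get_iostat, exactly as the get_transient_errcount trace earlier in this log shows. The snippet below re-creates that error-count query using only the RPC socket, script path, bdev name, and jq filter visible in the trace; the variable names are illustrative.)

    # Query per-bdev NVMe error counters from the bdevperf RPC socket
    # (paths, bdev name, and jq filter taken from the digest.sh trace above).
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    SOCK=/var/tmp/bperf.sock

    errcount=$("$RPC" -s "$SOCK" bdev_get_iostat -b nvme0n1 | \
      jq -r '.bdevs[0]
             | .driver_specific
             | .nvme_error
             | .status_code
             | .command_transient_transport_error')

    # The case passes only if digest corruption produced transient transport errors,
    # mirroring the "(( 293 > 0 ))" check recorded in the trace.
    (( errcount > 0 )) && echo "digest errors detected: $errcount"

(The counter is populated because bdev_nvme_set_options was called with --nvme-error-stat before attaching the controller, as shown in the trace.)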
00:36:04.167 [2024-11-17 09:36:09.100326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf1868 00:36:04.167 [2024-11-17 09:36:09.101914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:3740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:04.167 [2024-11-17 09:36:09.101981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:36:04.167 [2024-11-17 09:36:09.114472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf9b30 00:36:04.167 [2024-11-17 09:36:09.115497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:18266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:04.167 [2024-11-17 09:36:09.115538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:36:04.167 [2024-11-17 09:36:09.133399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be49b0 00:36:04.167 [2024-11-17 09:36:09.135549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:19720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:04.167 [2024-11-17 09:36:09.135590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:36:04.167 [2024-11-17 09:36:09.148994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf57b0 00:36:04.167 [2024-11-17 09:36:09.151209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:13366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:04.167 [2024-11-17 09:36:09.151250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:36:04.167 [2024-11-17 09:36:09.160018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf57b0 00:36:04.167 [2024-11-17 09:36:09.161256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:4870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:04.167 [2024-11-17 09:36:09.161295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:36:04.425 [2024-11-17 09:36:09.179233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be5ec8 00:36:04.425 [2024-11-17 09:36:09.181499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:7009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:04.425 [2024-11-17 09:36:09.181542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:36:04.425 [2024-11-17 09:36:09.191824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfb480 00:36:04.425 [2024-11-17 09:36:09.193050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:12088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:04.425 [2024-11-17 09:36:09.193105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:36:04.425 [2024-11-17 09:36:09.208049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be0ea0 00:36:04.425 [2024-11-17 09:36:09.209313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:3634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:04.425 [2024-11-17 09:36:09.209354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:36:04.425 [2024-11-17 09:36:09.227010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be9e10 00:36:04.425 [2024-11-17 09:36:09.229398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:21284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:04.425 [2024-11-17 09:36:09.229449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:36:04.425 [2024-11-17 09:36:09.238123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfbcf0 00:36:04.425 [2024-11-17 09:36:09.239307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:1723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:04.425 [2024-11-17 09:36:09.239346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:36:04.425 [2024-11-17 09:36:09.257322] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016beb760 00:36:04.425 [2024-11-17 09:36:09.259412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:24486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:04.425 [2024-11-17 09:36:09.259464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:36:04.425 [2024-11-17 09:36:09.272963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfe720 00:36:04.425 [2024-11-17 09:36:09.274973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:19384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:04.425 [2024-11-17 09:36:09.275013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:36:04.425 [2024-11-17 09:36:09.284015] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be6738 00:36:04.425 [2024-11-17 09:36:09.285100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:4181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:04.425 [2024-11-17 09:36:09.285140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:04.425 [2024-11-17 09:36:09.303870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf6cc8 00:36:04.425 [2024-11-17 09:36:09.306040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:16257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:04.425 [2024-11-17 09:36:09.306081] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:04.425 [2024-11-17 09:36:09.314822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bedd58 00:36:04.425 [2024-11-17 09:36:09.316009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:04.425 [2024-11-17 09:36:09.316064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:36:04.425 [2024-11-17 09:36:09.330176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be0a68 00:36:04.425 [2024-11-17 09:36:09.331313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:11767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:04.425 [2024-11-17 09:36:09.331354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:36:04.425 [2024-11-17 09:36:09.348570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be3060 00:36:04.425 [2024-11-17 09:36:09.350421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:20514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:04.425 [2024-11-17 09:36:09.350462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:36:04.425 [2024-11-17 09:36:09.362779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bee5c8 00:36:04.425 [2024-11-17 09:36:09.364518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:16761 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:04.425 [2024-11-17 09:36:09.364560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:36:04.425 [2024-11-17 09:36:09.379306] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf3e60 00:36:04.425 [2024-11-17 09:36:09.381131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:21813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:04.425 [2024-11-17 09:36:09.381171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:36:04.425 [2024-11-17 09:36:09.393086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be2c28 00:36:04.425 [2024-11-17 09:36:09.395313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:04.425 [2024-11-17 09:36:09.395355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:04.425 [2024-11-17 09:36:09.408755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be3d08 00:36:04.425 [2024-11-17 09:36:09.409949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:2630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:36:04.425 [2024-11-17 09:36:09.409988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:04.425 [2024-11-17 09:36:09.422901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf5be8 00:36:04.425 [2024-11-17 09:36:09.424024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:8220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:04.425 [2024-11-17 09:36:09.424062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:36:04.683 [2024-11-17 09:36:09.438793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bdf988 00:36:04.683 [2024-11-17 09:36:09.440437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:23887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:04.683 [2024-11-17 09:36:09.440478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:04.683 [2024-11-17 09:36:09.454170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf0788 00:36:04.683 [2024-11-17 09:36:09.455705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:20175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:04.683 [2024-11-17 09:36:09.455746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:04.683 [2024-11-17 09:36:09.469447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bde470 00:36:04.683 [2024-11-17 09:36:09.471117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:22190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:04.683 [2024-11-17 09:36:09.471157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:04.683 [2024-11-17 09:36:09.488402] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bedd58 00:36:04.683 [2024-11-17 09:36:09.490878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:11408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:04.683 [2024-11-17 09:36:09.490919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:36:04.683 [2024-11-17 09:36:09.499604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016beb328 00:36:04.683 [2024-11-17 09:36:09.500894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:9621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:04.683 [2024-11-17 09:36:09.500933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:36:04.683 [2024-11-17 09:36:09.518814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be4de8 00:36:04.683 [2024-11-17 09:36:09.520913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 
nsid:1 lba:1846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:04.683 [2024-11-17 09:36:09.520954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:36:04.683 [2024-11-17 09:36:09.531079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be6b70 00:36:04.683 [2024-11-17 09:36:09.532288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:23413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:04.683 [2024-11-17 09:36:09.532327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:36:04.683 [2024-11-17 09:36:09.547681] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bed4e8 00:36:04.683 [2024-11-17 09:36:09.548795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:4014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:04.684 [2024-11-17 09:36:09.548835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:36:04.684 [2024-11-17 09:36:09.562049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfe720 00:36:04.684 [2024-11-17 09:36:09.562934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:9849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:04.684 [2024-11-17 09:36:09.562973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:36:04.684 [2024-11-17 09:36:09.577623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf1868 00:36:04.684 [2024-11-17 09:36:09.579120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:14324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:04.684 [2024-11-17 09:36:09.579169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:36:04.684 [2024-11-17 09:36:09.593893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be38d0 00:36:04.684 [2024-11-17 09:36:09.595227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:7911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:04.684 [2024-11-17 09:36:09.595282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:36:04.684 [2024-11-17 09:36:09.608559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be27f0 00:36:04.684 [2024-11-17 09:36:09.610352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:6141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:04.684 [2024-11-17 09:36:09.610401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:36:04.684 [2024-11-17 09:36:09.624207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be88f8 00:36:04.684 [2024-11-17 09:36:09.625906] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:3998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:04.684 [2024-11-17 09:36:09.625962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:36:04.684 [2024-11-17 09:36:09.639816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf1430 00:36:04.684 [2024-11-17 09:36:09.641408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:9641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:04.684 [2024-11-17 09:36:09.641447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:36:04.684 [2024-11-17 09:36:09.657744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be5a90 00:36:04.684 [2024-11-17 09:36:09.660163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:8574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:04.684 [2024-11-17 09:36:09.660204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:36:04.684 [2024-11-17 09:36:09.668856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfa7d8 00:36:04.684 [2024-11-17 09:36:09.670188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:24500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:04.684 [2024-11-17 09:36:09.670227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:36:04.684 [2024-11-17 09:36:09.688017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf7100 00:36:04.684 [2024-11-17 09:36:09.690138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:20954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:04.684 [2024-11-17 09:36:09.690179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:36:04.942 [2024-11-17 09:36:09.700182] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be3060 00:36:04.942 [2024-11-17 09:36:09.701445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:04.942 [2024-11-17 09:36:09.701484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:36:04.942 [2024-11-17 09:36:09.719062] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be0ea0 00:36:04.942 [2024-11-17 09:36:09.721201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:6119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:04.942 [2024-11-17 09:36:09.721242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:36:04.942 [2024-11-17 09:36:09.731132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with 
pdu=0x200016be88f8 00:36:04.942 [2024-11-17 09:36:09.732426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:23227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:04.942 [2024-11-17 09:36:09.732465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:36:04.942 [2024-11-17 09:36:09.750013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfc998 00:36:04.942 [2024-11-17 09:36:09.752116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:21807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:04.942 [2024-11-17 09:36:09.752156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:36:04.942 [2024-11-17 09:36:09.764169] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016becc78 00:36:04.942 [2024-11-17 09:36:09.765718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:4581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:04.942 [2024-11-17 09:36:09.765758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:36:04.942 [2024-11-17 09:36:09.781262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfe2e8 00:36:04.942 [2024-11-17 09:36:09.783607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:21757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:04.942 [2024-11-17 09:36:09.783646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:36:04.943 [2024-11-17 09:36:09.792374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf3a28 00:36:04.943 [2024-11-17 09:36:09.793490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:2217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:04.943 [2024-11-17 09:36:09.793530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:04.943 [2024-11-17 09:36:09.811220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf20d8 00:36:04.943 [2024-11-17 09:36:09.813141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:11984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:04.943 [2024-11-17 09:36:09.813183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:04.943 [2024-11-17 09:36:09.825490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be3498 00:36:04.943 [2024-11-17 09:36:09.827285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:13082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:04.943 [2024-11-17 09:36:09.827325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:04.943 [2024-11-17 09:36:09.840996] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf0350 00:36:04.943 [2024-11-17 09:36:09.842579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:6090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:04.943 [2024-11-17 09:36:09.842628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:36:04.943 [2024-11-17 09:36:09.856078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016becc78 00:36:04.943 [2024-11-17 09:36:09.857864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:15046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:04.943 [2024-11-17 09:36:09.857904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:04.943 [2024-11-17 09:36:09.871508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be1710 00:36:04.943 [2024-11-17 09:36:09.873183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:17186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:04.943 [2024-11-17 09:36:09.873226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:36:04.943 [2024-11-17 09:36:09.887095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebfd0 00:36:04.943 [2024-11-17 09:36:09.888686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:20132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:04.943 [2024-11-17 09:36:09.888727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:36:04.943 [2024-11-17 09:36:09.901333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be6fa8 00:36:04.943 [2024-11-17 09:36:09.903003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:23949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:04.943 [2024-11-17 09:36:09.903044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:36:04.943 [2024-11-17 09:36:09.916695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec840 00:36:04.943 [2024-11-17 09:36:09.918002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:19977 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:04.943 [2024-11-17 09:36:09.918043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:36:04.943 [2024-11-17 09:36:09.932376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be5220 00:36:04.943 [2024-11-17 09:36:09.934058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:04.943 [2024-11-17 09:36:09.934098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:36:04.943 
[2024-11-17 09:36:09.951240] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf2510 00:36:05.201 [2024-11-17 09:36:09.953790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:10898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:05.201 [2024-11-17 09:36:09.953831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:36:05.201 [2024-11-17 09:36:09.962671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf31b8 00:36:05.201 [2024-11-17 09:36:09.964047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:21422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:05.201 [2024-11-17 09:36:09.964087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:36:05.201 [2024-11-17 09:36:09.984126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be6b70 00:36:05.201 [2024-11-17 09:36:09.986794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:21786 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:05.201 [2024-11-17 09:36:09.986838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:36:05.201 [2024-11-17 09:36:09.996235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf35f0 00:36:05.201 [2024-11-17 09:36:09.997667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:8531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:05.201 [2024-11-17 09:36:09.997706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:36:05.201 [2024-11-17 09:36:10.017364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf4f40 00:36:05.201 [2024-11-17 09:36:10.019476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:13732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:05.201 [2024-11-17 09:36:10.019524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:05.201 [2024-11-17 09:36:10.033729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be95a0 00:36:05.201 [2024-11-17 09:36:10.036334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:7626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:05.201 [2024-11-17 09:36:10.036402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:36:05.201 [2024-11-17 09:36:10.050115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf57b0 00:36:05.201 [2024-11-17 09:36:10.053102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:20048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:05.201 [2024-11-17 09:36:10.053151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:115 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:36:05.201 [2024-11-17 09:36:10.068330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016beff18 00:36:05.201 [2024-11-17 09:36:10.070525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:18923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:05.201 [2024-11-17 09:36:10.070570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:36:05.201 16141.00 IOPS, 63.05 MiB/s [2024-11-17T08:36:10.214Z] [2024-11-17 09:36:10.086947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be9e10 00:36:05.202 [2024-11-17 09:36:10.088721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:13641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:05.202 [2024-11-17 09:36:10.088762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:36:05.202 [2024-11-17 09:36:10.103260] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bff3c8 00:36:05.202 [2024-11-17 09:36:10.104970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:10157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:05.202 [2024-11-17 09:36:10.105015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:36:05.202 [2024-11-17 09:36:10.120625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016beff18 00:36:05.202 [2024-11-17 09:36:10.122347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:4531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:05.202 [2024-11-17 09:36:10.122425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:36:05.202 [2024-11-17 09:36:10.138450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf8618 00:36:05.202 [2024-11-17 09:36:10.139294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:13984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:05.202 [2024-11-17 09:36:10.139340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:36:05.202 [2024-11-17 09:36:10.157051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf8618 00:36:05.202 [2024-11-17 09:36:10.157361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:05.202 [2024-11-17 09:36:10.157429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:36:05.202 [2024-11-17 09:36:10.175481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf8618 00:36:05.202 [2024-11-17 09:36:10.175829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:05.202 
[2024-11-17 09:36:10.175872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:36:05.202 [2024-11-17 09:36:10.193698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf8618 00:36:05.202 [2024-11-17 09:36:10.194007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:05.202 [2024-11-17 09:36:10.194052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:36:05.202 [2024-11-17 09:36:10.211765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf8618 00:36:05.202 [2024-11-17 09:36:10.212041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:05.202 [2024-11-17 09:36:10.212080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:36:05.459 [2024-11-17 09:36:10.229927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf8618 00:36:05.459 [2024-11-17 09:36:10.230232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:05.459 [2024-11-17 09:36:10.230274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:36:05.459 [2024-11-17 09:36:10.248085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf8618 00:36:05.460 [2024-11-17 09:36:10.248418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:05.460 [2024-11-17 09:36:10.248458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:36:05.460 [2024-11-17 09:36:10.266295] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf8618 00:36:05.460 [2024-11-17 09:36:10.266624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:05.460 [2024-11-17 09:36:10.266662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:36:05.460 [2024-11-17 09:36:10.284478] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf8618 00:36:05.460 [2024-11-17 09:36:10.284796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:05.460 [2024-11-17 09:36:10.284845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:36:05.460 [2024-11-17 09:36:10.302635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf8618 00:36:05.460 [2024-11-17 09:36:10.302955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24660 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:36:05.460 [2024-11-17 09:36:10.302998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:36:05.460 [2024-11-17 09:36:10.320722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf8618 00:36:05.460 [2024-11-17 09:36:10.321023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:05.460 [2024-11-17 09:36:10.321064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:36:05.460 [2024-11-17 09:36:10.338855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf8618 00:36:05.460 [2024-11-17 09:36:10.339157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:05.460 [2024-11-17 09:36:10.339199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:36:05.460 [2024-11-17 09:36:10.356932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf8618 00:36:05.460 [2024-11-17 09:36:10.357228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:05.460 [2024-11-17 09:36:10.357286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:36:05.460 [2024-11-17 09:36:10.375026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf8618 00:36:05.460 [2024-11-17 09:36:10.375325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:05.460 [2024-11-17 09:36:10.375374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:36:05.460 [2024-11-17 09:36:10.393040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf8618 00:36:05.460 [2024-11-17 09:36:10.393343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:05.460 [2024-11-17 09:36:10.393394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:36:05.460 [2024-11-17 09:36:10.411087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf8618 00:36:05.460 [2024-11-17 09:36:10.411412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:05.460 [2024-11-17 09:36:10.411449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:36:05.460 [2024-11-17 09:36:10.429221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf8618 00:36:05.460 [2024-11-17 09:36:10.429548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:1 nsid:1 lba:16632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:05.460 [2024-11-17 09:36:10.429586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:36:05.460 [2024-11-17 09:36:10.447298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf8618 00:36:05.460 [2024-11-17 09:36:10.447629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:05.460 [2024-11-17 09:36:10.447665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:36:05.460 [2024-11-17 09:36:10.465657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf8618 00:36:05.460 [2024-11-17 09:36:10.465975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:05.460 [2024-11-17 09:36:10.466017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:36:05.718 [2024-11-17 09:36:10.483736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf8618 00:36:05.718 [2024-11-17 09:36:10.484042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:05.718 [2024-11-17 09:36:10.484085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:36:05.718 [2024-11-17 09:36:10.501939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf8618 00:36:05.718 [2024-11-17 09:36:10.502237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:05.718 [2024-11-17 09:36:10.502280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:36:05.718 [2024-11-17 09:36:10.520092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf8618 00:36:05.718 [2024-11-17 09:36:10.520417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:05.718 [2024-11-17 09:36:10.520456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:36:05.718 [2024-11-17 09:36:10.538303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf8618 00:36:05.718 [2024-11-17 09:36:10.538628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:05.718 [2024-11-17 09:36:10.538681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:36:05.718 [2024-11-17 09:36:10.556670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf8618 00:36:05.718 [2024-11-17 09:36:10.556977] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:05.718 [2024-11-17 09:36:10.557020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:36:05.718 [2024-11-17 09:36:10.574963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf8618 00:36:05.718 [2024-11-17 09:36:10.575263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:05.718 [2024-11-17 09:36:10.575305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:36:05.718 [2024-11-17 09:36:10.593218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf8618 00:36:05.718 [2024-11-17 09:36:10.593542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:05.718 [2024-11-17 09:36:10.593587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:36:05.718 [2024-11-17 09:36:10.611262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf8618 00:36:05.718 [2024-11-17 09:36:10.611603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:05.718 [2024-11-17 09:36:10.611641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:36:05.718 [2024-11-17 09:36:10.629435] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf8618 00:36:05.719 [2024-11-17 09:36:10.629776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:05.719 [2024-11-17 09:36:10.629818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:36:05.719 [2024-11-17 09:36:10.647537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf8618 00:36:05.719 [2024-11-17 09:36:10.647854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:05.719 [2024-11-17 09:36:10.647898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:36:05.719 [2024-11-17 09:36:10.665672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf8618 00:36:05.719 [2024-11-17 09:36:10.665974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:05.719 [2024-11-17 09:36:10.666017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:36:05.719 [2024-11-17 09:36:10.683762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf8618 00:36:05.719 
[2024-11-17 09:36:10.684066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:05.719 [2024-11-17 09:36:10.684109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:36:05.719 [2024-11-17 09:36:10.701898] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf8618 00:36:05.719 [2024-11-17 09:36:10.702199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:05.719 [2024-11-17 09:36:10.702242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:36:05.719 [2024-11-17 09:36:10.719950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf8618 00:36:05.719 [2024-11-17 09:36:10.720251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:05.719 [2024-11-17 09:36:10.720294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:36:05.977 [2024-11-17 09:36:10.737842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf8618 00:36:05.977 [2024-11-17 09:36:10.738148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:05.977 [2024-11-17 09:36:10.738190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:36:05.977 [2024-11-17 09:36:10.755945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf8618 00:36:05.977 [2024-11-17 09:36:10.756259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:05.977 [2024-11-17 09:36:10.756301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:36:05.977 [2024-11-17 09:36:10.774021] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf8618 00:36:05.977 [2024-11-17 09:36:10.774343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:05.977 [2024-11-17 09:36:10.774396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:36:05.977 [2024-11-17 09:36:10.792201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf8618 00:36:05.977 [2024-11-17 09:36:10.792549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:05.977 [2024-11-17 09:36:10.792586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:36:05.977 [2024-11-17 09:36:10.810293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000004480) with pdu=0x200016bf8618 00:36:05.977 [2024-11-17 09:36:10.810639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:05.977 [2024-11-17 09:36:10.810695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:36:05.977 [2024-11-17 09:36:10.828284] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf8618 00:36:05.977 [2024-11-17 09:36:10.828595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:05.977 [2024-11-17 09:36:10.828638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:36:05.977 [2024-11-17 09:36:10.846420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf8618 00:36:05.978 [2024-11-17 09:36:10.846736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:05.978 [2024-11-17 09:36:10.846792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:36:05.978 [2024-11-17 09:36:10.864673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf8618 00:36:05.978 [2024-11-17 09:36:10.864989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:05.978 [2024-11-17 09:36:10.865031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:36:05.978 [2024-11-17 09:36:10.882858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf8618 00:36:05.978 [2024-11-17 09:36:10.883163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:05.978 [2024-11-17 09:36:10.883207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:36:05.978 [2024-11-17 09:36:10.900849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf8618 00:36:05.978 [2024-11-17 09:36:10.901153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:05.978 [2024-11-17 09:36:10.901196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:36:05.978 [2024-11-17 09:36:10.918953] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf8618 00:36:05.978 [2024-11-17 09:36:10.919255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:05.978 [2024-11-17 09:36:10.919297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:36:05.978 [2024-11-17 09:36:10.936965] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf8618 00:36:05.978 [2024-11-17 09:36:10.937267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:05.978 [2024-11-17 09:36:10.937310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:36:05.978 [2024-11-17 09:36:10.955141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf8618 00:36:05.978 [2024-11-17 09:36:10.955475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:05.978 [2024-11-17 09:36:10.955513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:36:05.978 [2024-11-17 09:36:10.973286] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf8618 00:36:05.978 [2024-11-17 09:36:10.973603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:05.978 [2024-11-17 09:36:10.973642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:36:06.235 [2024-11-17 09:36:10.991496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf8618 00:36:06.235 [2024-11-17 09:36:10.991816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:06.235 [2024-11-17 09:36:10.991860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:36:06.235 [2024-11-17 09:36:11.009762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf8618 00:36:06.235 [2024-11-17 09:36:11.010068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4538 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:06.235 [2024-11-17 09:36:11.010110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:36:06.235 [2024-11-17 09:36:11.027935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf8618 00:36:06.235 [2024-11-17 09:36:11.028236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:06.235 [2024-11-17 09:36:11.028279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:36:06.235 [2024-11-17 09:36:11.046017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf8618 00:36:06.235 [2024-11-17 09:36:11.046322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:06.235 [2024-11-17 09:36:11.046364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006e p:0 m:0 dnr:0 
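The long run of paired records above is the expected signature of this test: each injected CRC32C corruption is reported as a "Data digest error" on the TCP qpair, and the host prints the matching WRITE completing with COMMAND TRANSIENT TRANSPORT ERROR (00/22), which the bdev layer silently retries. Purely as an illustration (this is not part of the test scripts, and the file name is hypothetical), that error volume can be eyeballed from a saved copy of the console output:

```bash
# Illustration only: count how many writes completed with a transient
# transport error in a saved copy of this console log.
grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' bperf-console.log
```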
00:36:06.235 [2024-11-17 09:36:11.064192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf8618 00:36:06.235 [2024-11-17 09:36:11.064540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:06.235 [2024-11-17 09:36:11.064586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:36:06.235 [2024-11-17 09:36:11.082442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf8618 00:36:06.235 [2024-11-17 09:36:11.083272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:06.235 [2024-11-17 09:36:11.083315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:36:06.235 15155.00 IOPS, 59.20 MiB/s 00:36:06.235 Latency(us) 00:36:06.235 [2024-11-17T08:36:11.248Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:06.235 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:06.235 nvme0n1 : 2.01 15149.30 59.18 0.00 0.00 8425.39 3349.62 20874.43 00:36:06.235 [2024-11-17T08:36:11.248Z] =================================================================================================================== 00:36:06.235 [2024-11-17T08:36:11.248Z] Total : 15149.30 59.18 0.00 0.00 8425.39 3349.62 20874.43 00:36:06.235 { 00:36:06.235 "results": [ 00:36:06.235 { 00:36:06.235 "job": "nvme0n1", 00:36:06.235 "core_mask": "0x2", 00:36:06.235 "workload": "randwrite", 00:36:06.235 "status": "finished", 00:36:06.235 "queue_depth": 128, 00:36:06.235 "io_size": 4096, 00:36:06.235 "runtime": 2.008674, 00:36:06.235 "iops": 15149.297496756566, 00:36:06.235 "mibps": 59.176943346705336, 00:36:06.235 "io_failed": 0, 00:36:06.235 "io_timeout": 0, 00:36:06.235 "avg_latency_us": 8425.394178941347, 00:36:06.235 "min_latency_us": 3349.617777777778, 00:36:06.235 "max_latency_us": 20874.42962962963 00:36:06.235 } 00:36:06.235 ], 00:36:06.235 "core_count": 1 00:36:06.235 } 00:36:06.235 09:36:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:36:06.235 09:36:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:36:06.235 09:36:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:36:06.235 09:36:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:36:06.235 | .driver_specific 00:36:06.235 | .nvme_error 00:36:06.235 | .status_code 00:36:06.235 | .command_transient_transport_error' 00:36:06.493 09:36:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 119 > 0 )) 00:36:06.493 09:36:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3133222 00:36:06.493 09:36:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3133222 ']' 00:36:06.493 09:36:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3133222 00:36:06.493 09:36:11 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:36:06.493 09:36:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:06.493 09:36:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3133222 00:36:06.493 09:36:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:06.493 09:36:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:06.493 09:36:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3133222' 00:36:06.493 killing process with pid 3133222 00:36:06.493 09:36:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3133222 00:36:06.493 Received shutdown signal, test time was about 2.000000 seconds 00:36:06.493 00:36:06.493 Latency(us) 00:36:06.493 [2024-11-17T08:36:11.506Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:06.493 [2024-11-17T08:36:11.506Z] =================================================================================================================== 00:36:06.493 [2024-11-17T08:36:11.506Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:06.493 09:36:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3133222 00:36:07.428 09:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:36:07.428 09:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:36:07.428 09:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:36:07.428 09:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:36:07.428 09:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:36:07.428 09:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3134160 00:36:07.428 09:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:36:07.428 09:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3134160 /var/tmp/bperf.sock 00:36:07.428 09:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3134160 ']' 00:36:07.428 09:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:07.428 09:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:07.428 09:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:07.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:36:07.428 09:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:07.428 09:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:07.428 [2024-11-17 09:36:12.399131] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:36:07.428 [2024-11-17 09:36:12.399262] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3134160 ] 00:36:07.428 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:07.428 Zero copy mechanism will not be used. 00:36:07.687 [2024-11-17 09:36:12.542264] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:07.687 [2024-11-17 09:36:12.678864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:08.621 09:36:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:08.621 09:36:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:36:08.621 09:36:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:08.621 09:36:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:08.878 09:36:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:36:08.878 09:36:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.878 09:36:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:08.878 09:36:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.878 09:36:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:08.878 09:36:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:09.136 nvme0n1 00:36:09.136 09:36:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:36:09.136 09:36:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.136 09:36:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:09.136 09:36:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.136 09:36:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:36:09.136 09:36:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 
00:36:09.136 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:09.136 Zero copy mechanism will not be used. 00:36:09.136 Running I/O for 2 seconds... 00:36:09.395 [2024-11-17 09:36:14.147901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.395 [2024-11-17 09:36:14.148067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.396 [2024-11-17 09:36:14.148119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:09.396 [2024-11-17 09:36:14.156060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.396 [2024-11-17 09:36:14.156202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.396 [2024-11-17 09:36:14.156249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:09.396 [2024-11-17 09:36:14.163839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.396 [2024-11-17 09:36:14.164005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.396 [2024-11-17 09:36:14.164049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:09.396 [2024-11-17 09:36:14.171450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.396 [2024-11-17 09:36:14.171649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.396 [2024-11-17 09:36:14.171706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:09.396 [2024-11-17 09:36:14.178780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.396 [2024-11-17 09:36:14.178977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.396 [2024-11-17 09:36:14.179021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:09.396 [2024-11-17 09:36:14.186190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.396 [2024-11-17 09:36:14.186424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.396 [2024-11-17 09:36:14.186465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:09.396 [2024-11-17 09:36:14.193626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.396 [2024-11-17 09:36:14.193845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.396 [2024-11-17 09:36:14.193888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:09.396 [2024-11-17 09:36:14.200989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.396 [2024-11-17 09:36:14.201228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.396 [2024-11-17 09:36:14.201272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:09.396 [2024-11-17 09:36:14.208604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.396 [2024-11-17 09:36:14.208823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.396 [2024-11-17 09:36:14.208867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:09.396 [2024-11-17 09:36:14.216797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.396 [2024-11-17 09:36:14.216998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.396 [2024-11-17 09:36:14.217042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:09.396 [2024-11-17 09:36:14.224425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.396 [2024-11-17 09:36:14.224627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.396 [2024-11-17 09:36:14.224666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:09.396 [2024-11-17 09:36:14.231623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.396 [2024-11-17 09:36:14.231854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.396 [2024-11-17 09:36:14.231897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:09.396 [2024-11-17 09:36:14.238931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.396 [2024-11-17 09:36:14.239145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.396 [2024-11-17 09:36:14.239188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:09.396 [2024-11-17 09:36:14.246127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.396 [2024-11-17 09:36:14.246359] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.396 [2024-11-17 09:36:14.246414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:09.396 [2024-11-17 09:36:14.253549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.396 [2024-11-17 09:36:14.253751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.396 [2024-11-17 09:36:14.253816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:09.396 [2024-11-17 09:36:14.260710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.396 [2024-11-17 09:36:14.260955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.396 [2024-11-17 09:36:14.260999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:09.396 [2024-11-17 09:36:14.268046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.396 [2024-11-17 09:36:14.268179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.396 [2024-11-17 09:36:14.268222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:09.396 [2024-11-17 09:36:14.275490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.396 [2024-11-17 09:36:14.275695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.396 [2024-11-17 09:36:14.275739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:09.396 [2024-11-17 09:36:14.283872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.396 [2024-11-17 09:36:14.284051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.396 [2024-11-17 09:36:14.284094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:09.396 [2024-11-17 09:36:14.291596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.396 [2024-11-17 09:36:14.291753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.396 [2024-11-17 09:36:14.291795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:09.396 [2024-11-17 09:36:14.298848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x200016bff3c8 00:36:09.396 [2024-11-17 09:36:14.299031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.396 [2024-11-17 09:36:14.299073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:09.396 [2024-11-17 09:36:14.306423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.396 [2024-11-17 09:36:14.306523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.396 [2024-11-17 09:36:14.306572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:09.396 [2024-11-17 09:36:14.313763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.396 [2024-11-17 09:36:14.313896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.397 [2024-11-17 09:36:14.313940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:09.397 [2024-11-17 09:36:14.320909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.397 [2024-11-17 09:36:14.321041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.397 [2024-11-17 09:36:14.321084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:09.397 [2024-11-17 09:36:14.328025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.397 [2024-11-17 09:36:14.328158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.397 [2024-11-17 09:36:14.328201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:09.397 [2024-11-17 09:36:14.335500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.397 [2024-11-17 09:36:14.335645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.397 [2024-11-17 09:36:14.335691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:09.397 [2024-11-17 09:36:14.342919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.397 [2024-11-17 09:36:14.343145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.397 [2024-11-17 09:36:14.343189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:09.397 [2024-11-17 09:36:14.351675] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.397 [2024-11-17 09:36:14.351880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.397 [2024-11-17 09:36:14.351923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:09.397 [2024-11-17 09:36:14.360338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.397 [2024-11-17 09:36:14.360548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.397 [2024-11-17 09:36:14.360587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:09.397 [2024-11-17 09:36:14.369154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.397 [2024-11-17 09:36:14.369355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.397 [2024-11-17 09:36:14.369430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:09.397 [2024-11-17 09:36:14.377788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.397 [2024-11-17 09:36:14.377899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.397 [2024-11-17 09:36:14.377943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:09.397 [2024-11-17 09:36:14.385658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.397 [2024-11-17 09:36:14.385809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.397 [2024-11-17 09:36:14.385852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:09.397 [2024-11-17 09:36:14.393538] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.397 [2024-11-17 09:36:14.393640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.397 [2024-11-17 09:36:14.393700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:09.397 [2024-11-17 09:36:14.401318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.397 [2024-11-17 09:36:14.401465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.397 [2024-11-17 09:36:14.401503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:36:09.657 [2024-11-17 09:36:14.408738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.657 [2024-11-17 09:36:14.408859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.657 [2024-11-17 09:36:14.408903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:09.657 [2024-11-17 09:36:14.416020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.657 [2024-11-17 09:36:14.416137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.657 [2024-11-17 09:36:14.416180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:09.657 [2024-11-17 09:36:14.423304] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.657 [2024-11-17 09:36:14.423454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.657 [2024-11-17 09:36:14.423493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:09.657 [2024-11-17 09:36:14.430575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.657 [2024-11-17 09:36:14.430720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.657 [2024-11-17 09:36:14.430764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:09.657 [2024-11-17 09:36:14.437837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.657 [2024-11-17 09:36:14.437980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.657 [2024-11-17 09:36:14.438024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:09.657 [2024-11-17 09:36:14.445212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.657 [2024-11-17 09:36:14.445343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.657 [2024-11-17 09:36:14.445412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:09.657 [2024-11-17 09:36:14.452486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.657 [2024-11-17 09:36:14.452608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.657 [2024-11-17 09:36:14.452664] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:09.657 [2024-11-17 09:36:14.459717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.657 [2024-11-17 09:36:14.459826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.657 [2024-11-17 09:36:14.459869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:09.657 [2024-11-17 09:36:14.466990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.657 [2024-11-17 09:36:14.467133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.657 [2024-11-17 09:36:14.467177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:09.657 [2024-11-17 09:36:14.474213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.657 [2024-11-17 09:36:14.474356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.657 [2024-11-17 09:36:14.474423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:09.657 [2024-11-17 09:36:14.481427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.657 [2024-11-17 09:36:14.481526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.657 [2024-11-17 09:36:14.481564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:09.657 [2024-11-17 09:36:14.488620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.657 [2024-11-17 09:36:14.488756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.657 [2024-11-17 09:36:14.488799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:09.657 [2024-11-17 09:36:14.495952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.657 [2024-11-17 09:36:14.496084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.657 [2024-11-17 09:36:14.496127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:09.657 [2024-11-17 09:36:14.503433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.657 [2024-11-17 09:36:14.503549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.657 [2024-11-17 
09:36:14.503589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:09.657 [2024-11-17 09:36:14.510575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.657 [2024-11-17 09:36:14.510722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.657 [2024-11-17 09:36:14.510765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:09.657 [2024-11-17 09:36:14.517755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.657 [2024-11-17 09:36:14.517861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.657 [2024-11-17 09:36:14.517904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:09.657 [2024-11-17 09:36:14.524901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.657 [2024-11-17 09:36:14.525009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.657 [2024-11-17 09:36:14.525052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:09.657 [2024-11-17 09:36:14.532087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.657 [2024-11-17 09:36:14.532222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.657 [2024-11-17 09:36:14.532266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:09.657 [2024-11-17 09:36:14.539312] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.657 [2024-11-17 09:36:14.539445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.657 [2024-11-17 09:36:14.539488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:09.657 [2024-11-17 09:36:14.546658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.657 [2024-11-17 09:36:14.546803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.657 [2024-11-17 09:36:14.546847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:09.657 [2024-11-17 09:36:14.553869] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.657 [2024-11-17 09:36:14.553983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6624 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.658 [2024-11-17 09:36:14.554026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:09.658 [2024-11-17 09:36:14.561067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.658 [2024-11-17 09:36:14.561178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.658 [2024-11-17 09:36:14.561221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:09.658 [2024-11-17 09:36:14.568450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.658 [2024-11-17 09:36:14.568566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.658 [2024-11-17 09:36:14.568605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:09.658 [2024-11-17 09:36:14.575557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.658 [2024-11-17 09:36:14.575671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.658 [2024-11-17 09:36:14.575751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:09.658 [2024-11-17 09:36:14.582791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.658 [2024-11-17 09:36:14.582898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.658 [2024-11-17 09:36:14.582940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:09.658 [2024-11-17 09:36:14.590156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.658 [2024-11-17 09:36:14.590260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.658 [2024-11-17 09:36:14.590306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:09.658 [2024-11-17 09:36:14.597800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.658 [2024-11-17 09:36:14.597924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.658 [2024-11-17 09:36:14.597967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:09.658 [2024-11-17 09:36:14.605205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.658 [2024-11-17 09:36:14.605337] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.658 [2024-11-17 09:36:14.605390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:09.658 [2024-11-17 09:36:14.612786] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.658 [2024-11-17 09:36:14.612896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.658 [2024-11-17 09:36:14.612939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:09.658 [2024-11-17 09:36:14.620485] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.658 [2024-11-17 09:36:14.620584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.658 [2024-11-17 09:36:14.620623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:09.658 [2024-11-17 09:36:14.627973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.658 [2024-11-17 09:36:14.628096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.658 [2024-11-17 09:36:14.628139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:09.658 [2024-11-17 09:36:14.635330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.658 [2024-11-17 09:36:14.635475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.658 [2024-11-17 09:36:14.635514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:09.658 [2024-11-17 09:36:14.642578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.658 [2024-11-17 09:36:14.642690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.658 [2024-11-17 09:36:14.642748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:09.658 [2024-11-17 09:36:14.650137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.658 [2024-11-17 09:36:14.650248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.658 [2024-11-17 09:36:14.650291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:09.658 [2024-11-17 09:36:14.658542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x200016bff3c8 00:36:09.658 [2024-11-17 09:36:14.658768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.658 [2024-11-17 09:36:14.658812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:09.658 [2024-11-17 09:36:14.666523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.658 [2024-11-17 09:36:14.666723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.658 [2024-11-17 09:36:14.666763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:09.918 [2024-11-17 09:36:14.674611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.918 [2024-11-17 09:36:14.674814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.918 [2024-11-17 09:36:14.674858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:09.918 [2024-11-17 09:36:14.681975] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.918 [2024-11-17 09:36:14.682182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.918 [2024-11-17 09:36:14.682227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:09.918 [2024-11-17 09:36:14.689164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.918 [2024-11-17 09:36:14.689414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.918 [2024-11-17 09:36:14.689455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:09.918 [2024-11-17 09:36:14.696490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.918 [2024-11-17 09:36:14.696705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.918 [2024-11-17 09:36:14.696748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:09.918 [2024-11-17 09:36:14.703530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.918 [2024-11-17 09:36:14.703728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.918 [2024-11-17 09:36:14.703780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:09.918 [2024-11-17 09:36:14.710590] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.918 [2024-11-17 09:36:14.710807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.918 [2024-11-17 09:36:14.710851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:09.918 [2024-11-17 09:36:14.717935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.918 [2024-11-17 09:36:14.718089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.918 [2024-11-17 09:36:14.718132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:09.918 [2024-11-17 09:36:14.725278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.918 [2024-11-17 09:36:14.725501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.918 [2024-11-17 09:36:14.725540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:09.918 [2024-11-17 09:36:14.732487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.918 [2024-11-17 09:36:14.732683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.918 [2024-11-17 09:36:14.732757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:09.918 [2024-11-17 09:36:14.739561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.918 [2024-11-17 09:36:14.739771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.918 [2024-11-17 09:36:14.739815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:09.918 [2024-11-17 09:36:14.746827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.918 [2024-11-17 09:36:14.747004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.918 [2024-11-17 09:36:14.747047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:09.918 [2024-11-17 09:36:14.754187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.918 [2024-11-17 09:36:14.754422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.918 [2024-11-17 09:36:14.754461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:36:09.918 [2024-11-17 09:36:14.761384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.918 [2024-11-17 09:36:14.761592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.918 [2024-11-17 09:36:14.761630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:09.918 [2024-11-17 09:36:14.768476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.918 [2024-11-17 09:36:14.768600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.918 [2024-11-17 09:36:14.768639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:09.918 [2024-11-17 09:36:14.775528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.918 [2024-11-17 09:36:14.775746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.918 [2024-11-17 09:36:14.775789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:09.918 [2024-11-17 09:36:14.782776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.918 [2024-11-17 09:36:14.782981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.918 [2024-11-17 09:36:14.783024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:09.919 [2024-11-17 09:36:14.790009] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.919 [2024-11-17 09:36:14.790240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.919 [2024-11-17 09:36:14.790283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:09.919 [2024-11-17 09:36:14.797151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.919 [2024-11-17 09:36:14.797288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.919 [2024-11-17 09:36:14.797331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:09.919 [2024-11-17 09:36:14.804423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.919 [2024-11-17 09:36:14.804635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.919 [2024-11-17 09:36:14.804675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:09.919 [2024-11-17 09:36:14.811564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.919 [2024-11-17 09:36:14.811762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.919 [2024-11-17 09:36:14.811805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:09.919 [2024-11-17 09:36:14.818824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.919 [2024-11-17 09:36:14.819000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.919 [2024-11-17 09:36:14.819042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:09.919 [2024-11-17 09:36:14.825958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.919 [2024-11-17 09:36:14.826190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.919 [2024-11-17 09:36:14.826233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:09.919 [2024-11-17 09:36:14.833000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.919 [2024-11-17 09:36:14.833221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.919 [2024-11-17 09:36:14.833265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:09.919 [2024-11-17 09:36:14.840062] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.919 [2024-11-17 09:36:14.840279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.919 [2024-11-17 09:36:14.840322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:09.919 [2024-11-17 09:36:14.847058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.919 [2024-11-17 09:36:14.847206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.919 [2024-11-17 09:36:14.847248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:09.919 [2024-11-17 09:36:14.854136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.919 [2024-11-17 09:36:14.854386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.919 [2024-11-17 
09:36:14.854443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:09.919 [2024-11-17 09:36:14.861274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.919 [2024-11-17 09:36:14.861526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.919 [2024-11-17 09:36:14.861566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:09.919 [2024-11-17 09:36:14.868598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.919 [2024-11-17 09:36:14.868831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.919 [2024-11-17 09:36:14.868874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:09.919 [2024-11-17 09:36:14.876465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.919 [2024-11-17 09:36:14.876565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.919 [2024-11-17 09:36:14.876604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:09.919 [2024-11-17 09:36:14.883995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.919 [2024-11-17 09:36:14.884223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.919 [2024-11-17 09:36:14.884266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:09.919 [2024-11-17 09:36:14.891344] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.919 [2024-11-17 09:36:14.891553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.919 [2024-11-17 09:36:14.891592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:09.919 [2024-11-17 09:36:14.898514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.919 [2024-11-17 09:36:14.898706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.919 [2024-11-17 09:36:14.898750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:09.919 [2024-11-17 09:36:14.905630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.919 [2024-11-17 09:36:14.905860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19424 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.919 [2024-11-17 09:36:14.905903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:09.919 [2024-11-17 09:36:14.912823] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.919 [2024-11-17 09:36:14.913047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.919 [2024-11-17 09:36:14.913091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:09.919 [2024-11-17 09:36:14.920280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.919 [2024-11-17 09:36:14.920459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.919 [2024-11-17 09:36:14.920500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:09.919 [2024-11-17 09:36:14.927278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:09.919 [2024-11-17 09:36:14.927494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.919 [2024-11-17 09:36:14.927534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:10.179 [2024-11-17 09:36:14.934893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.179 [2024-11-17 09:36:14.935014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.179 [2024-11-17 09:36:14.935057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:10.179 [2024-11-17 09:36:14.942029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.179 [2024-11-17 09:36:14.942149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.179 [2024-11-17 09:36:14.942191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:10.179 [2024-11-17 09:36:14.949026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.179 [2024-11-17 09:36:14.949168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.179 [2024-11-17 09:36:14.949212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:10.179 [2024-11-17 09:36:14.956121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.179 [2024-11-17 09:36:14.956264] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.179 [2024-11-17 09:36:14.956308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:10.179 [2024-11-17 09:36:14.963281] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.179 [2024-11-17 09:36:14.963451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.179 [2024-11-17 09:36:14.963491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:10.179 [2024-11-17 09:36:14.970657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.179 [2024-11-17 09:36:14.970895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.179 [2024-11-17 09:36:14.970938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:10.179 [2024-11-17 09:36:14.977938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.179 [2024-11-17 09:36:14.978130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.179 [2024-11-17 09:36:14.978173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:10.179 [2024-11-17 09:36:14.985206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.179 [2024-11-17 09:36:14.985437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.179 [2024-11-17 09:36:14.985478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:10.179 [2024-11-17 09:36:14.992460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.179 [2024-11-17 09:36:14.992689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.179 [2024-11-17 09:36:14.992732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:10.179 [2024-11-17 09:36:14.999456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.179 [2024-11-17 09:36:14.999660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.179 [2024-11-17 09:36:14.999716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:10.179 [2024-11-17 09:36:15.006602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x200016bff3c8 00:36:10.179 [2024-11-17 09:36:15.006749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.179 [2024-11-17 09:36:15.006793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:10.179 [2024-11-17 09:36:15.013774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.179 [2024-11-17 09:36:15.013956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.179 [2024-11-17 09:36:15.014012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:10.179 [2024-11-17 09:36:15.020932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.179 [2024-11-17 09:36:15.021140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.179 [2024-11-17 09:36:15.021183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:10.179 [2024-11-17 09:36:15.028149] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.179 [2024-11-17 09:36:15.028386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.179 [2024-11-17 09:36:15.028446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:10.179 [2024-11-17 09:36:15.035386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.179 [2024-11-17 09:36:15.035597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.179 [2024-11-17 09:36:15.035637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:10.179 [2024-11-17 09:36:15.042471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.179 [2024-11-17 09:36:15.042688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.179 [2024-11-17 09:36:15.042732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:10.179 [2024-11-17 09:36:15.049487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.179 [2024-11-17 09:36:15.049722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.179 [2024-11-17 09:36:15.049764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:10.180 [2024-11-17 09:36:15.056553] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.180 [2024-11-17 09:36:15.056772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.180 [2024-11-17 09:36:15.056816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:10.180 [2024-11-17 09:36:15.063527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.180 [2024-11-17 09:36:15.063763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.180 [2024-11-17 09:36:15.063806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:10.180 [2024-11-17 09:36:15.070625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.180 [2024-11-17 09:36:15.070859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.180 [2024-11-17 09:36:15.070902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:10.180 [2024-11-17 09:36:15.077811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.180 [2024-11-17 09:36:15.078036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.180 [2024-11-17 09:36:15.078080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:10.180 [2024-11-17 09:36:15.085025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.180 [2024-11-17 09:36:15.085257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.180 [2024-11-17 09:36:15.085301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:10.180 [2024-11-17 09:36:15.092076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.180 [2024-11-17 09:36:15.092285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.180 [2024-11-17 09:36:15.092328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:10.180 [2024-11-17 09:36:15.099283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.180 [2024-11-17 09:36:15.099502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.180 [2024-11-17 09:36:15.099541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:36:10.180 [2024-11-17 09:36:15.106362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.180 [2024-11-17 09:36:15.106607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.180 [2024-11-17 09:36:15.106645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:10.180 [2024-11-17 09:36:15.113385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.180 [2024-11-17 09:36:15.113531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.180 [2024-11-17 09:36:15.113569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:10.180 [2024-11-17 09:36:15.120546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.180 [2024-11-17 09:36:15.120793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.180 [2024-11-17 09:36:15.120837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:10.180 [2024-11-17 09:36:15.127574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.180 [2024-11-17 09:36:15.127777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.180 [2024-11-17 09:36:15.127820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:10.180 [2024-11-17 09:36:15.135166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.180 [2024-11-17 09:36:15.135311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.180 [2024-11-17 09:36:15.135363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:10.180 [2024-11-17 09:36:15.142280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.180 [2024-11-17 09:36:15.142485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.180 [2024-11-17 09:36:15.142525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:10.180 4214.00 IOPS, 526.75 MiB/s [2024-11-17T08:36:15.193Z] [2024-11-17 09:36:15.150805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.180 [2024-11-17 09:36:15.151028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.180 [2024-11-17 09:36:15.151069] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:10.180 [2024-11-17 09:36:15.157881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.180 [2024-11-17 09:36:15.158100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.180 [2024-11-17 09:36:15.158144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:10.180 [2024-11-17 09:36:15.165484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.180 [2024-11-17 09:36:15.165613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.180 [2024-11-17 09:36:15.165669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:10.180 [2024-11-17 09:36:15.174361] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.180 [2024-11-17 09:36:15.174572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.180 [2024-11-17 09:36:15.174611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:10.180 [2024-11-17 09:36:15.181888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.180 [2024-11-17 09:36:15.182127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.180 [2024-11-17 09:36:15.182171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:10.180 [2024-11-17 09:36:15.188915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.180 [2024-11-17 09:36:15.189081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.180 [2024-11-17 09:36:15.189120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:10.440 [2024-11-17 09:36:15.196362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.440 [2024-11-17 09:36:15.196584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.440 [2024-11-17 09:36:15.196641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:10.440 [2024-11-17 09:36:15.203660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.440 [2024-11-17 09:36:15.203791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19328 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:36:10.440 [2024-11-17 09:36:15.203835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:10.440 [2024-11-17 09:36:15.212077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.440 [2024-11-17 09:36:15.212214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.440 [2024-11-17 09:36:15.212258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:10.440 [2024-11-17 09:36:15.219059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.440 [2024-11-17 09:36:15.219200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.440 [2024-11-17 09:36:15.219243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:10.440 [2024-11-17 09:36:15.226379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.440 [2024-11-17 09:36:15.226530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.440 [2024-11-17 09:36:15.226569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:10.440 [2024-11-17 09:36:15.234037] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.440 [2024-11-17 09:36:15.234141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.440 [2024-11-17 09:36:15.234182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:10.440 [2024-11-17 09:36:15.241300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.440 [2024-11-17 09:36:15.241463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.440 [2024-11-17 09:36:15.241502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:10.440 [2024-11-17 09:36:15.248393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.440 [2024-11-17 09:36:15.248580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.440 [2024-11-17 09:36:15.248619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:10.440 [2024-11-17 09:36:15.255931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.440 [2024-11-17 09:36:15.256087] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.440 [2024-11-17 09:36:15.256131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:10.440 [2024-11-17 09:36:15.264396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.440 [2024-11-17 09:36:15.264548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.440 [2024-11-17 09:36:15.264587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:10.440 [2024-11-17 09:36:15.271561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.440 [2024-11-17 09:36:15.271685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.440 [2024-11-17 09:36:15.271729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:10.440 [2024-11-17 09:36:15.278656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.440 [2024-11-17 09:36:15.278828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.440 [2024-11-17 09:36:15.278871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:10.440 [2024-11-17 09:36:15.286296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.440 [2024-11-17 09:36:15.286421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.440 [2024-11-17 09:36:15.286458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:10.440 [2024-11-17 09:36:15.293628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.440 [2024-11-17 09:36:15.293763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.440 [2024-11-17 09:36:15.293805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:10.440 [2024-11-17 09:36:15.300814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.440 [2024-11-17 09:36:15.300943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.440 [2024-11-17 09:36:15.300986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:10.440 [2024-11-17 09:36:15.308073] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.440 [2024-11-17 
09:36:15.308193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.440 [2024-11-17 09:36:15.308236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:10.440 [2024-11-17 09:36:15.315315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.440 [2024-11-17 09:36:15.315468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.440 [2024-11-17 09:36:15.315508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:10.440 [2024-11-17 09:36:15.322792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.440 [2024-11-17 09:36:15.322903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.440 [2024-11-17 09:36:15.322947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:10.440 [2024-11-17 09:36:15.330435] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.440 [2024-11-17 09:36:15.330565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.440 [2024-11-17 09:36:15.330604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:10.440 [2024-11-17 09:36:15.337791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.440 [2024-11-17 09:36:15.337899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.440 [2024-11-17 09:36:15.337941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:10.440 [2024-11-17 09:36:15.345186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.440 [2024-11-17 09:36:15.345314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.440 [2024-11-17 09:36:15.345375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:10.440 [2024-11-17 09:36:15.352558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.441 [2024-11-17 09:36:15.352666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.441 [2024-11-17 09:36:15.352723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:10.441 [2024-11-17 09:36:15.360045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.441 [2024-11-17 09:36:15.360161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.441 [2024-11-17 09:36:15.360204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:10.441 [2024-11-17 09:36:15.367396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.441 [2024-11-17 09:36:15.367516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.441 [2024-11-17 09:36:15.367554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:10.441 [2024-11-17 09:36:15.375029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.441 [2024-11-17 09:36:15.375162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.441 [2024-11-17 09:36:15.375205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:10.441 [2024-11-17 09:36:15.382295] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.441 [2024-11-17 09:36:15.382435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.441 [2024-11-17 09:36:15.382473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:10.441 [2024-11-17 09:36:15.389691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.441 [2024-11-17 09:36:15.389808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.441 [2024-11-17 09:36:15.389851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:10.441 [2024-11-17 09:36:15.396935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.441 [2024-11-17 09:36:15.397045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.441 [2024-11-17 09:36:15.397089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:10.441 [2024-11-17 09:36:15.404443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.441 [2024-11-17 09:36:15.404540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.441 [2024-11-17 09:36:15.404579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:10.441 [2024-11-17 09:36:15.411823] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.441 [2024-11-17 09:36:15.411927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.441 [2024-11-17 09:36:15.411967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:10.441 [2024-11-17 09:36:15.419196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.441 [2024-11-17 09:36:15.419309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.441 [2024-11-17 09:36:15.419360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:10.441 [2024-11-17 09:36:15.426590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.441 [2024-11-17 09:36:15.426689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.441 [2024-11-17 09:36:15.426747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:10.441 [2024-11-17 09:36:15.433931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.441 [2024-11-17 09:36:15.434044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.441 [2024-11-17 09:36:15.434087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:10.441 [2024-11-17 09:36:15.441113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.441 [2024-11-17 09:36:15.441249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.441 [2024-11-17 09:36:15.441293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:10.441 [2024-11-17 09:36:15.448442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.441 [2024-11-17 09:36:15.448560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.441 [2024-11-17 09:36:15.448600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:10.700 [2024-11-17 09:36:15.455533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.700 [2024-11-17 09:36:15.455657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.700 [2024-11-17 09:36:15.455732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:36:10.700 [2024-11-17 09:36:15.463043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.700 [2024-11-17 09:36:15.463169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.700 [2024-11-17 09:36:15.463212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:10.700 [2024-11-17 09:36:15.470501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.700 [2024-11-17 09:36:15.470603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.700 [2024-11-17 09:36:15.470643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:10.700 [2024-11-17 09:36:15.477944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.700 [2024-11-17 09:36:15.478057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.700 [2024-11-17 09:36:15.478099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:10.700 [2024-11-17 09:36:15.485454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.700 [2024-11-17 09:36:15.485561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.700 [2024-11-17 09:36:15.485600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:10.700 [2024-11-17 09:36:15.492882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.700 [2024-11-17 09:36:15.492991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.700 [2024-11-17 09:36:15.493036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:10.700 [2024-11-17 09:36:15.500025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.701 [2024-11-17 09:36:15.500144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.701 [2024-11-17 09:36:15.500187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:10.701 [2024-11-17 09:36:15.507452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.701 [2024-11-17 09:36:15.507560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.701 [2024-11-17 09:36:15.507599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:10.701 [2024-11-17 09:36:15.515116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.701 [2024-11-17 09:36:15.515226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.701 [2024-11-17 09:36:15.515268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:10.701 [2024-11-17 09:36:15.522611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.701 [2024-11-17 09:36:15.522741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.701 [2024-11-17 09:36:15.522784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:10.701 [2024-11-17 09:36:15.529977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.701 [2024-11-17 09:36:15.530088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.701 [2024-11-17 09:36:15.530131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:10.701 [2024-11-17 09:36:15.537250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.701 [2024-11-17 09:36:15.537390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.701 [2024-11-17 09:36:15.537446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:10.701 [2024-11-17 09:36:15.544552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.701 [2024-11-17 09:36:15.544686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.701 [2024-11-17 09:36:15.544729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:10.701 [2024-11-17 09:36:15.551812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.701 [2024-11-17 09:36:15.551923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.701 [2024-11-17 09:36:15.551966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:10.701 [2024-11-17 09:36:15.559512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.701 [2024-11-17 09:36:15.559637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.701 [2024-11-17 
09:36:15.559696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:10.701 [2024-11-17 09:36:15.566860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.701 [2024-11-17 09:36:15.566989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.701 [2024-11-17 09:36:15.567032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:10.701 [2024-11-17 09:36:15.573947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.701 [2024-11-17 09:36:15.574050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.701 [2024-11-17 09:36:15.574094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:10.701 [2024-11-17 09:36:15.581139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.701 [2024-11-17 09:36:15.581249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.701 [2024-11-17 09:36:15.581300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:10.701 [2024-11-17 09:36:15.588356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.701 [2024-11-17 09:36:15.588518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.701 [2024-11-17 09:36:15.588557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:10.701 [2024-11-17 09:36:15.595823] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.701 [2024-11-17 09:36:15.595938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.701 [2024-11-17 09:36:15.595981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:10.701 [2024-11-17 09:36:15.603920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.701 [2024-11-17 09:36:15.604050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.701 [2024-11-17 09:36:15.604093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:10.701 [2024-11-17 09:36:15.611488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.701 [2024-11-17 09:36:15.611634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1568 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.701 [2024-11-17 09:36:15.611696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:10.701 [2024-11-17 09:36:15.619593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.701 [2024-11-17 09:36:15.619817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.701 [2024-11-17 09:36:15.619860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:10.701 [2024-11-17 09:36:15.627924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.701 [2024-11-17 09:36:15.628137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.701 [2024-11-17 09:36:15.628180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:10.701 [2024-11-17 09:36:15.636010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.701 [2024-11-17 09:36:15.636125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.701 [2024-11-17 09:36:15.636167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:10.701 [2024-11-17 09:36:15.643463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.701 [2024-11-17 09:36:15.643622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.701 [2024-11-17 09:36:15.643683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:10.701 [2024-11-17 09:36:15.650705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.701 [2024-11-17 09:36:15.650840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.701 [2024-11-17 09:36:15.650883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:10.701 [2024-11-17 09:36:15.657792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.701 [2024-11-17 09:36:15.658000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.701 [2024-11-17 09:36:15.658043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:10.701 [2024-11-17 09:36:15.665130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.701 [2024-11-17 09:36:15.665282] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.701 [2024-11-17 09:36:15.665325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:10.701 [2024-11-17 09:36:15.673180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.702 [2024-11-17 09:36:15.673398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.702 [2024-11-17 09:36:15.673451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:10.702 [2024-11-17 09:36:15.680710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.702 [2024-11-17 09:36:15.680919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.702 [2024-11-17 09:36:15.680963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:10.702 [2024-11-17 09:36:15.688156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.702 [2024-11-17 09:36:15.688359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.702 [2024-11-17 09:36:15.688426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:10.702 [2024-11-17 09:36:15.695270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.702 [2024-11-17 09:36:15.695444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.702 [2024-11-17 09:36:15.695483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:10.702 [2024-11-17 09:36:15.702496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.702 [2024-11-17 09:36:15.702724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.702 [2024-11-17 09:36:15.702766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:10.702 [2024-11-17 09:36:15.709628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.702 [2024-11-17 09:36:15.709787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.702 [2024-11-17 09:36:15.709830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:10.961 [2024-11-17 09:36:15.716813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x200016bff3c8 00:36:10.961 [2024-11-17 09:36:15.716970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.961 [2024-11-17 09:36:15.717013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:10.961 [2024-11-17 09:36:15.724224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.961 [2024-11-17 09:36:15.724458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.961 [2024-11-17 09:36:15.724498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:10.961 [2024-11-17 09:36:15.731510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.961 [2024-11-17 09:36:15.731719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.961 [2024-11-17 09:36:15.731759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:10.961 [2024-11-17 09:36:15.738509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.961 [2024-11-17 09:36:15.738742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.961 [2024-11-17 09:36:15.738782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:10.961 [2024-11-17 09:36:15.745138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.961 [2024-11-17 09:36:15.745289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.961 [2024-11-17 09:36:15.745341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:10.961 [2024-11-17 09:36:15.752959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.961 [2024-11-17 09:36:15.753177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.961 [2024-11-17 09:36:15.753219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:10.961 [2024-11-17 09:36:15.761635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.961 [2024-11-17 09:36:15.761879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.961 [2024-11-17 09:36:15.761922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:10.961 [2024-11-17 09:36:15.769500] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.961 [2024-11-17 09:36:15.769595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.961 [2024-11-17 09:36:15.769633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:10.961 [2024-11-17 09:36:15.778283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.961 [2024-11-17 09:36:15.778511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.961 [2024-11-17 09:36:15.778558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:10.961 [2024-11-17 09:36:15.786008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.961 [2024-11-17 09:36:15.786202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.961 [2024-11-17 09:36:15.786246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:10.961 [2024-11-17 09:36:15.794599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.961 [2024-11-17 09:36:15.794833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.961 [2024-11-17 09:36:15.794876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:10.961 [2024-11-17 09:36:15.803104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.961 [2024-11-17 09:36:15.803294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.961 [2024-11-17 09:36:15.803338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:10.961 [2024-11-17 09:36:15.810687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.961 [2024-11-17 09:36:15.810908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.961 [2024-11-17 09:36:15.810951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:10.961 [2024-11-17 09:36:15.819339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.961 [2024-11-17 09:36:15.819538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.961 [2024-11-17 09:36:15.819578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:36:10.961 [2024-11-17 09:36:15.827921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.961 [2024-11-17 09:36:15.828175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.961 [2024-11-17 09:36:15.828218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:10.961 [2024-11-17 09:36:15.835106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.962 [2024-11-17 09:36:15.835251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.962 [2024-11-17 09:36:15.835294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:10.962 [2024-11-17 09:36:15.842390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.962 [2024-11-17 09:36:15.842533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.962 [2024-11-17 09:36:15.842571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:10.962 [2024-11-17 09:36:15.849523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.962 [2024-11-17 09:36:15.849631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.962 [2024-11-17 09:36:15.849680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:10.962 [2024-11-17 09:36:15.856909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.962 [2024-11-17 09:36:15.857018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.962 [2024-11-17 09:36:15.857061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:10.962 [2024-11-17 09:36:15.864825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.962 [2024-11-17 09:36:15.864955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.962 [2024-11-17 09:36:15.864998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:10.962 [2024-11-17 09:36:15.873486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.962 [2024-11-17 09:36:15.873681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.962 [2024-11-17 09:36:15.873740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:10.962 [2024-11-17 09:36:15.881226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.962 [2024-11-17 09:36:15.881426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.962 [2024-11-17 09:36:15.881466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:10.962 [2024-11-17 09:36:15.888274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.962 [2024-11-17 09:36:15.888436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.962 [2024-11-17 09:36:15.888475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:10.962 [2024-11-17 09:36:15.895493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.962 [2024-11-17 09:36:15.895613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.962 [2024-11-17 09:36:15.895669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:10.962 [2024-11-17 09:36:15.902543] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.962 [2024-11-17 09:36:15.902701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.962 [2024-11-17 09:36:15.902744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:10.962 [2024-11-17 09:36:15.910577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.962 [2024-11-17 09:36:15.910771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.962 [2024-11-17 09:36:15.910821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:10.962 [2024-11-17 09:36:15.918493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.962 [2024-11-17 09:36:15.918686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.962 [2024-11-17 09:36:15.918746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:10.962 [2024-11-17 09:36:15.925927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.962 [2024-11-17 09:36:15.926060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.962 [2024-11-17 
09:36:15.926101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:10.962 [2024-11-17 09:36:15.933045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.962 [2024-11-17 09:36:15.933295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.962 [2024-11-17 09:36:15.933339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:10.962 [2024-11-17 09:36:15.940016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.962 [2024-11-17 09:36:15.940247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.962 [2024-11-17 09:36:15.940291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:10.962 [2024-11-17 09:36:15.947069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.962 [2024-11-17 09:36:15.947303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.962 [2024-11-17 09:36:15.947346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:10.962 [2024-11-17 09:36:15.953989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.962 [2024-11-17 09:36:15.954182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.962 [2024-11-17 09:36:15.954225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:10.962 [2024-11-17 09:36:15.961011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.962 [2024-11-17 09:36:15.961249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.962 [2024-11-17 09:36:15.961292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:10.962 [2024-11-17 09:36:15.968011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:10.962 [2024-11-17 09:36:15.968250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.962 [2024-11-17 09:36:15.968293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:11.221 [2024-11-17 09:36:15.974999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:11.221 [2024-11-17 09:36:15.975218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13952 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.221 [2024-11-17 09:36:15.975261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:11.221 [2024-11-17 09:36:15.982142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:11.222 [2024-11-17 09:36:15.982388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.222 [2024-11-17 09:36:15.982444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:11.222 [2024-11-17 09:36:15.989223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:11.222 [2024-11-17 09:36:15.989475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.222 [2024-11-17 09:36:15.989514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:11.222 [2024-11-17 09:36:15.996466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:11.222 [2024-11-17 09:36:15.996700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.222 [2024-11-17 09:36:15.996743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:11.222 [2024-11-17 09:36:16.003596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:11.222 [2024-11-17 09:36:16.003808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.222 [2024-11-17 09:36:16.003851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:11.222 [2024-11-17 09:36:16.010671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:11.222 [2024-11-17 09:36:16.010903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.222 [2024-11-17 09:36:16.010945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:11.222 [2024-11-17 09:36:16.017658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:11.222 [2024-11-17 09:36:16.017821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.222 [2024-11-17 09:36:16.017865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:11.222 [2024-11-17 09:36:16.025057] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:11.222 [2024-11-17 09:36:16.025264] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.222 [2024-11-17 09:36:16.025307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:11.222 [2024-11-17 09:36:16.032060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:11.222 [2024-11-17 09:36:16.032303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.222 [2024-11-17 09:36:16.032354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:11.222 [2024-11-17 09:36:16.039251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:11.222 [2024-11-17 09:36:16.039422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.222 [2024-11-17 09:36:16.039463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:11.222 [2024-11-17 09:36:16.046577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:11.222 [2024-11-17 09:36:16.046792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.222 [2024-11-17 09:36:16.046835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:11.222 [2024-11-17 09:36:16.053579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:11.222 [2024-11-17 09:36:16.053750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.222 [2024-11-17 09:36:16.053792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:11.222 [2024-11-17 09:36:16.061605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:11.222 [2024-11-17 09:36:16.061778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.222 [2024-11-17 09:36:16.061821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:11.222 [2024-11-17 09:36:16.068587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:11.222 [2024-11-17 09:36:16.068720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.222 [2024-11-17 09:36:16.068762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:11.222 [2024-11-17 09:36:16.075847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x200016bff3c8 00:36:11.222 [2024-11-17 09:36:16.075972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.222 [2024-11-17 09:36:16.076016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:11.222 [2024-11-17 09:36:16.083459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:11.222 [2024-11-17 09:36:16.083556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.222 [2024-11-17 09:36:16.083596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:11.222 [2024-11-17 09:36:16.090487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:11.222 [2024-11-17 09:36:16.090584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.222 [2024-11-17 09:36:16.090623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:11.222 [2024-11-17 09:36:16.098086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:11.222 [2024-11-17 09:36:16.098219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.222 [2024-11-17 09:36:16.098263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:11.222 [2024-11-17 09:36:16.105262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:11.222 [2024-11-17 09:36:16.105379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.222 [2024-11-17 09:36:16.105436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:11.222 [2024-11-17 09:36:16.112891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:11.222 [2024-11-17 09:36:16.113087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.222 [2024-11-17 09:36:16.113130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:11.222 [2024-11-17 09:36:16.120054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:11.222 [2024-11-17 09:36:16.120168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.222 [2024-11-17 09:36:16.120210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:11.222 [2024-11-17 09:36:16.128072] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:11.222 [2024-11-17 09:36:16.128184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.222 [2024-11-17 09:36:16.128226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:11.222 [2024-11-17 09:36:16.135782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:11.222 [2024-11-17 09:36:16.135892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.222 [2024-11-17 09:36:16.135935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:11.222 [2024-11-17 09:36:16.142664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:11.223 [2024-11-17 09:36:16.143020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.223 [2024-11-17 09:36:16.143064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:11.223 [2024-11-17 09:36:16.149339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:11.223 [2024-11-17 09:36:16.149754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.223 [2024-11-17 09:36:16.149813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:11.223 4187.00 IOPS, 523.38 MiB/s 00:36:11.223 Latency(us) 00:36:11.223 [2024-11-17T08:36:16.236Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:11.223 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:36:11.223 nvme0n1 : 2.01 4186.64 523.33 0.00 0.00 3810.91 2475.80 8786.68 00:36:11.223 [2024-11-17T08:36:16.236Z] =================================================================================================================== 00:36:11.223 [2024-11-17T08:36:16.236Z] Total : 4186.64 523.33 0.00 0.00 3810.91 2475.80 8786.68 00:36:11.223 { 00:36:11.223 "results": [ 00:36:11.223 { 00:36:11.223 "job": "nvme0n1", 00:36:11.223 "core_mask": "0x2", 00:36:11.223 "workload": "randwrite", 00:36:11.223 "status": "finished", 00:36:11.223 "queue_depth": 16, 00:36:11.223 "io_size": 131072, 00:36:11.223 "runtime": 2.005188, 00:36:11.223 "iops": 4186.639856212983, 00:36:11.223 "mibps": 523.3299820266229, 00:36:11.223 "io_failed": 0, 00:36:11.223 "io_timeout": 0, 00:36:11.223 "avg_latency_us": 3810.9141353098184, 00:36:11.223 "min_latency_us": 2475.8044444444445, 00:36:11.223 "max_latency_us": 8786.678518518518 00:36:11.223 } 00:36:11.223 ], 00:36:11.223 "core_count": 1 00:36:11.223 } 00:36:11.223 09:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:36:11.223 09:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:36:11.223 
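For context on the check that follows: the digest_error case above drives a randwrite workload through bdevperf while data digests are deliberately corrupted, then reads the resulting error counter back from bdevperf over its RPC socket. A minimal sketch of that read-back, assuming the /var/tmp/bperf.sock socket and the rpc.py path used in this run (the helper packaging here is illustrative, not the literal host/digest.sh source):

# Sketch: pull the transient-transport-error count for a bdev out of bdev_get_iostat.
get_transient_errcount() {
	local bdev=$1
	/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
		bdev_get_iostat -b "$bdev" |
		jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
}

# The test only passes if at least one transient transport error was counted.
(( $(get_transient_errcount nvme0n1) > 0 ))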
09:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:36:11.223 09:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:36:11.223 | .driver_specific 00:36:11.223 | .nvme_error 00:36:11.223 | .status_code 00:36:11.223 | .command_transient_transport_error' 00:36:11.481 09:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 271 > 0 )) 00:36:11.481 09:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3134160 00:36:11.481 09:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3134160 ']' 00:36:11.481 09:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3134160 00:36:11.481 09:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:36:11.481 09:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:11.481 09:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3134160 00:36:11.481 09:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:11.481 09:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:11.481 09:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3134160' 00:36:11.481 killing process with pid 3134160 00:36:11.481 09:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3134160 00:36:11.481 Received shutdown signal, test time was about 2.000000 seconds 00:36:11.481 00:36:11.481 Latency(us) 00:36:11.481 [2024-11-17T08:36:16.494Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:11.481 [2024-11-17T08:36:16.494Z] =================================================================================================================== 00:36:11.481 [2024-11-17T08:36:16.494Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:11.481 09:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3134160 00:36:12.415 09:36:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 3131620 00:36:12.415 09:36:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3131620 ']' 00:36:12.415 09:36:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3131620 00:36:12.415 09:36:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:36:12.415 09:36:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:12.415 09:36:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3131620 00:36:12.674 09:36:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:12.674 09:36:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:12.674 09:36:17 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3131620' 00:36:12.674 killing process with pid 3131620 00:36:12.674 09:36:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3131620 00:36:12.674 09:36:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3131620 00:36:13.609 00:36:13.609 real 0m23.128s 00:36:13.609 user 0m45.406s 00:36:13.609 sys 0m4.685s 00:36:13.609 09:36:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:13.609 09:36:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:13.609 ************************************ 00:36:13.609 END TEST nvmf_digest_error 00:36:13.609 ************************************ 00:36:13.609 09:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:36:13.609 09:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:36:13.609 09:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:13.609 09:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:36:13.609 09:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:13.609 09:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:36:13.609 09:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:13.609 09:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:13.609 rmmod nvme_tcp 00:36:13.609 rmmod nvme_fabrics 00:36:13.609 rmmod nvme_keyring 00:36:13.609 09:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:13.609 09:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:36:13.609 09:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:36:13.609 09:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 3131620 ']' 00:36:13.609 09:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 3131620 00:36:13.609 09:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 3131620 ']' 00:36:13.609 09:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 3131620 00:36:13.609 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3131620) - No such process 00:36:13.609 09:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 3131620 is not found' 00:36:13.609 Process with pid 3131620 is not found 00:36:13.609 09:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:13.609 09:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:13.609 09:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:13.609 09:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:36:13.609 09:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:36:13.609 09:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:13.609 09:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:36:13.610 09:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:13.610 09:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:13.610 09:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:13.610 09:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:13.610 09:36:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:16.144 09:36:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:16.144 00:36:16.144 real 0m52.446s 00:36:16.144 user 1m35.031s 00:36:16.144 sys 0m11.017s 00:36:16.144 09:36:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:16.144 09:36:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:36:16.144 ************************************ 00:36:16.144 END TEST nvmf_digest 00:36:16.144 ************************************ 00:36:16.144 09:36:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:36:16.144 09:36:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:36:16.144 09:36:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:36:16.144 09:36:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:36:16.144 09:36:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:36:16.144 09:36:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:16.144 09:36:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.144 ************************************ 00:36:16.144 START TEST nvmf_bdevperf 00:36:16.144 ************************************ 00:36:16.144 09:36:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:36:16.144 * Looking for test storage... 
00:36:16.144 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:16.144 09:36:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:16.144 09:36:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:36:16.144 09:36:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:16.144 09:36:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:16.144 09:36:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:16.144 09:36:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:16.144 09:36:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:16.144 09:36:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:36:16.144 09:36:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:36:16.144 09:36:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:36:16.144 09:36:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:36:16.144 09:36:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:36:16.144 09:36:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:36:16.144 09:36:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:36:16.144 09:36:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:16.144 09:36:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:36:16.144 09:36:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:36:16.144 09:36:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:16.144 09:36:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:16.144 09:36:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:36:16.144 09:36:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:36:16.144 09:36:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:16.144 09:36:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:36:16.144 09:36:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:36:16.144 09:36:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:36:16.144 09:36:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:36:16.144 09:36:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:16.144 09:36:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:36:16.144 09:36:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:36:16.144 09:36:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:16.144 09:36:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:16.144 09:36:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:36:16.144 09:36:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:16.144 09:36:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:16.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:16.144 --rc genhtml_branch_coverage=1 00:36:16.144 --rc genhtml_function_coverage=1 00:36:16.144 --rc genhtml_legend=1 00:36:16.144 --rc geninfo_all_blocks=1 00:36:16.144 --rc geninfo_unexecuted_blocks=1 00:36:16.144 00:36:16.144 ' 00:36:16.144 09:36:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:16.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:16.144 --rc genhtml_branch_coverage=1 00:36:16.144 --rc genhtml_function_coverage=1 00:36:16.144 --rc genhtml_legend=1 00:36:16.144 --rc geninfo_all_blocks=1 00:36:16.144 --rc geninfo_unexecuted_blocks=1 00:36:16.144 00:36:16.144 ' 00:36:16.144 09:36:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:16.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:16.144 --rc genhtml_branch_coverage=1 00:36:16.144 --rc genhtml_function_coverage=1 00:36:16.144 --rc genhtml_legend=1 00:36:16.144 --rc geninfo_all_blocks=1 00:36:16.144 --rc geninfo_unexecuted_blocks=1 00:36:16.144 00:36:16.144 ' 00:36:16.144 09:36:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:16.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:16.144 --rc genhtml_branch_coverage=1 00:36:16.144 --rc genhtml_function_coverage=1 00:36:16.144 --rc genhtml_legend=1 00:36:16.144 --rc geninfo_all_blocks=1 00:36:16.144 --rc geninfo_unexecuted_blocks=1 00:36:16.144 00:36:16.144 ' 00:36:16.144 09:36:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:16.144 09:36:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:36:16.144 09:36:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:16.144 09:36:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:16.145 09:36:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:16.145 09:36:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:16.145 09:36:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:16.145 09:36:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:16.145 09:36:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:16.145 09:36:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:16.145 09:36:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:16.145 09:36:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:16.145 09:36:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:16.145 09:36:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:16.145 09:36:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:16.145 09:36:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:16.145 09:36:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:16.145 09:36:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:16.145 09:36:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:16.145 09:36:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:36:16.145 09:36:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:16.145 09:36:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:16.145 09:36:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:16.145 09:36:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:16.145 09:36:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:16.145 09:36:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:16.145 09:36:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:36:16.145 09:36:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:16.145 09:36:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:36:16.145 09:36:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:16.145 09:36:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:16.145 09:36:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:16.145 09:36:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:16.145 09:36:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:16.145 09:36:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:16.145 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:16.145 09:36:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:16.145 09:36:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:16.145 09:36:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:16.145 09:36:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:16.145 09:36:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:16.145 09:36:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:36:16.145 09:36:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:16.145 09:36:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:16.145 09:36:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:16.145 09:36:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:16.145 09:36:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:16.145 09:36:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:16.145 09:36:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:16.145 09:36:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:16.145 09:36:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:16.145 09:36:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:16.145 09:36:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:36:16.145 09:36:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:18.049 09:36:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:18.049 09:36:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:36:18.049 09:36:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:18.049 09:36:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:18.049 09:36:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:18.049 09:36:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:18.049 09:36:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:18.049 09:36:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:36:18.049 09:36:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:18.049 09:36:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:36:18.049 09:36:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:36:18.049 09:36:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:36:18.049 09:36:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:36:18.049 09:36:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:36:18.049 09:36:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:36:18.049 09:36:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:18.049 09:36:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:18.049 09:36:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:18.049 09:36:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:18.049 09:36:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:18.049 09:36:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:18.049 09:36:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:18.049 09:36:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:18.049 09:36:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:18.049 09:36:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:18.049 09:36:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:18.049 09:36:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:18.049 09:36:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:18.049 09:36:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:18.049 09:36:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:18.049 09:36:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:18.049 09:36:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:18.049 09:36:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:18.049 09:36:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:18.049 09:36:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:36:18.049 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:36:18.049 09:36:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:18.049 09:36:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:18.050 09:36:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:18.050 09:36:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:18.050 09:36:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:18.050 09:36:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:18.050 09:36:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:36:18.050 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:36:18.050 09:36:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:18.050 09:36:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:18.050 09:36:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:18.050 09:36:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:18.050 09:36:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:18.050 09:36:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:18.050 09:36:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:18.050 09:36:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:18.050 09:36:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:18.050 09:36:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:18.050 09:36:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
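For context: nvmftestinit above filters the host's PCI devices down to supported NVMe-oF NICs (the Intel E810/X722 and Mellanox IDs built into the e810/x722/mlx arrays) and then resolves each matching PCI function to its kernel net device through sysfs, which is what produces the "Found net devices under ..." lines that follow. A minimal standalone sketch of that PCI-to-netdev lookup, assuming the E810 function 0000:0a:00.0 discovered above (the helper name is illustrative):

# Sketch: list the kernel net device(s) bound to one PCI function via sysfs.
pci_to_netdevs() {
	local bdf=$1                         # e.g. 0000:0a:00.0
	local netdir=/sys/bus/pci/devices/$bdf/net
	[[ -d $netdir ]] || return 1         # no netdev exposed (device not bound to a net driver)
	ls "$netdir"
}

pci_to_netdevs 0000:0a:00.0              # prints cvl_0_0 on this test bed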
00:36:18.050 09:36:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:18.050 09:36:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:18.050 09:36:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:18.050 09:36:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:18.050 09:36:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:36:18.050 Found net devices under 0000:0a:00.0: cvl_0_0 00:36:18.050 09:36:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:18.050 09:36:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:18.050 09:36:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:18.050 09:36:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:18.050 09:36:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:18.050 09:36:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:18.050 09:36:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:18.050 09:36:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:18.050 09:36:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:18.050 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:18.050 09:36:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:18.050 09:36:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:18.050 09:36:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:36:18.050 09:36:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:18.050 09:36:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:18.050 09:36:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:18.050 09:36:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:18.050 09:36:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:18.050 09:36:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:18.050 09:36:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:18.050 09:36:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:18.050 09:36:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:18.050 09:36:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:18.050 09:36:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:18.050 09:36:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:18.050 09:36:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:18.050 09:36:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:36:18.050 09:36:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:18.050 09:36:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:18.050 09:36:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:18.050 09:36:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:18.050 09:36:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:18.050 09:36:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:18.050 09:36:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:18.050 09:36:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:18.050 09:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:18.050 09:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:18.050 09:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:18.050 09:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:18.050 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:18.050 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:36:18.050 00:36:18.050 --- 10.0.0.2 ping statistics --- 00:36:18.050 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:18.050 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:36:18.050 09:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:18.050 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:18.050 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:36:18.050 00:36:18.050 --- 10.0.0.1 ping statistics --- 00:36:18.050 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:18.050 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:36:18.050 09:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:18.050 09:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:36:18.050 09:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:18.050 09:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:18.050 09:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:18.050 09:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:18.050 09:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:18.050 09:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:18.050 09:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:18.309 09:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:36:18.309 09:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:36:18.309 09:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:18.309 09:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:18.309 09:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:18.309 09:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3136883 00:36:18.309 09:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:36:18.309 09:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3136883 00:36:18.309 09:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 3136883 ']' 00:36:18.309 09:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:18.309 09:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:18.309 09:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:18.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:18.309 09:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:18.309 09:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:18.309 [2024-11-17 09:36:23.170410] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
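The namespace plumbing traced a few lines above (before the target application is launched) moves the target-side port into its own network namespace, presumably so traffic between the two local E810 ports takes a real TCP path instead of being short-circuited by local routing, then opens port 4420 in the firewall and checks reachability in both directions. A stripped-down equivalent, with the interface names and addresses exactly as they appear in this log (the harness additionally tags the iptables rule with a comment for later cleanup):

# Target port lives in a namespace; the initiator port stays in the root namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
ping -c 1 10.0.0.2                                 # initiator -> namespaced target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator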
00:36:18.309 [2024-11-17 09:36:23.170543] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:18.309 [2024-11-17 09:36:23.318443] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:18.566 [2024-11-17 09:36:23.443521] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:18.566 [2024-11-17 09:36:23.443588] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:18.566 [2024-11-17 09:36:23.443609] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:18.566 [2024-11-17 09:36:23.443629] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:18.566 [2024-11-17 09:36:23.443646] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:18.566 [2024-11-17 09:36:23.445939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:18.566 [2024-11-17 09:36:23.446004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:18.566 [2024-11-17 09:36:23.446008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:19.499 09:36:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:19.499 09:36:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:36:19.499 09:36:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:19.499 09:36:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:19.499 09:36:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:19.499 09:36:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:19.499 09:36:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:19.499 09:36:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.499 09:36:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:19.499 [2024-11-17 09:36:24.192051] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:19.499 09:36:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.499 09:36:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:19.499 09:36:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.499 09:36:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:19.499 Malloc0 00:36:19.499 09:36:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.499 09:36:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:19.499 09:36:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.499 09:36:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:19.499 09:36:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
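For reference, the RPC sequence traced above can be replayed by hand; rpc_cmd in the trace is the autotest harness's wrapper around SPDK's scripts/rpc.py talking to the default /var/tmp/spdk.sock seen in this log. A sketch with the same arguments copied from the trace (the comments are interpretation, not part of the log):

RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"
$RPC nvmf_create_transport -t tcp -o -u 8192             # TCP transport, flags as traced above
$RPC bdev_malloc_create 64 512 -b Malloc0                # 64 MiB RAM-backed bdev, 512 B blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001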
00:36:19.499 09:36:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:19.499 09:36:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.499 09:36:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:19.499 09:36:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.499 09:36:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:19.499 09:36:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.499 09:36:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:19.499 [2024-11-17 09:36:24.303408] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:19.499 09:36:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.499 09:36:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:36:19.499 09:36:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:36:19.499 09:36:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:36:19.499 09:36:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:36:19.499 09:36:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:19.499 09:36:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:19.499 { 00:36:19.499 "params": { 00:36:19.499 "name": "Nvme$subsystem", 00:36:19.499 "trtype": "$TEST_TRANSPORT", 00:36:19.499 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:19.499 "adrfam": "ipv4", 00:36:19.499 "trsvcid": "$NVMF_PORT", 00:36:19.499 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:19.499 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:19.499 "hdgst": ${hdgst:-false}, 00:36:19.499 "ddgst": ${ddgst:-false} 00:36:19.499 }, 00:36:19.499 "method": "bdev_nvme_attach_controller" 00:36:19.499 } 00:36:19.499 EOF 00:36:19.499 )") 00:36:19.499 09:36:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:36:19.499 09:36:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:36:19.499 09:36:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:36:19.499 09:36:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:19.499 "params": { 00:36:19.499 "name": "Nvme1", 00:36:19.499 "trtype": "tcp", 00:36:19.499 "traddr": "10.0.0.2", 00:36:19.499 "adrfam": "ipv4", 00:36:19.499 "trsvcid": "4420", 00:36:19.499 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:19.499 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:19.499 "hdgst": false, 00:36:19.499 "ddgst": false 00:36:19.499 }, 00:36:19.499 "method": "bdev_nvme_attach_controller" 00:36:19.499 }' 00:36:19.499 [2024-11-17 09:36:24.390895] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
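With Malloc0 attached as a namespace and a TCP listener opened on 10.0.0.2:4420 above, bdevperf is pointed at the target through a JSON config delivered over /dev/fd/62. Written out to an ordinary file the same config would drive the same run; note that the outer "subsystems"/"config" wrapper below is an assumption based on SPDK's usual JSON config layout, the file name and relative bdevperf path are illustrative, and only the inner bdev_nvme_attach_controller entry appears verbatim in this log.

# Hypothetical file-based equivalent of the /dev/fd/62 config used above.
cat > /tmp/nvmf_bdevperf.json <<'JSON'
{ "subsystems": [ { "subsystem": "bdev", "config": [ {
    "method": "bdev_nvme_attach_controller",
    "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                "adrfam": "ipv4", "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false, "ddgst": false } } ] } ] }
JSON
./build/examples/bdevperf --json /tmp/nvmf_bdevperf.json -q 128 -o 4096 -w verify -t 1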
00:36:19.499 [2024-11-17 09:36:24.391013] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3137034 ] 00:36:19.757 [2024-11-17 09:36:24.523665] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:19.757 [2024-11-17 09:36:24.649758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:20.324 Running I/O for 1 seconds... 00:36:21.258 6197.00 IOPS, 24.21 MiB/s 00:36:21.258 Latency(us) 00:36:21.258 [2024-11-17T08:36:26.271Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:21.258 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:36:21.258 Verification LBA range: start 0x0 length 0x4000 00:36:21.258 Nvme1n1 : 1.06 5995.99 23.42 0.00 0.00 20453.14 4514.70 47185.92 00:36:21.258 [2024-11-17T08:36:26.271Z] =================================================================================================================== 00:36:21.258 [2024-11-17T08:36:26.271Z] Total : 5995.99 23.42 0.00 0.00 20453.14 4514.70 47185.92 00:36:22.192 09:36:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3137315 00:36:22.193 09:36:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:36:22.193 09:36:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:36:22.193 09:36:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:36:22.193 09:36:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:36:22.193 09:36:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:36:22.193 09:36:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:22.193 09:36:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:22.193 { 00:36:22.193 "params": { 00:36:22.193 "name": "Nvme$subsystem", 00:36:22.193 "trtype": "$TEST_TRANSPORT", 00:36:22.193 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:22.193 "adrfam": "ipv4", 00:36:22.193 "trsvcid": "$NVMF_PORT", 00:36:22.193 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:22.193 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:22.193 "hdgst": ${hdgst:-false}, 00:36:22.193 "ddgst": ${ddgst:-false} 00:36:22.193 }, 00:36:22.193 "method": "bdev_nvme_attach_controller" 00:36:22.193 } 00:36:22.193 EOF 00:36:22.193 )") 00:36:22.193 09:36:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:36:22.193 09:36:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
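A quick cross-check of the summary table above: bdevperf's MiB/s column is simply IOPS multiplied by the 4096-byte IO size selected with -o, so the reported figures are self-consistent. The second bdevperf instance configured below reuses the same target JSON but runs for 15 seconds (-t 15), long enough for the script to kill -9 the target mid-run, as the trace further below shows.

# IOPS -> MiB/s with 4 KiB IOs (1 MiB = 1048576 bytes)
echo "6197.00 * 4096 / 1048576" | bc -l    # ~24.21 MiB/s, matching the warm-up line
echo "5995.99 * 4096 / 1048576" | bc -l    # ~23.42 MiB/s for the 1.06 s verify pass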
00:36:22.193 09:36:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:36:22.193 09:36:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:22.193 "params": { 00:36:22.193 "name": "Nvme1", 00:36:22.193 "trtype": "tcp", 00:36:22.193 "traddr": "10.0.0.2", 00:36:22.193 "adrfam": "ipv4", 00:36:22.193 "trsvcid": "4420", 00:36:22.193 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:22.193 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:22.193 "hdgst": false, 00:36:22.193 "ddgst": false 00:36:22.193 }, 00:36:22.193 "method": "bdev_nvme_attach_controller" 00:36:22.193 }' 00:36:22.193 [2024-11-17 09:36:27.025401] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:36:22.193 [2024-11-17 09:36:27.025544] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3137315 ] 00:36:22.193 [2024-11-17 09:36:27.160236] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:22.451 [2024-11-17 09:36:27.285828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:23.016 Running I/O for 15 seconds... 00:36:24.882 6219.00 IOPS, 24.29 MiB/s [2024-11-17T08:36:30.156Z] 6257.50 IOPS, 24.44 MiB/s [2024-11-17T08:36:30.156Z] 09:36:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3136883 00:36:25.143 09:36:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:36:25.143 [2024-11-17 09:36:29.969259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:104480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:25.143 [2024-11-17 09:36:29.969335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.143 [2024-11-17 09:36:29.969417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:104488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:25.143 [2024-11-17 09:36:29.969445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.143 [2024-11-17 09:36:29.969487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:104496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:25.143 [2024-11-17 09:36:29.969510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.143 [2024-11-17 09:36:29.969551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:104504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:25.143 [2024-11-17 09:36:29.969573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.143 [2024-11-17 09:36:29.969605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:104512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:25.143 [2024-11-17 09:36:29.969627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.143 [2024-11-17 09:36:29.969666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:104520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:25.143 [2024-11-17 
09:36:29.969686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.143 [2024-11-17 09:36:29.969710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:104528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:25.143 [2024-11-17 09:36:29.969747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.143 [2024-11-17 09:36:29.969770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:104536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:25.143 [2024-11-17 09:36:29.969790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.144 [2024-11-17 09:36:29.969812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:104544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:25.144 [2024-11-17 09:36:29.969832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.144 [2024-11-17 09:36:29.969854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:104552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:25.144 [2024-11-17 09:36:29.969891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.144 [2024-11-17 09:36:29.969915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:104560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:25.144 [2024-11-17 09:36:29.969936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.144 [2024-11-17 09:36:29.969958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:104568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:25.144 [2024-11-17 09:36:29.969978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.144 [2024-11-17 09:36:29.970000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:104576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:25.144 [2024-11-17 09:36:29.970035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.144 [2024-11-17 09:36:29.970057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:104584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:25.144 [2024-11-17 09:36:29.970077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.144 [2024-11-17 09:36:29.970098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:104592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:25.144 [2024-11-17 09:36:29.970118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.144 [2024-11-17 09:36:29.970154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:104600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:25.144 [2024-11-17 09:36:29.970172] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.144 [2024-11-17 09:36:29.970194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:104608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:25.144 [2024-11-17 09:36:29.970217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.144 [2024-11-17 09:36:29.970239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:104616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:25.144 [2024-11-17 09:36:29.970258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.144 [2024-11-17 09:36:29.970279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:104624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:25.144 [2024-11-17 09:36:29.970297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.144 [2024-11-17 09:36:29.970318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:104632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:25.144 [2024-11-17 09:36:29.970337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.144 [2024-11-17 09:36:29.970382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:104640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:25.144 [2024-11-17 09:36:29.970405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.144 [2024-11-17 09:36:29.970427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:104648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:25.144 [2024-11-17 09:36:29.970449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.144 [2024-11-17 09:36:29.970472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:104656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:25.144 [2024-11-17 09:36:29.970492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.144 [2024-11-17 09:36:29.970514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:104664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:25.144 [2024-11-17 09:36:29.970534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.144 [2024-11-17 09:36:29.970556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:104672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:25.144 [2024-11-17 09:36:29.970577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.144 [2024-11-17 09:36:29.970599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:104680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:25.144 [2024-11-17 09:36:29.970619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.144 [2024-11-17 09:36:29.970642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:104688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:25.144 [2024-11-17 09:36:29.970676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.144 [2024-11-17 09:36:29.970699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:104696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:25.144 [2024-11-17 09:36:29.970719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.144 [2024-11-17 09:36:29.970756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:104704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:25.144 [2024-11-17 09:36:29.970774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.144 [2024-11-17 09:36:29.970799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:104712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:25.144 [2024-11-17 09:36:29.970819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.144 [2024-11-17 09:36:29.970840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:104720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:25.144 [2024-11-17 09:36:29.970858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.144 [2024-11-17 09:36:29.970879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:104728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:25.144 [2024-11-17 09:36:29.970898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.144 [2024-11-17 09:36:29.970918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:104736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:25.144 [2024-11-17 09:36:29.970937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.144 [2024-11-17 09:36:29.970957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:104744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:25.144 [2024-11-17 09:36:29.970976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.144 [2024-11-17 09:36:29.970996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:104752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:25.144 [2024-11-17 09:36:29.971015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.144 [2024-11-17 09:36:29.971035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:104760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:25.144 [2024-11-17 09:36:29.971054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.144 [2024-11-17 09:36:29.971074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:104768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:25.144 [2024-11-17 09:36:29.971093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.144 [2024-11-17 09:36:29.971113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:104776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:25.144 [2024-11-17 09:36:29.971132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.144 [2024-11-17 09:36:29.971152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:25.144 [2024-11-17 09:36:29.971171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.144 [2024-11-17 09:36:29.971192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:104792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:25.144 [2024-11-17 09:36:29.971211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.144 [2024-11-17 09:36:29.971231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:104800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:25.144 [2024-11-17 09:36:29.971250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.144 [2024-11-17 09:36:29.971271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:104808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:25.144 [2024-11-17 09:36:29.971293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.144 [2024-11-17 09:36:29.971314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:104816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:25.144 [2024-11-17 09:36:29.971333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.144 [2024-11-17 09:36:29.971378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:104824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:25.144 [2024-11-17 09:36:29.971402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.144 [2024-11-17 09:36:29.971425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:104832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:25.144 [2024-11-17 09:36:29.971445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.144 [2024-11-17 09:36:29.971467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:104840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:25.144 [2024-11-17 09:36:29.971487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.144 
[2024-11-17 09:36:29.971509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:104848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:25.144 [2024-11-17 09:36:29.971528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.145 [2024-11-17 09:36:29.971550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:104856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:25.145 [2024-11-17 09:36:29.971570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.145 [2024-11-17 09:36:29.971592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:104864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:25.145 [2024-11-17 09:36:29.971611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.145 [2024-11-17 09:36:29.971633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:104872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:25.145 [2024-11-17 09:36:29.971669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.145 [2024-11-17 09:36:29.971691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:104880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:25.145 [2024-11-17 09:36:29.971725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.145 [2024-11-17 09:36:29.971748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:104888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:25.145 [2024-11-17 09:36:29.971767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.145 [2024-11-17 09:36:29.971788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:104896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:25.145 [2024-11-17 09:36:29.971807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.145 [2024-11-17 09:36:29.971828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:104904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:25.145 [2024-11-17 09:36:29.971846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.145 [2024-11-17 09:36:29.971872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:104912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:25.145 [2024-11-17 09:36:29.971891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.145 [2024-11-17 09:36:29.971913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:104920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:25.145 [2024-11-17 09:36:29.971932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.145 [2024-11-17 09:36:29.971953] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:104928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:25.145 [2024-11-17 09:36:29.971971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.145 [2024-11-17 09:36:29.971992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:104936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:25.145 [2024-11-17 09:36:29.972011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.145 [2024-11-17 09:36:29.972032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:104944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:25.145 [2024-11-17 09:36:29.972051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.145 [2024-11-17 09:36:29.972071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:104952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:25.145 [2024-11-17 09:36:29.972090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.145 [2024-11-17 09:36:29.972111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:104960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:25.145 [2024-11-17 09:36:29.972130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.145 [2024-11-17 09:36:29.972150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:104968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:25.145 [2024-11-17 09:36:29.972169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.145 [2024-11-17 09:36:29.972190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:104976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:25.145 [2024-11-17 09:36:29.972208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.145 [2024-11-17 09:36:29.972229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:104984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:25.145 [2024-11-17 09:36:29.972248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.145 [2024-11-17 09:36:29.972269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:104992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:25.145 [2024-11-17 09:36:29.972287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.145 [2024-11-17 09:36:29.972308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:105000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:25.145 [2024-11-17 09:36:29.972327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.145 [2024-11-17 09:36:29.972362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:108 nsid:1 lba:105008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:25.145 [2024-11-17 09:36:29.972392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.145 [2024-11-17 09:36:29.972420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:105016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:25.145 [2024-11-17 09:36:29.972441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.145 [2024-11-17 09:36:29.972463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:105024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:25.145 [2024-11-17 09:36:29.972483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.145 [2024-11-17 09:36:29.972505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:105032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:25.145 [2024-11-17 09:36:29.972525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.145 [2024-11-17 09:36:29.972546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:105040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:25.145 [2024-11-17 09:36:29.972566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.145 [2024-11-17 09:36:29.972589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:105048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:25.145 [2024-11-17 09:36:29.972609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.145 [2024-11-17 09:36:29.972630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:105056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:25.145 [2024-11-17 09:36:29.972664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.145 [2024-11-17 09:36:29.972686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:105064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:25.145 [2024-11-17 09:36:29.972716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.145 [2024-11-17 09:36:29.972740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:105072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:25.145 [2024-11-17 09:36:29.972759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.145 [2024-11-17 09:36:29.972779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:105080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:25.145 [2024-11-17 09:36:29.972798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.145 [2024-11-17 09:36:29.972819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:105088 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:25.145 [2024-11-17 09:36:29.972838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.145 [2024-11-17 09:36:29.972858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:105096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:25.145 [2024-11-17 09:36:29.972877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.145 [2024-11-17 09:36:29.972897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:105104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:25.145 [2024-11-17 09:36:29.972916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.145 [2024-11-17 09:36:29.972936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:105112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:25.145 [2024-11-17 09:36:29.972959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.145 [2024-11-17 09:36:29.972980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:105120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:25.145 [2024-11-17 09:36:29.972999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.145 [2024-11-17 09:36:29.973020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:105128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:25.145 [2024-11-17 09:36:29.973038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.145 [2024-11-17 09:36:29.973059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:105136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:25.145 [2024-11-17 09:36:29.973077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.145 [2024-11-17 09:36:29.973097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:105144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:25.145 [2024-11-17 09:36:29.973115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.145 [2024-11-17 09:36:29.973136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:105152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:25.145 [2024-11-17 09:36:29.973155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.145 [2024-11-17 09:36:29.973175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:105160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:25.145 [2024-11-17 09:36:29.973194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.146 [2024-11-17 09:36:29.973214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:105168 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:36:25.146 [2024-11-17 09:36:29.973233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.146 [2024-11-17 09:36:29.973255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:105176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:25.146 [2024-11-17 09:36:29.973274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.146 [2024-11-17 09:36:29.973295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:105184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:25.146 [2024-11-17 09:36:29.973314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.146 [2024-11-17 09:36:29.973334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:105192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:25.146 [2024-11-17 09:36:29.973375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.146 [2024-11-17 09:36:29.973401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:105200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:25.146 [2024-11-17 09:36:29.973422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.146 [2024-11-17 09:36:29.973444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:105208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:25.146 [2024-11-17 09:36:29.973464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.146 [2024-11-17 09:36:29.973490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:105216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:25.146 [2024-11-17 09:36:29.973511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.146 [2024-11-17 09:36:29.973533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:105224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:25.146 [2024-11-17 09:36:29.973553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.146 [2024-11-17 09:36:29.973576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:104216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:25.146 [2024-11-17 09:36:29.973596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.146 [2024-11-17 09:36:29.973618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:104224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:25.146 [2024-11-17 09:36:29.973638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.146 [2024-11-17 09:36:29.973675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:104232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:25.146 [2024-11-17 
09:36:29.973694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.146 [2024-11-17 09:36:29.973715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:104240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:25.146 [2024-11-17 09:36:29.973734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.146 [2024-11-17 09:36:29.973755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:104248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:25.146 [2024-11-17 09:36:29.973773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.146 [2024-11-17 09:36:29.973794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:104256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:25.146 [2024-11-17 09:36:29.973813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.146 [2024-11-17 09:36:29.973833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:104264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:25.146 [2024-11-17 09:36:29.973852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.146 [2024-11-17 09:36:29.973873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:104272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:25.146 [2024-11-17 09:36:29.973892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.146 [2024-11-17 09:36:29.973913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:104280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:25.146 [2024-11-17 09:36:29.973931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.146 [2024-11-17 09:36:29.973952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:104288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:25.146 [2024-11-17 09:36:29.973971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.146 [2024-11-17 09:36:29.973991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:104296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:25.146 [2024-11-17 09:36:29.974014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.146 [2024-11-17 09:36:29.974035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:104304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:25.146 [2024-11-17 09:36:29.974054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.146 [2024-11-17 09:36:29.974075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:104312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:25.146 [2024-11-17 09:36:29.974093] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.146 [2024-11-17 09:36:29.974113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:104320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:25.146 [2024-11-17 09:36:29.974132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.146 [2024-11-17 09:36:29.974153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:104328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:25.146 [2024-11-17 09:36:29.974172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.146 [2024-11-17 09:36:29.974193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:104336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:25.146 [2024-11-17 09:36:29.974211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.146 [2024-11-17 09:36:29.974231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:104344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:25.146 [2024-11-17 09:36:29.974250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.146 [2024-11-17 09:36:29.974270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:104352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:25.146 [2024-11-17 09:36:29.974289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.146 [2024-11-17 09:36:29.974309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:104360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:25.146 [2024-11-17 09:36:29.974328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.146 [2024-11-17 09:36:29.974363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:104368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:25.146 [2024-11-17 09:36:29.974392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.146 [2024-11-17 09:36:29.974416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:104376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:25.146 [2024-11-17 09:36:29.974437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.146 [2024-11-17 09:36:29.974460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:104384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:25.146 [2024-11-17 09:36:29.974480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.146 [2024-11-17 09:36:29.974502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:104392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:25.146 [2024-11-17 09:36:29.974523] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.146 [2024-11-17 09:36:29.974549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:105232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:25.146 [2024-11-17 09:36:29.974570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.146 [2024-11-17 09:36:29.974593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:104400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:25.146 [2024-11-17 09:36:29.974614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.146 [2024-11-17 09:36:29.974636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:104408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:25.146 [2024-11-17 09:36:29.974672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.146 [2024-11-17 09:36:29.974694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:104416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:25.146 [2024-11-17 09:36:29.974728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.146 [2024-11-17 09:36:29.974750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:104424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:25.146 [2024-11-17 09:36:29.974769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.146 [2024-11-17 09:36:29.974789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:104432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:25.146 [2024-11-17 09:36:29.974808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.146 [2024-11-17 09:36:29.974829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:104440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:25.146 [2024-11-17 09:36:29.974848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.146 [2024-11-17 09:36:29.974868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:104448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:25.146 [2024-11-17 09:36:29.974886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.146 [2024-11-17 09:36:29.974908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:104456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:25.147 [2024-11-17 09:36:29.974927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.147 [2024-11-17 09:36:29.974947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:104464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:25.147 [2024-11-17 09:36:29.974965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.147 [2024-11-17 09:36:29.974984] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2f00 is same with the state(6) to be set 00:36:25.147 [2024-11-17 09:36:29.975013] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:36:25.147 [2024-11-17 09:36:29.975030] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:36:25.147 [2024-11-17 09:36:29.975048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:104472 len:8 PRP1 0x0 PRP2 0x0 00:36:25.147 [2024-11-17 09:36:29.975065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:25.147 [2024-11-17 09:36:29.979106] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:25.147 [2024-11-17 09:36:29.979229] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:25.147 [2024-11-17 09:36:29.980163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.147 [2024-11-17 09:36:29.980227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:25.147 [2024-11-17 09:36:29.980255] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:25.147 [2024-11-17 09:36:29.980615] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:25.147 [2024-11-17 09:36:29.980913] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:25.147 [2024-11-17 09:36:29.980939] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:25.147 [2024-11-17 09:36:29.980962] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:25.147 [2024-11-17 09:36:29.980984] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
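Note on the completion dump above: every queued I/O is printed with "ABORTED - SQ DELETION (00/08)". The pair in parentheses is the status code type and status code in hex; per the NVMe base specification, SCT 0x0 is the generic command status set and SC 0x08 is "Command Aborted due to SQ Deletion", which is what the host reports once submission queue 1 is torn down during the reset. Below is a minimal standalone sketch (plain C, not SPDK source; the struct and function names are illustrative only) of how a 16-bit completion status field can be split into those fields.

/* Illustrative decode of an NVMe completion status word into the
 * (SCT/SC) pair printed in the log lines above. Not SPDK code. */
#include <stdint.h>
#include <stdio.h>

struct nvme_status_fields {
    uint8_t sc;   /* Status Code      (bits 8:1)  */
    uint8_t sct;  /* Status Code Type (bits 11:9) */
    uint8_t dnr;  /* Do Not Retry     (bit 15)    */
};

static struct nvme_status_fields decode_status(uint16_t status)
{
    struct nvme_status_fields f;
    f.sc  = (status >> 1) & 0xff;
    f.sct = (status >> 9) & 0x7;
    f.dnr = (status >> 15) & 0x1;
    return f;
}

int main(void)
{
    /* SCT 0x0 / SC 0x08 corresponds to "ABORTED - SQ DELETION (00/08)". */
    uint16_t status = (uint16_t)((0x0 << 9) | (0x08 << 1));
    struct nvme_status_fields f = decode_status(status);
    printf("sct=%#x sc=%#x dnr=%u\n",
           (unsigned)f.sct, (unsigned)f.sc, (unsigned)f.dnr);
    return 0;
}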
00:36:25.147 [2024-11-17 09:36:29.995074] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:25.147 [2024-11-17 09:36:29.995591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.147 [2024-11-17 09:36:29.995635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:25.147 [2024-11-17 09:36:29.995662] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:25.147 [2024-11-17 09:36:29.995999] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:25.147 [2024-11-17 09:36:29.996337] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:25.147 [2024-11-17 09:36:29.996392] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:25.147 [2024-11-17 09:36:29.996415] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:25.147 [2024-11-17 09:36:29.996438] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:25.147 [2024-11-17 09:36:30.011106] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:25.147 [2024-11-17 09:36:30.011648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.147 [2024-11-17 09:36:30.011694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:25.147 [2024-11-17 09:36:30.011722] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:25.147 [2024-11-17 09:36:30.012063] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:25.147 [2024-11-17 09:36:30.012419] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:25.147 [2024-11-17 09:36:30.012452] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:25.147 [2024-11-17 09:36:30.012475] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:25.147 [2024-11-17 09:36:30.012497] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:25.147 [2024-11-17 09:36:30.026687] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:25.147 [2024-11-17 09:36:30.027199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.147 [2024-11-17 09:36:30.027247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:25.147 [2024-11-17 09:36:30.027275] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:25.147 [2024-11-17 09:36:30.027628] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:25.147 [2024-11-17 09:36:30.027971] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:25.147 [2024-11-17 09:36:30.028001] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:25.147 [2024-11-17 09:36:30.028024] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:25.147 [2024-11-17 09:36:30.028046] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:25.147 [2024-11-17 09:36:30.042222] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:25.147 [2024-11-17 09:36:30.042755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.147 [2024-11-17 09:36:30.042798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:25.147 [2024-11-17 09:36:30.042825] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:25.147 [2024-11-17 09:36:30.043183] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:25.147 [2024-11-17 09:36:30.043538] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:25.147 [2024-11-17 09:36:30.043570] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:25.147 [2024-11-17 09:36:30.043593] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:25.147 [2024-11-17 09:36:30.043615] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:25.147 [2024-11-17 09:36:30.057940] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:25.147 [2024-11-17 09:36:30.058473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.147 [2024-11-17 09:36:30.058516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:25.147 [2024-11-17 09:36:30.058543] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:25.147 [2024-11-17 09:36:30.058886] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:25.147 [2024-11-17 09:36:30.059229] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:25.147 [2024-11-17 09:36:30.059260] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:25.147 [2024-11-17 09:36:30.059282] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:25.147 [2024-11-17 09:36:30.059303] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:25.147 [2024-11-17 09:36:30.073646] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:25.147 [2024-11-17 09:36:30.074153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.147 [2024-11-17 09:36:30.074194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:25.147 [2024-11-17 09:36:30.074220] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:25.147 [2024-11-17 09:36:30.074581] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:25.147 [2024-11-17 09:36:30.074927] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:25.147 [2024-11-17 09:36:30.074957] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:25.147 [2024-11-17 09:36:30.074980] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:25.147 [2024-11-17 09:36:30.075001] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:25.147 [2024-11-17 09:36:30.089235] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:25.147 [2024-11-17 09:36:30.089788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.147 [2024-11-17 09:36:30.089831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:25.147 [2024-11-17 09:36:30.089857] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:25.147 [2024-11-17 09:36:30.090194] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:25.147 [2024-11-17 09:36:30.090547] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:25.147 [2024-11-17 09:36:30.090580] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:25.147 [2024-11-17 09:36:30.090602] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:25.147 [2024-11-17 09:36:30.090624] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:25.148 [2024-11-17 09:36:30.104791] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:25.148 [2024-11-17 09:36:30.105317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.148 [2024-11-17 09:36:30.105353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:25.148 [2024-11-17 09:36:30.105402] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:25.148 [2024-11-17 09:36:30.105752] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:25.148 [2024-11-17 09:36:30.106111] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:25.148 [2024-11-17 09:36:30.106142] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:25.148 [2024-11-17 09:36:30.106165] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:25.148 [2024-11-17 09:36:30.106186] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:25.148 [2024-11-17 09:36:30.120385] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:25.148 [2024-11-17 09:36:30.120915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.148 [2024-11-17 09:36:30.120957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:25.148 [2024-11-17 09:36:30.120983] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:25.148 [2024-11-17 09:36:30.121319] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:25.148 [2024-11-17 09:36:30.121672] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:25.148 [2024-11-17 09:36:30.121709] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:25.148 [2024-11-17 09:36:30.121732] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:25.148 [2024-11-17 09:36:30.121754] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:25.148 [2024-11-17 09:36:30.135884] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:25.148 [2024-11-17 09:36:30.136399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.148 [2024-11-17 09:36:30.136440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:25.148 [2024-11-17 09:36:30.136466] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:25.148 [2024-11-17 09:36:30.136804] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:25.148 [2024-11-17 09:36:30.137142] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:25.148 [2024-11-17 09:36:30.137173] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:25.148 [2024-11-17 09:36:30.137194] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:25.148 [2024-11-17 09:36:30.137216] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:25.148 [2024-11-17 09:36:30.151351] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:25.407 [2024-11-17 09:36:30.151879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.407 [2024-11-17 09:36:30.151921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:25.407 [2024-11-17 09:36:30.151947] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:25.407 [2024-11-17 09:36:30.152284] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:25.407 [2024-11-17 09:36:30.152635] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:25.407 [2024-11-17 09:36:30.152667] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:25.407 [2024-11-17 09:36:30.152690] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:25.407 [2024-11-17 09:36:30.152712] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:25.407 [2024-11-17 09:36:30.166810] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:25.407 [2024-11-17 09:36:30.167334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.407 [2024-11-17 09:36:30.167386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:25.407 [2024-11-17 09:36:30.167414] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:25.408 [2024-11-17 09:36:30.167750] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:25.408 [2024-11-17 09:36:30.168088] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:25.408 [2024-11-17 09:36:30.168119] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:25.408 [2024-11-17 09:36:30.168141] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:25.408 [2024-11-17 09:36:30.168172] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:25.408 [2024-11-17 09:36:30.182298] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:25.408 [2024-11-17 09:36:30.182817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.408 [2024-11-17 09:36:30.182858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:25.408 [2024-11-17 09:36:30.182884] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:25.408 [2024-11-17 09:36:30.183220] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:25.408 [2024-11-17 09:36:30.183603] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:25.408 [2024-11-17 09:36:30.183635] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:25.408 [2024-11-17 09:36:30.183658] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:25.408 [2024-11-17 09:36:30.183680] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:25.408 [2024-11-17 09:36:30.197797] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:25.408 [2024-11-17 09:36:30.198309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.408 [2024-11-17 09:36:30.198350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:25.408 [2024-11-17 09:36:30.198389] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:25.408 [2024-11-17 09:36:30.198728] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:25.408 [2024-11-17 09:36:30.199067] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:25.408 [2024-11-17 09:36:30.199098] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:25.408 [2024-11-17 09:36:30.199120] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:25.408 [2024-11-17 09:36:30.199142] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:25.408 [2024-11-17 09:36:30.213441] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:25.408 [2024-11-17 09:36:30.213960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.408 [2024-11-17 09:36:30.214003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:25.408 [2024-11-17 09:36:30.214030] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:25.408 [2024-11-17 09:36:30.214380] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:25.408 [2024-11-17 09:36:30.214719] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:25.408 [2024-11-17 09:36:30.214749] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:25.408 [2024-11-17 09:36:30.214772] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:25.408 [2024-11-17 09:36:30.214810] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:25.408 [2024-11-17 09:36:30.228942] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:25.408 [2024-11-17 09:36:30.229452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.408 [2024-11-17 09:36:30.229495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:25.408 [2024-11-17 09:36:30.229522] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:25.408 [2024-11-17 09:36:30.229859] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:25.408 [2024-11-17 09:36:30.230197] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:25.408 [2024-11-17 09:36:30.230228] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:25.408 [2024-11-17 09:36:30.230250] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:25.408 [2024-11-17 09:36:30.230272] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:25.408 [2024-11-17 09:36:30.244381] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:25.408 [2024-11-17 09:36:30.244883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.408 [2024-11-17 09:36:30.244924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:25.408 [2024-11-17 09:36:30.244950] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:25.408 [2024-11-17 09:36:30.245286] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:25.408 [2024-11-17 09:36:30.245637] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:25.408 [2024-11-17 09:36:30.245669] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:25.408 [2024-11-17 09:36:30.245692] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:25.408 [2024-11-17 09:36:30.245713] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:25.408 [2024-11-17 09:36:30.259829] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:25.408 [2024-11-17 09:36:30.260306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.408 [2024-11-17 09:36:30.260347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:25.408 [2024-11-17 09:36:30.260382] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:25.408 [2024-11-17 09:36:30.260722] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:25.408 [2024-11-17 09:36:30.261059] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:25.408 [2024-11-17 09:36:30.261090] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:25.408 [2024-11-17 09:36:30.261112] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:25.408 [2024-11-17 09:36:30.261134] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:25.408 [2024-11-17 09:36:30.275336] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:25.408 [2024-11-17 09:36:30.275816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.408 [2024-11-17 09:36:30.275858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:25.408 [2024-11-17 09:36:30.275890] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:25.408 [2024-11-17 09:36:30.276229] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:25.408 [2024-11-17 09:36:30.276588] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:25.408 [2024-11-17 09:36:30.276620] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:25.408 [2024-11-17 09:36:30.276643] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:25.408 [2024-11-17 09:36:30.276667] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:25.408 [2024-11-17 09:36:30.290915] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:25.408 [2024-11-17 09:36:30.291499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.408 [2024-11-17 09:36:30.291542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:25.408 [2024-11-17 09:36:30.291569] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:25.408 [2024-11-17 09:36:30.291908] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:25.408 [2024-11-17 09:36:30.292247] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:25.408 [2024-11-17 09:36:30.292278] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:25.408 [2024-11-17 09:36:30.292300] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:25.408 [2024-11-17 09:36:30.292322] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:25.408 [2024-11-17 09:36:30.306553] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:25.408 [2024-11-17 09:36:30.307071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.408 [2024-11-17 09:36:30.307113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:25.408 [2024-11-17 09:36:30.307141] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:25.408 [2024-11-17 09:36:30.307497] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:25.408 [2024-11-17 09:36:30.307836] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:25.408 [2024-11-17 09:36:30.307867] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:25.408 [2024-11-17 09:36:30.307890] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:25.408 [2024-11-17 09:36:30.307911] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:25.409 [2024-11-17 09:36:30.322019] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:25.409 [2024-11-17 09:36:30.322552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.409 [2024-11-17 09:36:30.322594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:25.409 [2024-11-17 09:36:30.322621] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:25.409 [2024-11-17 09:36:30.322958] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:25.409 [2024-11-17 09:36:30.323302] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:25.409 [2024-11-17 09:36:30.323333] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:25.409 [2024-11-17 09:36:30.323356] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:25.409 [2024-11-17 09:36:30.323390] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:25.409 [2024-11-17 09:36:30.337508] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:25.409 [2024-11-17 09:36:30.338033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.409 [2024-11-17 09:36:30.338074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:25.409 [2024-11-17 09:36:30.338101] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:25.409 [2024-11-17 09:36:30.338450] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:25.409 [2024-11-17 09:36:30.338798] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:25.409 [2024-11-17 09:36:30.338828] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:25.409 [2024-11-17 09:36:30.338850] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:25.409 [2024-11-17 09:36:30.338872] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:25.409 [2024-11-17 09:36:30.353004] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:25.409 [2024-11-17 09:36:30.353527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.409 [2024-11-17 09:36:30.353568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:25.409 [2024-11-17 09:36:30.353593] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:25.409 [2024-11-17 09:36:30.353943] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:25.409 [2024-11-17 09:36:30.354281] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:25.409 [2024-11-17 09:36:30.354312] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:25.409 [2024-11-17 09:36:30.354334] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:25.409 [2024-11-17 09:36:30.354355] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:25.409 [2024-11-17 09:36:30.368522] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:25.409 [2024-11-17 09:36:30.369049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.409 [2024-11-17 09:36:30.369091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:25.409 [2024-11-17 09:36:30.369118] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:25.409 [2024-11-17 09:36:30.369477] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:25.409 [2024-11-17 09:36:30.369816] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:25.409 [2024-11-17 09:36:30.369853] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:25.409 [2024-11-17 09:36:30.369877] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:25.409 [2024-11-17 09:36:30.369899] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:25.409 [2024-11-17 09:36:30.383984] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:25.409 [2024-11-17 09:36:30.384503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.409 [2024-11-17 09:36:30.384545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:25.409 [2024-11-17 09:36:30.384571] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:25.409 [2024-11-17 09:36:30.384906] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:25.409 [2024-11-17 09:36:30.385243] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:25.409 [2024-11-17 09:36:30.385274] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:25.409 [2024-11-17 09:36:30.385296] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:25.409 [2024-11-17 09:36:30.385318] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:25.409 [2024-11-17 09:36:30.399442] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:25.409 [2024-11-17 09:36:30.399969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.409 [2024-11-17 09:36:30.400011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:25.409 [2024-11-17 09:36:30.400038] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:25.409 [2024-11-17 09:36:30.400383] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:25.409 [2024-11-17 09:36:30.400733] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:25.409 [2024-11-17 09:36:30.400765] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:25.409 [2024-11-17 09:36:30.400788] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:25.409 [2024-11-17 09:36:30.400809] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:25.409 [2024-11-17 09:36:30.414922] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:25.409 [2024-11-17 09:36:30.415413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.409 [2024-11-17 09:36:30.415456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:25.409 [2024-11-17 09:36:30.415483] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:25.409 [2024-11-17 09:36:30.415819] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:25.409 [2024-11-17 09:36:30.416156] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:25.409 [2024-11-17 09:36:30.416186] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:25.409 [2024-11-17 09:36:30.416208] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:25.409 [2024-11-17 09:36:30.416236] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:25.669 [2024-11-17 09:36:30.430334] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:25.669 [2024-11-17 09:36:30.430846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.669 [2024-11-17 09:36:30.430888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:25.669 [2024-11-17 09:36:30.430914] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:25.669 [2024-11-17 09:36:30.431249] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:25.669 [2024-11-17 09:36:30.431599] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:25.669 [2024-11-17 09:36:30.431632] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:25.669 [2024-11-17 09:36:30.431654] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:25.669 [2024-11-17 09:36:30.431676] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:25.669 [2024-11-17 09:36:30.445766] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:25.669 [2024-11-17 09:36:30.446257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.669 [2024-11-17 09:36:30.446299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:25.669 [2024-11-17 09:36:30.446326] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:25.669 [2024-11-17 09:36:30.446676] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:25.669 [2024-11-17 09:36:30.447015] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:25.669 [2024-11-17 09:36:30.447046] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:25.669 [2024-11-17 09:36:30.447069] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:25.669 [2024-11-17 09:36:30.447090] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:25.669 [2024-11-17 09:36:30.461169] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:25.669 [2024-11-17 09:36:30.461699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.669 [2024-11-17 09:36:30.461741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:25.669 [2024-11-17 09:36:30.461768] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:25.669 [2024-11-17 09:36:30.462102] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:25.669 [2024-11-17 09:36:30.462452] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:25.669 [2024-11-17 09:36:30.462485] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:25.669 [2024-11-17 09:36:30.462507] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:25.669 [2024-11-17 09:36:30.462529] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:25.669 [2024-11-17 09:36:30.476573] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:25.669 [2024-11-17 09:36:30.477087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.669 [2024-11-17 09:36:30.477129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:25.669 [2024-11-17 09:36:30.477155] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:25.669 [2024-11-17 09:36:30.477504] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:25.669 [2024-11-17 09:36:30.477841] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:25.669 [2024-11-17 09:36:30.477871] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:25.669 [2024-11-17 09:36:30.477894] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:25.669 [2024-11-17 09:36:30.477916] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:25.669 [2024-11-17 09:36:30.492005] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:25.669 [2024-11-17 09:36:30.492537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.669 [2024-11-17 09:36:30.492579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:25.669 [2024-11-17 09:36:30.492606] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:25.669 [2024-11-17 09:36:30.492941] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:25.669 [2024-11-17 09:36:30.493276] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:25.669 [2024-11-17 09:36:30.493307] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:25.669 [2024-11-17 09:36:30.493329] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:25.669 [2024-11-17 09:36:30.493350] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:25.669 [2024-11-17 09:36:30.507436] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:25.669 [2024-11-17 09:36:30.507923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.669 [2024-11-17 09:36:30.507964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:25.669 [2024-11-17 09:36:30.507991] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:25.669 [2024-11-17 09:36:30.508326] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:25.669 [2024-11-17 09:36:30.508674] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:25.669 [2024-11-17 09:36:30.508705] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:25.669 [2024-11-17 09:36:30.508728] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:25.669 [2024-11-17 09:36:30.508750] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:25.669 [2024-11-17 09:36:30.522824] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:25.669 [2024-11-17 09:36:30.523326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.669 [2024-11-17 09:36:30.523378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:25.669 [2024-11-17 09:36:30.523415] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:25.669 [2024-11-17 09:36:30.523753] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:25.669 [2024-11-17 09:36:30.524090] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:25.669 [2024-11-17 09:36:30.524121] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:25.669 [2024-11-17 09:36:30.524143] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:25.669 [2024-11-17 09:36:30.524165] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:25.669 [2024-11-17 09:36:30.538214] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:25.669 [2024-11-17 09:36:30.538741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.669 [2024-11-17 09:36:30.538783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:25.669 [2024-11-17 09:36:30.538810] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:25.669 [2024-11-17 09:36:30.539145] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:25.670 [2024-11-17 09:36:30.539497] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:25.670 [2024-11-17 09:36:30.539529] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:25.670 [2024-11-17 09:36:30.539552] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:25.670 [2024-11-17 09:36:30.539573] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:25.670 [2024-11-17 09:36:30.553623] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:25.670 [2024-11-17 09:36:30.554127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.670 [2024-11-17 09:36:30.554168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:25.670 [2024-11-17 09:36:30.554195] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:25.670 [2024-11-17 09:36:30.554542] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:25.670 [2024-11-17 09:36:30.554879] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:25.670 [2024-11-17 09:36:30.554911] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:25.670 [2024-11-17 09:36:30.554933] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:25.670 [2024-11-17 09:36:30.554955] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:25.670 [2024-11-17 09:36:30.569023] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:25.670 [2024-11-17 09:36:30.569548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.670 [2024-11-17 09:36:30.569590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:25.670 [2024-11-17 09:36:30.569617] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:25.670 [2024-11-17 09:36:30.569952] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:25.670 [2024-11-17 09:36:30.570295] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:25.670 [2024-11-17 09:36:30.570327] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:25.670 [2024-11-17 09:36:30.570350] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:25.670 [2024-11-17 09:36:30.570383] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:25.670 [2024-11-17 09:36:30.584544] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:25.670 [2024-11-17 09:36:30.585060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.670 [2024-11-17 09:36:30.585101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:25.670 [2024-11-17 09:36:30.585128] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:25.670 [2024-11-17 09:36:30.585495] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:25.670 [2024-11-17 09:36:30.585834] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:25.670 [2024-11-17 09:36:30.585865] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:25.670 [2024-11-17 09:36:30.585887] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:25.670 [2024-11-17 09:36:30.585909] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:25.670 [2024-11-17 09:36:30.599991] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:25.670 [2024-11-17 09:36:30.600484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.670 [2024-11-17 09:36:30.600526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:25.670 [2024-11-17 09:36:30.600553] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:25.670 [2024-11-17 09:36:30.600889] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:25.670 [2024-11-17 09:36:30.601235] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:25.670 [2024-11-17 09:36:30.601269] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:25.670 [2024-11-17 09:36:30.601291] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:25.670 [2024-11-17 09:36:30.601312] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:25.670 [2024-11-17 09:36:30.615387] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:25.670 [2024-11-17 09:36:30.615911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.670 [2024-11-17 09:36:30.615952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:25.670 [2024-11-17 09:36:30.615979] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:25.670 [2024-11-17 09:36:30.616313] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:25.670 [2024-11-17 09:36:30.616660] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:25.670 [2024-11-17 09:36:30.616692] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:25.670 [2024-11-17 09:36:30.616721] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:25.670 [2024-11-17 09:36:30.616744] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:25.670 [2024-11-17 09:36:30.630853] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:25.670 [2024-11-17 09:36:30.631351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.670 [2024-11-17 09:36:30.631403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:25.670 [2024-11-17 09:36:30.631440] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:25.670 [2024-11-17 09:36:30.631775] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:25.670 [2024-11-17 09:36:30.632111] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:25.670 [2024-11-17 09:36:30.632142] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:25.670 [2024-11-17 09:36:30.632165] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:25.670 [2024-11-17 09:36:30.632186] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:25.670 [2024-11-17 09:36:30.646324] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:25.670 [2024-11-17 09:36:30.646823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.670 [2024-11-17 09:36:30.646864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:25.670 [2024-11-17 09:36:30.646891] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:25.670 [2024-11-17 09:36:30.647264] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:25.670 [2024-11-17 09:36:30.647615] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:25.670 [2024-11-17 09:36:30.647647] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:25.670 [2024-11-17 09:36:30.647669] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:25.670 [2024-11-17 09:36:30.647691] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:25.670 [2024-11-17 09:36:30.661827] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:25.670 [2024-11-17 09:36:30.662366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.670 [2024-11-17 09:36:30.662421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:25.670 [2024-11-17 09:36:30.662461] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:25.670 [2024-11-17 09:36:30.662801] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:25.670 [2024-11-17 09:36:30.663140] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:25.670 [2024-11-17 09:36:30.663170] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:25.670 [2024-11-17 09:36:30.663192] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:25.670 [2024-11-17 09:36:30.663214] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:25.670 [2024-11-17 09:36:30.677396] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:25.670 [2024-11-17 09:36:30.677905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.670 [2024-11-17 09:36:30.677946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:25.670 [2024-11-17 09:36:30.677972] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:25.670 [2024-11-17 09:36:30.678307] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:25.670 [2024-11-17 09:36:30.678657] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:25.930 [2024-11-17 09:36:30.678689] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:25.930 [2024-11-17 09:36:30.678712] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:25.930 [2024-11-17 09:36:30.678733] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:25.930 [2024-11-17 09:36:30.692927] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:25.930 [2024-11-17 09:36:30.693433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.930 [2024-11-17 09:36:30.693476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:25.930 [2024-11-17 09:36:30.693502] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:25.930 [2024-11-17 09:36:30.693837] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:25.930 [2024-11-17 09:36:30.694172] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:25.930 [2024-11-17 09:36:30.694203] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:25.930 [2024-11-17 09:36:30.694226] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:25.930 [2024-11-17 09:36:30.694248] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:25.930 [2024-11-17 09:36:30.708393] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:25.930 [2024-11-17 09:36:30.708909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.930 [2024-11-17 09:36:30.708951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:25.930 [2024-11-17 09:36:30.708978] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:25.930 [2024-11-17 09:36:30.709312] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:25.930 [2024-11-17 09:36:30.709660] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:25.930 [2024-11-17 09:36:30.709692] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:25.930 [2024-11-17 09:36:30.709714] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:25.930 [2024-11-17 09:36:30.709736] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:25.930 [2024-11-17 09:36:30.723875] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:25.930 [2024-11-17 09:36:30.724362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.931 [2024-11-17 09:36:30.724420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:25.931 [2024-11-17 09:36:30.724448] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:25.931 [2024-11-17 09:36:30.724784] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:25.931 [2024-11-17 09:36:30.725121] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:25.931 [2024-11-17 09:36:30.725152] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:25.931 [2024-11-17 09:36:30.725174] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:25.931 [2024-11-17 09:36:30.725196] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:25.931 [2024-11-17 09:36:30.739291] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:25.931 [2024-11-17 09:36:30.739809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.931 [2024-11-17 09:36:30.739852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:25.931 [2024-11-17 09:36:30.739878] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:25.931 [2024-11-17 09:36:30.740214] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:25.931 [2024-11-17 09:36:30.740566] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:25.931 [2024-11-17 09:36:30.740598] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:25.931 [2024-11-17 09:36:30.740620] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:25.931 [2024-11-17 09:36:30.740643] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:25.931 [2024-11-17 09:36:30.754690] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:25.931 [2024-11-17 09:36:30.755170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.931 [2024-11-17 09:36:30.755211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:25.931 [2024-11-17 09:36:30.755237] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:25.931 [2024-11-17 09:36:30.755593] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:25.931 [2024-11-17 09:36:30.755929] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:25.931 [2024-11-17 09:36:30.755961] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:25.931 [2024-11-17 09:36:30.755983] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:25.931 [2024-11-17 09:36:30.756005] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:25.931 [2024-11-17 09:36:30.770058] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:25.931 [2024-11-17 09:36:30.770568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.931 [2024-11-17 09:36:30.770609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:25.931 [2024-11-17 09:36:30.770635] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:25.931 [2024-11-17 09:36:30.770975] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:25.931 [2024-11-17 09:36:30.771311] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:25.931 [2024-11-17 09:36:30.771343] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:25.931 [2024-11-17 09:36:30.771365] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:25.931 [2024-11-17 09:36:30.771403] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:25.931 [2024-11-17 09:36:30.785498] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:25.931 [2024-11-17 09:36:30.785991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.931 [2024-11-17 09:36:30.786032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:25.931 [2024-11-17 09:36:30.786058] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:25.931 [2024-11-17 09:36:30.786429] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:25.931 [2024-11-17 09:36:30.786765] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:25.931 [2024-11-17 09:36:30.786796] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:25.931 [2024-11-17 09:36:30.786818] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:25.931 [2024-11-17 09:36:30.786840] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:25.931 [2024-11-17 09:36:30.800893] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:25.931 [2024-11-17 09:36:30.801392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.931 [2024-11-17 09:36:30.801434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:25.931 [2024-11-17 09:36:30.801460] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:25.931 [2024-11-17 09:36:30.801796] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:25.931 [2024-11-17 09:36:30.802132] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:25.931 [2024-11-17 09:36:30.802164] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:25.931 [2024-11-17 09:36:30.802186] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:25.931 [2024-11-17 09:36:30.802207] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:25.931 [2024-11-17 09:36:30.816278] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:25.931 [2024-11-17 09:36:30.816802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.931 [2024-11-17 09:36:30.816845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:25.931 [2024-11-17 09:36:30.816871] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:25.931 [2024-11-17 09:36:30.817207] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:25.931 [2024-11-17 09:36:30.817557] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:25.931 [2024-11-17 09:36:30.817596] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:25.931 [2024-11-17 09:36:30.817625] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:25.931 [2024-11-17 09:36:30.817648] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:25.931 [2024-11-17 09:36:30.831746] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:25.931 [2024-11-17 09:36:30.832241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.931 [2024-11-17 09:36:30.832283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:25.931 [2024-11-17 09:36:30.832310] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:25.931 [2024-11-17 09:36:30.832655] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:25.931 [2024-11-17 09:36:30.832992] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:25.931 [2024-11-17 09:36:30.833023] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:25.931 [2024-11-17 09:36:30.833045] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:25.931 [2024-11-17 09:36:30.833066] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:25.931 [2024-11-17 09:36:30.847146] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:25.931 [2024-11-17 09:36:30.847689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.931 [2024-11-17 09:36:30.847729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:25.931 [2024-11-17 09:36:30.847755] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:25.931 [2024-11-17 09:36:30.848089] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:25.931 [2024-11-17 09:36:30.848435] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:25.931 [2024-11-17 09:36:30.848467] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:25.931 [2024-11-17 09:36:30.848490] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:25.931 [2024-11-17 09:36:30.848516] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:25.931 [2024-11-17 09:36:30.862566] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:25.931 [2024-11-17 09:36:30.863163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.931 [2024-11-17 09:36:30.863222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:25.931 [2024-11-17 09:36:30.863248] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:25.931 [2024-11-17 09:36:30.863593] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:25.931 [2024-11-17 09:36:30.863929] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:25.931 [2024-11-17 09:36:30.863959] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:25.931 [2024-11-17 09:36:30.863988] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:25.932 [2024-11-17 09:36:30.864011] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:25.932 [2024-11-17 09:36:30.879868] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:25.932 4342.33 IOPS, 16.96 MiB/s [2024-11-17T08:36:30.945Z] [2024-11-17 09:36:30.880398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.932 [2024-11-17 09:36:30.880448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:25.932 [2024-11-17 09:36:30.880478] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:25.932 [2024-11-17 09:36:30.880812] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:25.932 [2024-11-17 09:36:30.881165] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:25.932 [2024-11-17 09:36:30.881195] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:25.932 [2024-11-17 09:36:30.881217] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:25.932 [2024-11-17 09:36:30.881238] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:25.932 [2024-11-17 09:36:30.895330] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:25.932 [2024-11-17 09:36:30.895820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.932 [2024-11-17 09:36:30.895862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:25.932 [2024-11-17 09:36:30.895888] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:25.932 [2024-11-17 09:36:30.896223] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:25.932 [2024-11-17 09:36:30.896572] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:25.932 [2024-11-17 09:36:30.896604] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:25.932 [2024-11-17 09:36:30.896626] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:25.932 [2024-11-17 09:36:30.896649] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:25.932 [2024-11-17 09:36:30.910726] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:25.932 [2024-11-17 09:36:30.911238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.932 [2024-11-17 09:36:30.911280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:25.932 [2024-11-17 09:36:30.911306] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:25.932 [2024-11-17 09:36:30.911653] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:25.932 [2024-11-17 09:36:30.911990] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:25.932 [2024-11-17 09:36:30.912021] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:25.932 [2024-11-17 09:36:30.912043] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:25.932 [2024-11-17 09:36:30.912065] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:25.932 [2024-11-17 09:36:30.926127] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:25.932 [2024-11-17 09:36:30.926647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:25.932 [2024-11-17 09:36:30.926688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:25.932 [2024-11-17 09:36:30.926714] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:25.932 [2024-11-17 09:36:30.927048] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:25.932 [2024-11-17 09:36:30.927399] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:25.932 [2024-11-17 09:36:30.927431] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:25.932 [2024-11-17 09:36:30.927453] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:25.932 [2024-11-17 09:36:30.927475] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:26.192 [2024-11-17 09:36:30.941528] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.192 [2024-11-17 09:36:30.942038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.192 [2024-11-17 09:36:30.942080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:26.192 [2024-11-17 09:36:30.942106] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:26.192 [2024-11-17 09:36:30.942455] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:26.192 [2024-11-17 09:36:30.942791] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.192 [2024-11-17 09:36:30.942822] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.192 [2024-11-17 09:36:30.942844] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.192 [2024-11-17 09:36:30.942865] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:26.192 [2024-11-17 09:36:30.956935] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.192 [2024-11-17 09:36:30.957494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.192 [2024-11-17 09:36:30.957536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:26.192 [2024-11-17 09:36:30.957562] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:26.192 [2024-11-17 09:36:30.957896] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:26.192 [2024-11-17 09:36:30.958233] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.192 [2024-11-17 09:36:30.958263] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.192 [2024-11-17 09:36:30.958285] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.192 [2024-11-17 09:36:30.958307] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:26.192 [2024-11-17 09:36:30.972399] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.192 [2024-11-17 09:36:30.972902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.192 [2024-11-17 09:36:30.972943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:26.192 [2024-11-17 09:36:30.972974] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:26.192 [2024-11-17 09:36:30.973310] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:26.192 [2024-11-17 09:36:30.973658] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.192 [2024-11-17 09:36:30.973689] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.192 [2024-11-17 09:36:30.973712] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.192 [2024-11-17 09:36:30.973733] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:26.192 [2024-11-17 09:36:30.987809] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.192 [2024-11-17 09:36:30.988324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.192 [2024-11-17 09:36:30.988366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:26.192 [2024-11-17 09:36:30.988404] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:26.192 [2024-11-17 09:36:30.988740] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:26.192 [2024-11-17 09:36:30.989076] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.192 [2024-11-17 09:36:30.989107] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.192 [2024-11-17 09:36:30.989129] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.192 [2024-11-17 09:36:30.989150] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:26.192 [2024-11-17 09:36:31.003193] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.192 [2024-11-17 09:36:31.003723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.192 [2024-11-17 09:36:31.003766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:26.192 [2024-11-17 09:36:31.003792] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:26.192 [2024-11-17 09:36:31.004127] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:26.192 [2024-11-17 09:36:31.004476] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.192 [2024-11-17 09:36:31.004508] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.192 [2024-11-17 09:36:31.004530] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.192 [2024-11-17 09:36:31.004552] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:26.192 [2024-11-17 09:36:31.018568] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.192 [2024-11-17 09:36:31.019049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.192 [2024-11-17 09:36:31.019090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:26.192 [2024-11-17 09:36:31.019116] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:26.192 [2024-11-17 09:36:31.019469] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:26.192 [2024-11-17 09:36:31.019807] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.192 [2024-11-17 09:36:31.019837] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.192 [2024-11-17 09:36:31.019860] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.192 [2024-11-17 09:36:31.019881] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:26.192 [2024-11-17 09:36:31.033922] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.192 [2024-11-17 09:36:31.034442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.192 [2024-11-17 09:36:31.034484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:26.192 [2024-11-17 09:36:31.034511] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:26.192 [2024-11-17 09:36:31.034846] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:26.192 [2024-11-17 09:36:31.035184] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.192 [2024-11-17 09:36:31.035214] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.192 [2024-11-17 09:36:31.035237] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.193 [2024-11-17 09:36:31.035258] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:26.193 [2024-11-17 09:36:31.049319] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.193 [2024-11-17 09:36:31.049801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.193 [2024-11-17 09:36:31.049842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:26.193 [2024-11-17 09:36:31.049868] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:26.193 [2024-11-17 09:36:31.050201] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:26.193 [2024-11-17 09:36:31.050551] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.193 [2024-11-17 09:36:31.050582] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.193 [2024-11-17 09:36:31.050605] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.193 [2024-11-17 09:36:31.050627] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:26.193 [2024-11-17 09:36:31.064711] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.193 [2024-11-17 09:36:31.065218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.193 [2024-11-17 09:36:31.065259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:26.193 [2024-11-17 09:36:31.065286] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:26.193 [2024-11-17 09:36:31.065634] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:26.193 [2024-11-17 09:36:31.065971] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.193 [2024-11-17 09:36:31.066008] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.193 [2024-11-17 09:36:31.066031] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.193 [2024-11-17 09:36:31.066053] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:26.193 [2024-11-17 09:36:31.080118] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.193 [2024-11-17 09:36:31.080600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.193 [2024-11-17 09:36:31.080649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:26.193 [2024-11-17 09:36:31.080675] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:26.193 [2024-11-17 09:36:31.081009] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:26.193 [2024-11-17 09:36:31.081345] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.193 [2024-11-17 09:36:31.081387] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.193 [2024-11-17 09:36:31.081411] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.193 [2024-11-17 09:36:31.081433] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:26.193 [2024-11-17 09:36:31.095522] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.193 [2024-11-17 09:36:31.096042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.193 [2024-11-17 09:36:31.096083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:26.193 [2024-11-17 09:36:31.096110] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:26.193 [2024-11-17 09:36:31.096457] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:26.193 [2024-11-17 09:36:31.096801] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.193 [2024-11-17 09:36:31.096832] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.193 [2024-11-17 09:36:31.096896] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.193 [2024-11-17 09:36:31.096919] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:26.193 [2024-11-17 09:36:31.111051] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.193 [2024-11-17 09:36:31.111581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.193 [2024-11-17 09:36:31.111624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:26.193 [2024-11-17 09:36:31.111651] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:26.193 [2024-11-17 09:36:31.111987] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:26.193 [2024-11-17 09:36:31.112325] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.193 [2024-11-17 09:36:31.112356] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.193 [2024-11-17 09:36:31.112390] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.193 [2024-11-17 09:36:31.112430] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:26.193 [2024-11-17 09:36:31.126553] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.193 [2024-11-17 09:36:31.127031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.193 [2024-11-17 09:36:31.127072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:26.193 [2024-11-17 09:36:31.127099] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:26.193 [2024-11-17 09:36:31.127449] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:26.193 [2024-11-17 09:36:31.127787] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.193 [2024-11-17 09:36:31.127818] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.193 [2024-11-17 09:36:31.127841] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.193 [2024-11-17 09:36:31.127862] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:26.193 [2024-11-17 09:36:31.141968] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.193 [2024-11-17 09:36:31.142460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.193 [2024-11-17 09:36:31.142502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:26.193 [2024-11-17 09:36:31.142529] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:26.193 [2024-11-17 09:36:31.142865] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:26.193 [2024-11-17 09:36:31.143202] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.193 [2024-11-17 09:36:31.143233] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.193 [2024-11-17 09:36:31.143255] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.193 [2024-11-17 09:36:31.143277] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:26.193 [2024-11-17 09:36:31.157379] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.193 [2024-11-17 09:36:31.157868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.193 [2024-11-17 09:36:31.157909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:26.193 [2024-11-17 09:36:31.157935] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:26.193 [2024-11-17 09:36:31.158271] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:26.193 [2024-11-17 09:36:31.158619] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.193 [2024-11-17 09:36:31.158651] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.193 [2024-11-17 09:36:31.158673] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.193 [2024-11-17 09:36:31.158694] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:26.193 [2024-11-17 09:36:31.172761] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.193 [2024-11-17 09:36:31.173250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.193 [2024-11-17 09:36:31.173291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:26.193 [2024-11-17 09:36:31.173318] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:26.193 [2024-11-17 09:36:31.173663] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:26.193 [2024-11-17 09:36:31.174001] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.193 [2024-11-17 09:36:31.174032] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.193 [2024-11-17 09:36:31.174054] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.193 [2024-11-17 09:36:31.174075] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:26.193 [2024-11-17 09:36:31.188203] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.193 [2024-11-17 09:36:31.188690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.193 [2024-11-17 09:36:31.188731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:26.193 [2024-11-17 09:36:31.188757] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:26.193 [2024-11-17 09:36:31.189093] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:26.193 [2024-11-17 09:36:31.189443] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.194 [2024-11-17 09:36:31.189475] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.194 [2024-11-17 09:36:31.189498] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.194 [2024-11-17 09:36:31.189519] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:26.454 [2024-11-17 09:36:31.203619] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.454 [2024-11-17 09:36:31.204144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.454 [2024-11-17 09:36:31.204185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:26.454 [2024-11-17 09:36:31.204211] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:26.454 [2024-11-17 09:36:31.204560] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:26.454 [2024-11-17 09:36:31.204899] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.454 [2024-11-17 09:36:31.204929] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.454 [2024-11-17 09:36:31.204957] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.454 [2024-11-17 09:36:31.204994] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:26.454 [2024-11-17 09:36:31.219143] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.454 [2024-11-17 09:36:31.219606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.454 [2024-11-17 09:36:31.219648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:26.454 [2024-11-17 09:36:31.219681] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:26.454 [2024-11-17 09:36:31.220018] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:26.454 [2024-11-17 09:36:31.220355] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.454 [2024-11-17 09:36:31.220396] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.454 [2024-11-17 09:36:31.220420] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.454 [2024-11-17 09:36:31.220442] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:26.454 [2024-11-17 09:36:31.234701] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.454 [2024-11-17 09:36:31.235223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.454 [2024-11-17 09:36:31.235264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:26.454 [2024-11-17 09:36:31.235290] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:26.454 [2024-11-17 09:36:31.235640] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:26.454 [2024-11-17 09:36:31.235978] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.454 [2024-11-17 09:36:31.236010] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.454 [2024-11-17 09:36:31.236032] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.454 [2024-11-17 09:36:31.236054] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:26.454 [2024-11-17 09:36:31.250230] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.454 [2024-11-17 09:36:31.250722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.454 [2024-11-17 09:36:31.250765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:26.454 [2024-11-17 09:36:31.250791] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:26.454 [2024-11-17 09:36:31.251130] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:26.454 [2024-11-17 09:36:31.251492] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.454 [2024-11-17 09:36:31.251525] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.454 [2024-11-17 09:36:31.251547] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.454 [2024-11-17 09:36:31.251569] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:26.454 [2024-11-17 09:36:31.265761] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.454 [2024-11-17 09:36:31.266270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.454 [2024-11-17 09:36:31.266311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:26.454 [2024-11-17 09:36:31.266337] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:26.454 [2024-11-17 09:36:31.266687] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:26.454 [2024-11-17 09:36:31.267035] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.454 [2024-11-17 09:36:31.267067] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.454 [2024-11-17 09:36:31.267089] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.454 [2024-11-17 09:36:31.267111] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:26.454 [2024-11-17 09:36:31.281326] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.454 [2024-11-17 09:36:31.281838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.454 [2024-11-17 09:36:31.281880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:26.454 [2024-11-17 09:36:31.281905] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:26.454 [2024-11-17 09:36:31.282242] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:26.454 [2024-11-17 09:36:31.282594] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.454 [2024-11-17 09:36:31.282626] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.454 [2024-11-17 09:36:31.282649] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.454 [2024-11-17 09:36:31.282671] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:26.454 [2024-11-17 09:36:31.296867] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.454 [2024-11-17 09:36:31.297349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.454 [2024-11-17 09:36:31.297399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:26.454 [2024-11-17 09:36:31.297426] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:26.454 [2024-11-17 09:36:31.297763] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:26.454 [2024-11-17 09:36:31.298100] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.454 [2024-11-17 09:36:31.298131] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.454 [2024-11-17 09:36:31.298152] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.454 [2024-11-17 09:36:31.298174] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:26.454 [2024-11-17 09:36:31.312279] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.454 [2024-11-17 09:36:31.312801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.454 [2024-11-17 09:36:31.312843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:26.454 [2024-11-17 09:36:31.312869] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:26.454 [2024-11-17 09:36:31.313204] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:26.454 [2024-11-17 09:36:31.313555] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.454 [2024-11-17 09:36:31.313587] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.454 [2024-11-17 09:36:31.313616] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.454 [2024-11-17 09:36:31.313639] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:26.454 [2024-11-17 09:36:31.327775] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.454 [2024-11-17 09:36:31.328292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.454 [2024-11-17 09:36:31.328334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:26.454 [2024-11-17 09:36:31.328360] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:26.454 [2024-11-17 09:36:31.328722] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:26.454 [2024-11-17 09:36:31.329060] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.454 [2024-11-17 09:36:31.329091] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.454 [2024-11-17 09:36:31.329113] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.454 [2024-11-17 09:36:31.329134] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:26.454 [2024-11-17 09:36:31.343267] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.455 [2024-11-17 09:36:31.343751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.455 [2024-11-17 09:36:31.343793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:26.455 [2024-11-17 09:36:31.343819] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:26.455 [2024-11-17 09:36:31.344158] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:26.455 [2024-11-17 09:36:31.344510] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.455 [2024-11-17 09:36:31.344542] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.455 [2024-11-17 09:36:31.344565] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.455 [2024-11-17 09:36:31.344587] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:26.455 [2024-11-17 09:36:31.358734] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.455 [2024-11-17 09:36:31.359232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.455 [2024-11-17 09:36:31.359274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:26.455 [2024-11-17 09:36:31.359300] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:26.455 [2024-11-17 09:36:31.359651] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:26.455 [2024-11-17 09:36:31.359990] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.455 [2024-11-17 09:36:31.360020] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.455 [2024-11-17 09:36:31.360042] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.455 [2024-11-17 09:36:31.360064] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:26.455 [2024-11-17 09:36:31.374184] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.455 [2024-11-17 09:36:31.374700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.455 [2024-11-17 09:36:31.374742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:26.455 [2024-11-17 09:36:31.374768] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:26.455 [2024-11-17 09:36:31.375107] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:26.455 [2024-11-17 09:36:31.375456] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.455 [2024-11-17 09:36:31.375488] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.455 [2024-11-17 09:36:31.375511] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.455 [2024-11-17 09:36:31.375533] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:26.455 [2024-11-17 09:36:31.389652] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.455 [2024-11-17 09:36:31.390153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.455 [2024-11-17 09:36:31.390195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:26.455 [2024-11-17 09:36:31.390222] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:26.455 [2024-11-17 09:36:31.390570] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:26.455 [2024-11-17 09:36:31.390908] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.455 [2024-11-17 09:36:31.390939] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.455 [2024-11-17 09:36:31.390961] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.455 [2024-11-17 09:36:31.390982] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:26.455 [2024-11-17 09:36:31.405085] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.455 [2024-11-17 09:36:31.405597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.455 [2024-11-17 09:36:31.405640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:26.455 [2024-11-17 09:36:31.405666] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:26.455 [2024-11-17 09:36:31.406002] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:26.455 [2024-11-17 09:36:31.406338] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.455 [2024-11-17 09:36:31.406380] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.455 [2024-11-17 09:36:31.406406] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.455 [2024-11-17 09:36:31.406428] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:26.455 [2024-11-17 09:36:31.420634] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.455 [2024-11-17 09:36:31.421132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.455 [2024-11-17 09:36:31.421173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:26.455 [2024-11-17 09:36:31.421200] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:26.455 [2024-11-17 09:36:31.421550] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:26.455 [2024-11-17 09:36:31.421890] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.455 [2024-11-17 09:36:31.421920] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.455 [2024-11-17 09:36:31.421943] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.455 [2024-11-17 09:36:31.421964] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:26.455 [2024-11-17 09:36:31.436127] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.455 [2024-11-17 09:36:31.436657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.455 [2024-11-17 09:36:31.436698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:26.455 [2024-11-17 09:36:31.436725] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:26.455 [2024-11-17 09:36:31.437062] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:26.455 [2024-11-17 09:36:31.437422] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.455 [2024-11-17 09:36:31.437453] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.455 [2024-11-17 09:36:31.437475] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.455 [2024-11-17 09:36:31.437507] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:26.455 [2024-11-17 09:36:31.451657] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.455 [2024-11-17 09:36:31.452175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.455 [2024-11-17 09:36:31.452216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:26.455 [2024-11-17 09:36:31.452243] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:26.455 [2024-11-17 09:36:31.452589] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:26.455 [2024-11-17 09:36:31.452930] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.455 [2024-11-17 09:36:31.452960] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.455 [2024-11-17 09:36:31.452983] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.455 [2024-11-17 09:36:31.453005] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:26.716 [2024-11-17 09:36:31.467126] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.716 [2024-11-17 09:36:31.467639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.716 [2024-11-17 09:36:31.467681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:26.716 [2024-11-17 09:36:31.467708] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:26.716 [2024-11-17 09:36:31.468050] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:26.716 [2024-11-17 09:36:31.468399] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.716 [2024-11-17 09:36:31.468431] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.716 [2024-11-17 09:36:31.468454] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.716 [2024-11-17 09:36:31.468476] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:26.716 [2024-11-17 09:36:31.482621] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.716 [2024-11-17 09:36:31.483126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.716 [2024-11-17 09:36:31.483168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:26.716 [2024-11-17 09:36:31.483193] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:26.716 [2024-11-17 09:36:31.483545] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:26.716 [2024-11-17 09:36:31.483883] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.716 [2024-11-17 09:36:31.483914] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.716 [2024-11-17 09:36:31.483937] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.716 [2024-11-17 09:36:31.483958] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:26.716 [2024-11-17 09:36:31.498184] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.716 [2024-11-17 09:36:31.498685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.716 [2024-11-17 09:36:31.498726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:26.716 [2024-11-17 09:36:31.498753] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:26.716 [2024-11-17 09:36:31.499089] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:26.716 [2024-11-17 09:36:31.499441] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.716 [2024-11-17 09:36:31.499473] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.716 [2024-11-17 09:36:31.499496] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.716 [2024-11-17 09:36:31.499519] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:26.716 [2024-11-17 09:36:31.513728] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.716 [2024-11-17 09:36:31.514209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.716 [2024-11-17 09:36:31.514251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:26.716 [2024-11-17 09:36:31.514278] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:26.716 [2024-11-17 09:36:31.514644] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:26.716 [2024-11-17 09:36:31.514990] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.716 [2024-11-17 09:36:31.515021] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.716 [2024-11-17 09:36:31.515044] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.716 [2024-11-17 09:36:31.515066] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:26.716 [2024-11-17 09:36:31.529306] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.716 [2024-11-17 09:36:31.529802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.716 [2024-11-17 09:36:31.529844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:26.716 [2024-11-17 09:36:31.529870] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:26.716 [2024-11-17 09:36:31.530208] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:26.716 [2024-11-17 09:36:31.530571] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.716 [2024-11-17 09:36:31.530603] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.716 [2024-11-17 09:36:31.530625] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.716 [2024-11-17 09:36:31.530647] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:26.716 [2024-11-17 09:36:31.544775] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.716 [2024-11-17 09:36:31.545271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.716 [2024-11-17 09:36:31.545330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:26.716 [2024-11-17 09:36:31.545356] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:26.716 [2024-11-17 09:36:31.545705] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:26.716 [2024-11-17 09:36:31.546045] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.716 [2024-11-17 09:36:31.546075] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.716 [2024-11-17 09:36:31.546098] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.716 [2024-11-17 09:36:31.546119] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:26.716 [2024-11-17 09:36:31.560327] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.716 [2024-11-17 09:36:31.560843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.716 [2024-11-17 09:36:31.560884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:26.716 [2024-11-17 09:36:31.560910] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:26.716 [2024-11-17 09:36:31.561247] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:26.716 [2024-11-17 09:36:31.561598] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.716 [2024-11-17 09:36:31.561630] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.716 [2024-11-17 09:36:31.561659] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.716 [2024-11-17 09:36:31.561682] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:26.716 [2024-11-17 09:36:31.575907] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.716 [2024-11-17 09:36:31.576423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.716 [2024-11-17 09:36:31.576465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:26.716 [2024-11-17 09:36:31.576491] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:26.716 [2024-11-17 09:36:31.576830] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:26.716 [2024-11-17 09:36:31.577170] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.716 [2024-11-17 09:36:31.577200] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.716 [2024-11-17 09:36:31.577222] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.716 [2024-11-17 09:36:31.577244] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:26.716 [2024-11-17 09:36:31.591487] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.716 [2024-11-17 09:36:31.592002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.716 [2024-11-17 09:36:31.592043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:26.716 [2024-11-17 09:36:31.592069] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:26.716 [2024-11-17 09:36:31.592419] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:26.716 [2024-11-17 09:36:31.592762] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.716 [2024-11-17 09:36:31.592793] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.717 [2024-11-17 09:36:31.592815] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.717 [2024-11-17 09:36:31.592836] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:26.717 [2024-11-17 09:36:31.606998] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.717 [2024-11-17 09:36:31.607551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.717 [2024-11-17 09:36:31.607594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:26.717 [2024-11-17 09:36:31.607621] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:26.717 [2024-11-17 09:36:31.607958] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:26.717 [2024-11-17 09:36:31.608298] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.717 [2024-11-17 09:36:31.608329] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.717 [2024-11-17 09:36:31.608351] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.717 [2024-11-17 09:36:31.608382] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:26.717 [2024-11-17 09:36:31.622504] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.717 [2024-11-17 09:36:31.622994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.717 [2024-11-17 09:36:31.623035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:26.717 [2024-11-17 09:36:31.623061] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:26.717 [2024-11-17 09:36:31.623411] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:26.717 [2024-11-17 09:36:31.623748] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.717 [2024-11-17 09:36:31.623787] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.717 [2024-11-17 09:36:31.623810] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.717 [2024-11-17 09:36:31.623831] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:26.717 [2024-11-17 09:36:31.637959] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.717 [2024-11-17 09:36:31.638441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.717 [2024-11-17 09:36:31.638483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:26.717 [2024-11-17 09:36:31.638510] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:26.717 [2024-11-17 09:36:31.638845] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:26.717 [2024-11-17 09:36:31.639184] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.717 [2024-11-17 09:36:31.639215] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.717 [2024-11-17 09:36:31.639236] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.717 [2024-11-17 09:36:31.639258] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:26.717 [2024-11-17 09:36:31.653416] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.717 [2024-11-17 09:36:31.653929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.717 [2024-11-17 09:36:31.653971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:26.717 [2024-11-17 09:36:31.653997] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:26.717 [2024-11-17 09:36:31.654334] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:26.717 [2024-11-17 09:36:31.654683] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.717 [2024-11-17 09:36:31.654715] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.717 [2024-11-17 09:36:31.654737] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.717 [2024-11-17 09:36:31.654758] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:26.717 [2024-11-17 09:36:31.668884] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.717 [2024-11-17 09:36:31.669394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.717 [2024-11-17 09:36:31.669441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:26.717 [2024-11-17 09:36:31.669469] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:26.717 [2024-11-17 09:36:31.669805] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:26.717 [2024-11-17 09:36:31.670144] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.717 [2024-11-17 09:36:31.670175] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.717 [2024-11-17 09:36:31.670197] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.717 [2024-11-17 09:36:31.670219] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:26.717 [2024-11-17 09:36:31.684337] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.717 [2024-11-17 09:36:31.684855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.717 [2024-11-17 09:36:31.684897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:26.717 [2024-11-17 09:36:31.684923] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:26.717 [2024-11-17 09:36:31.685259] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:26.717 [2024-11-17 09:36:31.685609] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.717 [2024-11-17 09:36:31.685642] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.717 [2024-11-17 09:36:31.685664] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.717 [2024-11-17 09:36:31.685685] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:26.717 [2024-11-17 09:36:31.699806] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.717 [2024-11-17 09:36:31.700315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.717 [2024-11-17 09:36:31.700356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:26.717 [2024-11-17 09:36:31.700393] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:26.717 [2024-11-17 09:36:31.700730] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:26.717 [2024-11-17 09:36:31.701069] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.717 [2024-11-17 09:36:31.701100] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.717 [2024-11-17 09:36:31.701123] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.717 [2024-11-17 09:36:31.701144] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:26.717 [2024-11-17 09:36:31.715231] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.717 [2024-11-17 09:36:31.715743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.717 [2024-11-17 09:36:31.715785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:26.717 [2024-11-17 09:36:31.715812] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:26.717 [2024-11-17 09:36:31.716155] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:26.717 [2024-11-17 09:36:31.716505] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.717 [2024-11-17 09:36:31.716536] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.717 [2024-11-17 09:36:31.716559] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.717 [2024-11-17 09:36:31.716581] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:26.977 [2024-11-17 09:36:31.730717] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.977 [2024-11-17 09:36:31.731211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.978 [2024-11-17 09:36:31.731253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:26.978 [2024-11-17 09:36:31.731279] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:26.978 [2024-11-17 09:36:31.731627] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:26.978 [2024-11-17 09:36:31.731966] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.978 [2024-11-17 09:36:31.731997] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.978 [2024-11-17 09:36:31.732019] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.978 [2024-11-17 09:36:31.732040] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:26.978 [2024-11-17 09:36:31.746205] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.978 [2024-11-17 09:36:31.746733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.978 [2024-11-17 09:36:31.746776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:26.978 [2024-11-17 09:36:31.746802] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:26.978 [2024-11-17 09:36:31.747139] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:26.978 [2024-11-17 09:36:31.747495] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.978 [2024-11-17 09:36:31.747527] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.978 [2024-11-17 09:36:31.747549] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.978 [2024-11-17 09:36:31.747571] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:26.978 [2024-11-17 09:36:31.761710] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.978 [2024-11-17 09:36:31.762227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.978 [2024-11-17 09:36:31.762268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:26.978 [2024-11-17 09:36:31.762294] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:26.978 [2024-11-17 09:36:31.762660] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:26.978 [2024-11-17 09:36:31.762998] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.978 [2024-11-17 09:36:31.763035] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.978 [2024-11-17 09:36:31.763059] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.978 [2024-11-17 09:36:31.763080] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:26.978 [2024-11-17 09:36:31.777219] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.978 [2024-11-17 09:36:31.777754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.978 [2024-11-17 09:36:31.777796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:26.978 [2024-11-17 09:36:31.777822] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:26.978 [2024-11-17 09:36:31.778158] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:26.978 [2024-11-17 09:36:31.778507] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.978 [2024-11-17 09:36:31.778539] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.978 [2024-11-17 09:36:31.778562] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.978 [2024-11-17 09:36:31.778584] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:26.978 [2024-11-17 09:36:31.792787] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.978 [2024-11-17 09:36:31.793292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.978 [2024-11-17 09:36:31.793333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:26.978 [2024-11-17 09:36:31.793359] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:26.978 [2024-11-17 09:36:31.793706] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:26.978 [2024-11-17 09:36:31.794045] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.978 [2024-11-17 09:36:31.794076] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.978 [2024-11-17 09:36:31.794098] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.978 [2024-11-17 09:36:31.794121] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:26.978 [2024-11-17 09:36:31.808320] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.978 [2024-11-17 09:36:31.808823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.978 [2024-11-17 09:36:31.808864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:26.978 [2024-11-17 09:36:31.808891] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:26.978 [2024-11-17 09:36:31.809227] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:26.978 [2024-11-17 09:36:31.809576] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.978 [2024-11-17 09:36:31.809608] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.978 [2024-11-17 09:36:31.809631] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.978 [2024-11-17 09:36:31.809657] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:26.978 [2024-11-17 09:36:31.823790] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.978 [2024-11-17 09:36:31.824290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.978 [2024-11-17 09:36:31.824332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:26.978 [2024-11-17 09:36:31.824359] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:26.978 [2024-11-17 09:36:31.824708] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:26.978 [2024-11-17 09:36:31.825047] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.978 [2024-11-17 09:36:31.825078] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.978 [2024-11-17 09:36:31.825100] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.978 [2024-11-17 09:36:31.825122] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:26.978 [2024-11-17 09:36:31.838979] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.978 [2024-11-17 09:36:31.839451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.978 [2024-11-17 09:36:31.839489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:26.978 [2024-11-17 09:36:31.839513] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:26.978 [2024-11-17 09:36:31.839838] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:26.978 [2024-11-17 09:36:31.840144] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.978 [2024-11-17 09:36:31.840170] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.978 [2024-11-17 09:36:31.840188] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.978 [2024-11-17 09:36:31.840205] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:26.978 [2024-11-17 09:36:31.853778] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.978 [2024-11-17 09:36:31.854230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.978 [2024-11-17 09:36:31.854287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:26.978 [2024-11-17 09:36:31.854310] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:26.978 [2024-11-17 09:36:31.854639] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:26.978 [2024-11-17 09:36:31.854939] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.978 [2024-11-17 09:36:31.854964] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.978 [2024-11-17 09:36:31.854983] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.978 [2024-11-17 09:36:31.855000] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:26.978 [2024-11-17 09:36:31.868452] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.978 [2024-11-17 09:36:31.868905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.978 [2024-11-17 09:36:31.868956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:26.978 [2024-11-17 09:36:31.868980] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:26.978 [2024-11-17 09:36:31.869310] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:26.978 [2024-11-17 09:36:31.869628] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.978 [2024-11-17 09:36:31.869657] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.979 [2024-11-17 09:36:31.869695] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.979 [2024-11-17 09:36:31.869714] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:26.979 3256.75 IOPS, 12.72 MiB/s [2024-11-17T08:36:31.992Z] [2024-11-17 09:36:31.884541] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.979 [2024-11-17 09:36:31.885030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.979 [2024-11-17 09:36:31.885082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:26.979 [2024-11-17 09:36:31.885109] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:26.979 [2024-11-17 09:36:31.885465] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:26.979 [2024-11-17 09:36:31.885800] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.979 [2024-11-17 09:36:31.885826] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.979 [2024-11-17 09:36:31.885845] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.979 [2024-11-17 09:36:31.885869] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:26.979 [2024-11-17 09:36:31.899139] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.979 [2024-11-17 09:36:31.899606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.979 [2024-11-17 09:36:31.899643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:26.979 [2024-11-17 09:36:31.899667] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:26.979 [2024-11-17 09:36:31.899992] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:26.979 [2024-11-17 09:36:31.900270] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.979 [2024-11-17 09:36:31.900295] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.979 [2024-11-17 09:36:31.900315] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.979 [2024-11-17 09:36:31.900332] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:26.979 [2024-11-17 09:36:31.913748] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.979 [2024-11-17 09:36:31.914237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.979 [2024-11-17 09:36:31.914274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:26.979 [2024-11-17 09:36:31.914304] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:26.979 [2024-11-17 09:36:31.914638] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:26.979 [2024-11-17 09:36:31.914950] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.979 [2024-11-17 09:36:31.914975] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.979 [2024-11-17 09:36:31.914994] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.979 [2024-11-17 09:36:31.915011] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:26.979 [2024-11-17 09:36:31.928309] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.979 [2024-11-17 09:36:31.928919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.979 [2024-11-17 09:36:31.928956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:26.979 [2024-11-17 09:36:31.928980] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:26.979 [2024-11-17 09:36:31.929304] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:26.979 [2024-11-17 09:36:31.929637] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.979 [2024-11-17 09:36:31.929665] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.979 [2024-11-17 09:36:31.929699] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.979 [2024-11-17 09:36:31.929717] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:26.979 [2024-11-17 09:36:31.942948] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.979 [2024-11-17 09:36:31.943493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.979 [2024-11-17 09:36:31.943530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:26.979 [2024-11-17 09:36:31.943553] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:26.979 [2024-11-17 09:36:31.943881] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:26.979 [2024-11-17 09:36:31.944159] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.979 [2024-11-17 09:36:31.944185] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.979 [2024-11-17 09:36:31.944203] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.979 [2024-11-17 09:36:31.944221] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:26.979 [2024-11-17 09:36:31.957515] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.979 [2024-11-17 09:36:31.958025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.979 [2024-11-17 09:36:31.958063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:26.979 [2024-11-17 09:36:31.958087] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:26.979 [2024-11-17 09:36:31.958459] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:26.979 [2024-11-17 09:36:31.958769] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.979 [2024-11-17 09:36:31.958794] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.979 [2024-11-17 09:36:31.958812] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.979 [2024-11-17 09:36:31.958830] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:26.979 [2024-11-17 09:36:31.972021] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.979 [2024-11-17 09:36:31.972482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.979 [2024-11-17 09:36:31.972534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:26.979 [2024-11-17 09:36:31.972559] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:26.979 [2024-11-17 09:36:31.972878] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:26.979 [2024-11-17 09:36:31.973154] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.979 [2024-11-17 09:36:31.973192] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.979 [2024-11-17 09:36:31.973211] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.979 [2024-11-17 09:36:31.973228] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:26.979 [2024-11-17 09:36:31.987135] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.240 [2024-11-17 09:36:31.987630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.240 [2024-11-17 09:36:31.987667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:27.240 [2024-11-17 09:36:31.987691] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:27.240 [2024-11-17 09:36:31.988017] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:27.240 [2024-11-17 09:36:31.988324] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.240 [2024-11-17 09:36:31.988351] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.240 [2024-11-17 09:36:31.988397] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.240 [2024-11-17 09:36:31.988417] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:27.240 [2024-11-17 09:36:32.001850] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.240 [2024-11-17 09:36:32.002321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.240 [2024-11-17 09:36:32.002382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:27.240 [2024-11-17 09:36:32.002408] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:27.240 [2024-11-17 09:36:32.002752] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:27.240 [2024-11-17 09:36:32.003031] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.240 [2024-11-17 09:36:32.003061] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.240 [2024-11-17 09:36:32.003081] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.240 [2024-11-17 09:36:32.003098] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:27.240 [2024-11-17 09:36:32.016280] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.240 [2024-11-17 09:36:32.016802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.240 [2024-11-17 09:36:32.016840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:27.240 [2024-11-17 09:36:32.016864] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:27.240 [2024-11-17 09:36:32.017189] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:27.240 [2024-11-17 09:36:32.017535] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.240 [2024-11-17 09:36:32.017563] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.240 [2024-11-17 09:36:32.017598] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.240 [2024-11-17 09:36:32.017618] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:27.240 [2024-11-17 09:36:32.030974] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.240 [2024-11-17 09:36:32.031445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.240 [2024-11-17 09:36:32.031483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:27.240 [2024-11-17 09:36:32.031506] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:27.240 [2024-11-17 09:36:32.031840] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:27.240 [2024-11-17 09:36:32.032148] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.240 [2024-11-17 09:36:32.032174] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.240 [2024-11-17 09:36:32.032192] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.240 [2024-11-17 09:36:32.032209] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:27.240 [2024-11-17 09:36:32.045477] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.240 [2024-11-17 09:36:32.045937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.240 [2024-11-17 09:36:32.045989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:27.240 [2024-11-17 09:36:32.046013] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:27.240 [2024-11-17 09:36:32.046335] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:27.240 [2024-11-17 09:36:32.046664] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.240 [2024-11-17 09:36:32.046706] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.240 [2024-11-17 09:36:32.046727] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.240 [2024-11-17 09:36:32.046754] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:27.240 [2024-11-17 09:36:32.059945] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.240 [2024-11-17 09:36:32.060425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.240 [2024-11-17 09:36:32.060463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:27.240 [2024-11-17 09:36:32.060487] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:27.240 [2024-11-17 09:36:32.060816] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:27.240 [2024-11-17 09:36:32.061148] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.240 [2024-11-17 09:36:32.061176] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.240 [2024-11-17 09:36:32.061196] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.240 [2024-11-17 09:36:32.061216] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:27.240 [2024-11-17 09:36:32.074805] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.240 [2024-11-17 09:36:32.075286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.240 [2024-11-17 09:36:32.075323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:27.240 [2024-11-17 09:36:32.075347] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:27.240 [2024-11-17 09:36:32.075671] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:27.240 [2024-11-17 09:36:32.075949] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.240 [2024-11-17 09:36:32.075974] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.240 [2024-11-17 09:36:32.075992] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.240 [2024-11-17 09:36:32.076010] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:27.240 [2024-11-17 09:36:32.089463] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.240 [2024-11-17 09:36:32.089904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.240 [2024-11-17 09:36:32.089956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:27.240 [2024-11-17 09:36:32.089979] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:27.240 [2024-11-17 09:36:32.090298] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:27.240 [2024-11-17 09:36:32.090637] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.240 [2024-11-17 09:36:32.090681] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.240 [2024-11-17 09:36:32.090701] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.240 [2024-11-17 09:36:32.090719] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:27.240 [2024-11-17 09:36:32.104126] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.240 [2024-11-17 09:36:32.104608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.240 [2024-11-17 09:36:32.104660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:27.240 [2024-11-17 09:36:32.104684] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:27.240 [2024-11-17 09:36:32.105009] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:27.240 [2024-11-17 09:36:32.105286] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.240 [2024-11-17 09:36:32.105311] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.240 [2024-11-17 09:36:32.105330] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.241 [2024-11-17 09:36:32.105362] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:27.241 [2024-11-17 09:36:32.118600] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.241 [2024-11-17 09:36:32.119161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.241 [2024-11-17 09:36:32.119199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:27.241 [2024-11-17 09:36:32.119239] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:27.241 [2024-11-17 09:36:32.119568] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:27.241 [2024-11-17 09:36:32.119889] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.241 [2024-11-17 09:36:32.119929] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.241 [2024-11-17 09:36:32.119948] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.241 [2024-11-17 09:36:32.119965] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:27.241 [2024-11-17 09:36:32.133249] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.241 [2024-11-17 09:36:32.133731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.241 [2024-11-17 09:36:32.133767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:27.241 [2024-11-17 09:36:32.133807] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:27.241 [2024-11-17 09:36:32.134131] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:27.241 [2024-11-17 09:36:32.134460] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.241 [2024-11-17 09:36:32.134504] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.241 [2024-11-17 09:36:32.134524] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.241 [2024-11-17 09:36:32.134558] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:27.241 [2024-11-17 09:36:32.147895] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.241 [2024-11-17 09:36:32.148353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.241 [2024-11-17 09:36:32.148411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:27.241 [2024-11-17 09:36:32.148443] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:27.241 [2024-11-17 09:36:32.148783] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:27.241 [2024-11-17 09:36:32.149061] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.241 [2024-11-17 09:36:32.149086] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.241 [2024-11-17 09:36:32.149104] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.241 [2024-11-17 09:36:32.149122] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:27.241 [2024-11-17 09:36:32.162357] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.241 [2024-11-17 09:36:32.162880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.241 [2024-11-17 09:36:32.162916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:27.241 [2024-11-17 09:36:32.162938] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:27.241 [2024-11-17 09:36:32.163253] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:27.241 [2024-11-17 09:36:32.163596] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.241 [2024-11-17 09:36:32.163624] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.241 [2024-11-17 09:36:32.163644] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.241 [2024-11-17 09:36:32.163663] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:27.241 [2024-11-17 09:36:32.176846] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.241 [2024-11-17 09:36:32.177320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.241 [2024-11-17 09:36:32.177357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:27.241 [2024-11-17 09:36:32.177390] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:27.241 [2024-11-17 09:36:32.177719] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:27.241 [2024-11-17 09:36:32.178012] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.241 [2024-11-17 09:36:32.178038] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.241 [2024-11-17 09:36:32.178056] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.241 [2024-11-17 09:36:32.178088] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:27.241 [2024-11-17 09:36:32.191396] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.241 [2024-11-17 09:36:32.191958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.241 [2024-11-17 09:36:32.191995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:27.241 [2024-11-17 09:36:32.192020] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:27.241 [2024-11-17 09:36:32.192381] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:27.241 [2024-11-17 09:36:32.192710] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.241 [2024-11-17 09:36:32.192752] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.241 [2024-11-17 09:36:32.192771] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.241 [2024-11-17 09:36:32.192788] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:27.241 [2024-11-17 09:36:32.206041] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.241 [2024-11-17 09:36:32.206544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.241 [2024-11-17 09:36:32.206582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:27.241 [2024-11-17 09:36:32.206606] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:27.241 [2024-11-17 09:36:32.206929] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:27.241 [2024-11-17 09:36:32.207207] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.241 [2024-11-17 09:36:32.207232] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.241 [2024-11-17 09:36:32.207250] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.241 [2024-11-17 09:36:32.207268] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:27.241 [2024-11-17 09:36:32.220495] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.241 [2024-11-17 09:36:32.221003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.241 [2024-11-17 09:36:32.221040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:27.241 [2024-11-17 09:36:32.221063] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:27.241 [2024-11-17 09:36:32.221430] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:27.241 [2024-11-17 09:36:32.221753] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.241 [2024-11-17 09:36:32.221779] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.241 [2024-11-17 09:36:32.221797] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.241 [2024-11-17 09:36:32.221814] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:27.241 [2024-11-17 09:36:32.234987] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.241 [2024-11-17 09:36:32.235514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.241 [2024-11-17 09:36:32.235552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:27.241 [2024-11-17 09:36:32.235576] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:27.241 [2024-11-17 09:36:32.235898] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:27.241 [2024-11-17 09:36:32.236175] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.241 [2024-11-17 09:36:32.236200] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.241 [2024-11-17 09:36:32.236223] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.241 [2024-11-17 09:36:32.236241] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:27.502 [2024-11-17 09:36:32.250199] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.502 [2024-11-17 09:36:32.250701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.502 [2024-11-17 09:36:32.250755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:27.502 [2024-11-17 09:36:32.250794] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:27.502 [2024-11-17 09:36:32.251092] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:27.502 [2024-11-17 09:36:32.251394] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.502 [2024-11-17 09:36:32.251422] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.502 [2024-11-17 09:36:32.251455] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.502 [2024-11-17 09:36:32.251474] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:27.502 [2024-11-17 09:36:32.264844] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.502 [2024-11-17 09:36:32.265313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.502 [2024-11-17 09:36:32.265365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:27.502 [2024-11-17 09:36:32.265400] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:27.502 [2024-11-17 09:36:32.265743] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:27.502 [2024-11-17 09:36:32.266021] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.502 [2024-11-17 09:36:32.266046] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.502 [2024-11-17 09:36:32.266064] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.502 [2024-11-17 09:36:32.266082] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:27.502 [2024-11-17 09:36:32.279285] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.502 [2024-11-17 09:36:32.279740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.502 [2024-11-17 09:36:32.279792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:27.502 [2024-11-17 09:36:32.279816] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:27.502 [2024-11-17 09:36:32.280139] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:27.502 [2024-11-17 09:36:32.280462] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.502 [2024-11-17 09:36:32.280505] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.502 [2024-11-17 09:36:32.280524] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.502 [2024-11-17 09:36:32.280558] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:27.502 [2024-11-17 09:36:32.293848] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.502 [2024-11-17 09:36:32.294300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.502 [2024-11-17 09:36:32.294352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:27.502 [2024-11-17 09:36:32.294386] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:27.502 [2024-11-17 09:36:32.294730] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:27.502 [2024-11-17 09:36:32.295008] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.502 [2024-11-17 09:36:32.295033] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.502 [2024-11-17 09:36:32.295052] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.502 [2024-11-17 09:36:32.295069] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:27.502 [2024-11-17 09:36:32.308274] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.502 [2024-11-17 09:36:32.308835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.502 [2024-11-17 09:36:32.308873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:27.502 [2024-11-17 09:36:32.308896] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:27.502 [2024-11-17 09:36:32.309217] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:27.502 [2024-11-17 09:36:32.309532] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.502 [2024-11-17 09:36:32.309576] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.502 [2024-11-17 09:36:32.309595] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.502 [2024-11-17 09:36:32.309628] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:27.502 [2024-11-17 09:36:32.322888] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.502 [2024-11-17 09:36:32.323415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.502 [2024-11-17 09:36:32.323453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:27.502 [2024-11-17 09:36:32.323476] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:27.502 [2024-11-17 09:36:32.323806] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:27.502 [2024-11-17 09:36:32.324101] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.502 [2024-11-17 09:36:32.324127] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.502 [2024-11-17 09:36:32.324147] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.502 [2024-11-17 09:36:32.324165] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:27.502 [2024-11-17 09:36:32.337835] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.502 [2024-11-17 09:36:32.338305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.502 [2024-11-17 09:36:32.338347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:27.502 [2024-11-17 09:36:32.338379] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:27.502 [2024-11-17 09:36:32.338699] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:27.502 [2024-11-17 09:36:32.338992] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.502 [2024-11-17 09:36:32.339018] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.502 [2024-11-17 09:36:32.339037] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.502 [2024-11-17 09:36:32.339054] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:27.502 [2024-11-17 09:36:32.352635] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.502 [2024-11-17 09:36:32.353059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.502 [2024-11-17 09:36:32.353110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:27.502 [2024-11-17 09:36:32.353134] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:27.502 [2024-11-17 09:36:32.353492] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:27.502 [2024-11-17 09:36:32.353840] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.502 [2024-11-17 09:36:32.353866] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.502 [2024-11-17 09:36:32.353884] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.502 [2024-11-17 09:36:32.353902] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:27.502 [2024-11-17 09:36:32.367051] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.502 [2024-11-17 09:36:32.367535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.502 [2024-11-17 09:36:32.367574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:27.502 [2024-11-17 09:36:32.367598] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:27.502 [2024-11-17 09:36:32.367923] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:27.502 [2024-11-17 09:36:32.368202] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.502 [2024-11-17 09:36:32.368227] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.502 [2024-11-17 09:36:32.368245] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.503 [2024-11-17 09:36:32.368263] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:27.503 [2024-11-17 09:36:32.381726] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.503 [2024-11-17 09:36:32.382212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.503 [2024-11-17 09:36:32.382250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:27.503 [2024-11-17 09:36:32.382273] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:27.503 [2024-11-17 09:36:32.382607] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:27.503 [2024-11-17 09:36:32.382921] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.503 [2024-11-17 09:36:32.382947] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.503 [2024-11-17 09:36:32.382966] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.503 [2024-11-17 09:36:32.382984] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:27.503 [2024-11-17 09:36:32.396290] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.503 [2024-11-17 09:36:32.396865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.503 [2024-11-17 09:36:32.396903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:27.503 [2024-11-17 09:36:32.396927] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:27.503 [2024-11-17 09:36:32.397249] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:27.503 [2024-11-17 09:36:32.397596] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.503 [2024-11-17 09:36:32.397624] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.503 [2024-11-17 09:36:32.397644] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.503 [2024-11-17 09:36:32.397663] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:27.503 [2024-11-17 09:36:32.410771] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.503 [2024-11-17 09:36:32.411316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.503 [2024-11-17 09:36:32.411354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:27.503 [2024-11-17 09:36:32.411389] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:27.503 [2024-11-17 09:36:32.411716] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:27.503 [2024-11-17 09:36:32.412011] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.503 [2024-11-17 09:36:32.412037] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.503 [2024-11-17 09:36:32.412055] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.503 [2024-11-17 09:36:32.412073] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:27.503 [2024-11-17 09:36:32.425254] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.503 [2024-11-17 09:36:32.425706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.503 [2024-11-17 09:36:32.425759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:27.503 [2024-11-17 09:36:32.425783] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:27.503 [2024-11-17 09:36:32.426108] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:27.503 [2024-11-17 09:36:32.426433] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.503 [2024-11-17 09:36:32.426461] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.503 [2024-11-17 09:36:32.426481] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.503 [2024-11-17 09:36:32.426500] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:27.503 [2024-11-17 09:36:32.439785] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.503 [2024-11-17 09:36:32.440204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.503 [2024-11-17 09:36:32.440240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:27.503 [2024-11-17 09:36:32.440262] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:27.503 [2024-11-17 09:36:32.440639] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:27.503 [2024-11-17 09:36:32.440952] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.503 [2024-11-17 09:36:32.440977] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.503 [2024-11-17 09:36:32.440995] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.503 [2024-11-17 09:36:32.441013] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:27.503 [2024-11-17 09:36:32.454215] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.503 [2024-11-17 09:36:32.454692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.503 [2024-11-17 09:36:32.454730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:27.503 [2024-11-17 09:36:32.454754] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:27.503 [2024-11-17 09:36:32.455094] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:27.503 [2024-11-17 09:36:32.455406] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.503 [2024-11-17 09:36:32.455450] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.503 [2024-11-17 09:36:32.455471] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.503 [2024-11-17 09:36:32.455490] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:27.503 [2024-11-17 09:36:32.468701] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.503 [2024-11-17 09:36:32.469169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.503 [2024-11-17 09:36:32.469205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:27.503 [2024-11-17 09:36:32.469228] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:27.503 [2024-11-17 09:36:32.469555] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:27.503 [2024-11-17 09:36:32.469872] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.503 [2024-11-17 09:36:32.469912] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.503 [2024-11-17 09:36:32.469936] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.503 [2024-11-17 09:36:32.469954] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:27.503 [2024-11-17 09:36:32.483263] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.503 [2024-11-17 09:36:32.483745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.503 [2024-11-17 09:36:32.483797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:27.503 [2024-11-17 09:36:32.483822] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:27.503 [2024-11-17 09:36:32.484148] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:27.503 [2024-11-17 09:36:32.484469] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.503 [2024-11-17 09:36:32.484497] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.503 [2024-11-17 09:36:32.484517] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.503 [2024-11-17 09:36:32.484536] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:27.503 [2024-11-17 09:36:32.497845] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.503 [2024-11-17 09:36:32.498314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.503 [2024-11-17 09:36:32.498351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:27.503 [2024-11-17 09:36:32.498384] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:27.503 [2024-11-17 09:36:32.498715] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:27.503 [2024-11-17 09:36:32.498993] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.503 [2024-11-17 09:36:32.499018] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.503 [2024-11-17 09:36:32.499036] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.503 [2024-11-17 09:36:32.499054] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:27.763 [2024-11-17 09:36:32.512945] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.763 [2024-11-17 09:36:32.513407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.763 [2024-11-17 09:36:32.513461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:27.763 [2024-11-17 09:36:32.513485] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:27.763 [2024-11-17 09:36:32.513809] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:27.763 [2024-11-17 09:36:32.514086] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.763 [2024-11-17 09:36:32.514111] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.763 [2024-11-17 09:36:32.514129] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.763 [2024-11-17 09:36:32.514147] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:27.763 [2024-11-17 09:36:32.527415] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.763 [2024-11-17 09:36:32.527936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.763 [2024-11-17 09:36:32.527973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:27.763 [2024-11-17 09:36:32.527997] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:27.763 [2024-11-17 09:36:32.528325] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:27.763 [2024-11-17 09:36:32.528646] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.763 [2024-11-17 09:36:32.528689] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.763 [2024-11-17 09:36:32.528707] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.763 [2024-11-17 09:36:32.528726] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:27.764 [2024-11-17 09:36:32.542987] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.764 [2024-11-17 09:36:32.543499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.764 [2024-11-17 09:36:32.543541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:27.764 [2024-11-17 09:36:32.543568] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:27.764 [2024-11-17 09:36:32.543904] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:27.764 [2024-11-17 09:36:32.544242] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.764 [2024-11-17 09:36:32.544273] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.764 [2024-11-17 09:36:32.544296] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.764 [2024-11-17 09:36:32.544318] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:27.764 [2024-11-17 09:36:32.558467] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.764 [2024-11-17 09:36:32.558991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.764 [2024-11-17 09:36:32.559032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:27.764 [2024-11-17 09:36:32.559059] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:27.764 [2024-11-17 09:36:32.559408] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:27.764 [2024-11-17 09:36:32.559746] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.764 [2024-11-17 09:36:32.559777] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.764 [2024-11-17 09:36:32.559799] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.764 [2024-11-17 09:36:32.559821] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:27.764 [2024-11-17 09:36:32.574031] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.764 [2024-11-17 09:36:32.574527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.764 [2024-11-17 09:36:32.574574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:27.764 [2024-11-17 09:36:32.574601] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:27.764 [2024-11-17 09:36:32.574936] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:27.764 [2024-11-17 09:36:32.575274] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.764 [2024-11-17 09:36:32.575305] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.764 [2024-11-17 09:36:32.575327] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.764 [2024-11-17 09:36:32.575349] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:27.764 [2024-11-17 09:36:32.589509] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.764 [2024-11-17 09:36:32.590084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.764 [2024-11-17 09:36:32.590125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:27.764 [2024-11-17 09:36:32.590152] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:27.764 [2024-11-17 09:36:32.590504] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:27.764 [2024-11-17 09:36:32.590842] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.764 [2024-11-17 09:36:32.590872] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.764 [2024-11-17 09:36:32.590894] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.764 [2024-11-17 09:36:32.590916] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:27.764 [2024-11-17 09:36:32.605100] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.764 [2024-11-17 09:36:32.605583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.764 [2024-11-17 09:36:32.605624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:27.764 [2024-11-17 09:36:32.605664] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:27.764 [2024-11-17 09:36:32.606006] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:27.764 [2024-11-17 09:36:32.606351] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.764 [2024-11-17 09:36:32.606392] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.764 [2024-11-17 09:36:32.606427] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.764 [2024-11-17 09:36:32.606449] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:27.764 [2024-11-17 09:36:32.620573] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.764 [2024-11-17 09:36:32.621072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.764 [2024-11-17 09:36:32.621113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:27.764 [2024-11-17 09:36:32.621140] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:27.764 [2024-11-17 09:36:32.621497] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:27.764 [2024-11-17 09:36:32.621836] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.764 [2024-11-17 09:36:32.621867] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.764 [2024-11-17 09:36:32.621890] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.764 [2024-11-17 09:36:32.621912] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:27.764 [2024-11-17 09:36:32.636015] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.764 [2024-11-17 09:36:32.636536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.764 [2024-11-17 09:36:32.636578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:27.764 [2024-11-17 09:36:32.636604] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:27.764 [2024-11-17 09:36:32.636943] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:27.764 [2024-11-17 09:36:32.637280] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.764 [2024-11-17 09:36:32.637311] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.764 [2024-11-17 09:36:32.637334] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.764 [2024-11-17 09:36:32.637356] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:27.764 [2024-11-17 09:36:32.651591] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.764 [2024-11-17 09:36:32.652136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.764 [2024-11-17 09:36:32.652178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:27.764 [2024-11-17 09:36:32.652204] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:27.764 [2024-11-17 09:36:32.652565] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:27.764 [2024-11-17 09:36:32.652909] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.764 [2024-11-17 09:36:32.652941] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.764 [2024-11-17 09:36:32.652963] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.764 [2024-11-17 09:36:32.652985] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:27.764 [2024-11-17 09:36:32.667183] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.764 [2024-11-17 09:36:32.667705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.764 [2024-11-17 09:36:32.667747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:27.764 [2024-11-17 09:36:32.667774] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:27.764 [2024-11-17 09:36:32.668111] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:27.764 [2024-11-17 09:36:32.668469] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.764 [2024-11-17 09:36:32.668515] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.764 [2024-11-17 09:36:32.668539] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.764 [2024-11-17 09:36:32.668560] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:27.764 [2024-11-17 09:36:32.682766] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.764 [2024-11-17 09:36:32.683300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.764 [2024-11-17 09:36:32.683342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:27.764 [2024-11-17 09:36:32.683379] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:27.764 [2024-11-17 09:36:32.683727] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:27.764 [2024-11-17 09:36:32.684070] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.765 [2024-11-17 09:36:32.684100] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.765 [2024-11-17 09:36:32.684123] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.765 [2024-11-17 09:36:32.684145] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:27.765 [2024-11-17 09:36:32.698352] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.765 [2024-11-17 09:36:32.698959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.765 [2024-11-17 09:36:32.699019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:27.765 [2024-11-17 09:36:32.699045] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:27.765 [2024-11-17 09:36:32.699393] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:27.765 [2024-11-17 09:36:32.699731] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.765 [2024-11-17 09:36:32.699761] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.765 [2024-11-17 09:36:32.699783] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.765 [2024-11-17 09:36:32.699805] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:27.765 [2024-11-17 09:36:32.713907] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.765 [2024-11-17 09:36:32.714414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.765 [2024-11-17 09:36:32.714456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:27.765 [2024-11-17 09:36:32.714483] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:27.765 [2024-11-17 09:36:32.714818] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:27.765 [2024-11-17 09:36:32.715153] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.765 [2024-11-17 09:36:32.715184] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.765 [2024-11-17 09:36:32.715207] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.765 [2024-11-17 09:36:32.715237] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:27.765 [2024-11-17 09:36:32.729338] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.765 [2024-11-17 09:36:32.729865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.765 [2024-11-17 09:36:32.729907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:27.765 [2024-11-17 09:36:32.729933] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:27.765 [2024-11-17 09:36:32.730267] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:27.765 [2024-11-17 09:36:32.730618] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.765 [2024-11-17 09:36:32.730649] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.765 [2024-11-17 09:36:32.730671] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.765 [2024-11-17 09:36:32.730693] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:27.765 [2024-11-17 09:36:32.744742] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.765 [2024-11-17 09:36:32.745336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.765 [2024-11-17 09:36:32.745404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:27.765 [2024-11-17 09:36:32.745431] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:27.765 [2024-11-17 09:36:32.745766] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:27.765 [2024-11-17 09:36:32.746103] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.765 [2024-11-17 09:36:32.746134] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.765 [2024-11-17 09:36:32.746156] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.765 [2024-11-17 09:36:32.746177] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:27.765 [2024-11-17 09:36:32.760250] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.765 [2024-11-17 09:36:32.760778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.765 [2024-11-17 09:36:32.760819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:27.765 [2024-11-17 09:36:32.760845] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:27.765 [2024-11-17 09:36:32.761181] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:27.765 [2024-11-17 09:36:32.761531] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.765 [2024-11-17 09:36:32.761563] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.765 [2024-11-17 09:36:32.761586] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.765 [2024-11-17 09:36:32.761608] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.025 [2024-11-17 09:36:32.775690] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.025 [2024-11-17 09:36:32.776287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.025 [2024-11-17 09:36:32.776328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.025 [2024-11-17 09:36:32.776355] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:28.025 [2024-11-17 09:36:32.776704] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.025 [2024-11-17 09:36:32.777041] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.025 [2024-11-17 09:36:32.777072] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.025 [2024-11-17 09:36:32.777094] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.025 [2024-11-17 09:36:32.777116] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:28.025 [2024-11-17 09:36:32.791220] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.025 [2024-11-17 09:36:32.791748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.025 [2024-11-17 09:36:32.791789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.025 [2024-11-17 09:36:32.791815] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:28.025 [2024-11-17 09:36:32.792151] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.025 [2024-11-17 09:36:32.792502] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.025 [2024-11-17 09:36:32.792534] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.025 [2024-11-17 09:36:32.792556] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.025 [2024-11-17 09:36:32.792577] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.025 [2024-11-17 09:36:32.806658] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.025 [2024-11-17 09:36:32.807221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.025 [2024-11-17 09:36:32.807264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.025 [2024-11-17 09:36:32.807290] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:28.025 [2024-11-17 09:36:32.807639] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.025 [2024-11-17 09:36:32.807977] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.025 [2024-11-17 09:36:32.808008] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.025 [2024-11-17 09:36:32.808030] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.025 [2024-11-17 09:36:32.808051] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:28.025 [2024-11-17 09:36:32.822125] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.025 [2024-11-17 09:36:32.822643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.025 [2024-11-17 09:36:32.822684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.025 [2024-11-17 09:36:32.822719] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:28.025 [2024-11-17 09:36:32.823056] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.025 [2024-11-17 09:36:32.823421] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.025 [2024-11-17 09:36:32.823454] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.025 [2024-11-17 09:36:32.823476] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.025 [2024-11-17 09:36:32.823498] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.025 [2024-11-17 09:36:32.837565] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.025 [2024-11-17 09:36:32.838066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.025 [2024-11-17 09:36:32.838108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.025 [2024-11-17 09:36:32.838137] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:28.025 [2024-11-17 09:36:32.838484] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.025 [2024-11-17 09:36:32.838829] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.025 [2024-11-17 09:36:32.838860] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.025 [2024-11-17 09:36:32.838883] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.025 [2024-11-17 09:36:32.838905] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:28.025 [2024-11-17 09:36:32.852999] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.025 [2024-11-17 09:36:32.853528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.025 [2024-11-17 09:36:32.853569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.025 [2024-11-17 09:36:32.853596] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:28.025 [2024-11-17 09:36:32.853932] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.025 [2024-11-17 09:36:32.854267] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.025 [2024-11-17 09:36:32.854299] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.025 [2024-11-17 09:36:32.854322] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.025 [2024-11-17 09:36:32.854345] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.025 [2024-11-17 09:36:32.868439] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.025 [2024-11-17 09:36:32.868939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.025 [2024-11-17 09:36:32.868980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.025 [2024-11-17 09:36:32.869006] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:28.025 [2024-11-17 09:36:32.869340] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.025 [2024-11-17 09:36:32.869694] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.025 [2024-11-17 09:36:32.869726] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.025 [2024-11-17 09:36:32.869749] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.025 [2024-11-17 09:36:32.869770] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:28.025 2605.40 IOPS, 10.18 MiB/s [2024-11-17T08:36:33.038Z] [2024-11-17 09:36:32.885684] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.025 [2024-11-17 09:36:32.886159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.025 [2024-11-17 09:36:32.886202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.025 [2024-11-17 09:36:32.886228] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:28.025 [2024-11-17 09:36:32.886574] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.025 [2024-11-17 09:36:32.886911] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.025 [2024-11-17 09:36:32.886942] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.025 [2024-11-17 09:36:32.886964] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.025 [2024-11-17 09:36:32.886986] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.025 [2024-11-17 09:36:32.901111] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.025 [2024-11-17 09:36:32.901673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.025 [2024-11-17 09:36:32.901716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.025 [2024-11-17 09:36:32.901743] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:28.025 [2024-11-17 09:36:32.902077] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.026 [2024-11-17 09:36:32.902427] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.026 [2024-11-17 09:36:32.902460] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.026 [2024-11-17 09:36:32.902482] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.026 [2024-11-17 09:36:32.902504] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:28.026 [2024-11-17 09:36:32.916576] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.026 [2024-11-17 09:36:32.917065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.026 [2024-11-17 09:36:32.917106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.026 [2024-11-17 09:36:32.917133] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:28.026 [2024-11-17 09:36:32.917479] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.026 [2024-11-17 09:36:32.917821] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.026 [2024-11-17 09:36:32.917859] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.026 [2024-11-17 09:36:32.917883] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.026 [2024-11-17 09:36:32.917906] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.026 [2024-11-17 09:36:32.931988] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.026 [2024-11-17 09:36:32.932488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.026 [2024-11-17 09:36:32.932529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.026 [2024-11-17 09:36:32.932556] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:28.026 [2024-11-17 09:36:32.932892] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.026 [2024-11-17 09:36:32.933235] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.026 [2024-11-17 09:36:32.933265] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.026 [2024-11-17 09:36:32.933288] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.026 [2024-11-17 09:36:32.933309] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:28.026 [2024-11-17 09:36:32.947395] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.026 [2024-11-17 09:36:32.947896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.026 [2024-11-17 09:36:32.947937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.026 [2024-11-17 09:36:32.947963] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:28.026 [2024-11-17 09:36:32.948297] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.026 [2024-11-17 09:36:32.948645] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.026 [2024-11-17 09:36:32.948678] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.026 [2024-11-17 09:36:32.948700] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.026 [2024-11-17 09:36:32.948722] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.026 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3136883 Killed "${NVMF_APP[@]}" "$@" 00:36:28.026 09:36:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:36:28.026 09:36:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:36:28.026 09:36:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:28.026 09:36:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:28.026 09:36:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:28.026 09:36:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3138008 00:36:28.026 09:36:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:36:28.026 09:36:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3138008 00:36:28.026 09:36:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 3138008 ']' 00:36:28.026 09:36:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:28.026 09:36:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:28.026 09:36:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:28.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:28.026 09:36:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:28.026 09:36:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:28.026 [2024-11-17 09:36:32.962784] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.026 [2024-11-17 09:36:32.963285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.026 [2024-11-17 09:36:32.963327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.026 [2024-11-17 09:36:32.963352] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:28.026 [2024-11-17 09:36:32.963700] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.026 [2024-11-17 09:36:32.964038] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.026 [2024-11-17 09:36:32.964069] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.026 [2024-11-17 09:36:32.964091] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.026 [2024-11-17 09:36:32.964113] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.026 [2024-11-17 09:36:32.978213] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.026 [2024-11-17 09:36:32.978710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.026 [2024-11-17 09:36:32.978751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.026 [2024-11-17 09:36:32.978777] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:28.026 [2024-11-17 09:36:32.979114] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.026 [2024-11-17 09:36:32.979467] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.026 [2024-11-17 09:36:32.979499] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.026 [2024-11-17 09:36:32.979522] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.026 [2024-11-17 09:36:32.979544] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:28.026 [2024-11-17 09:36:32.993958] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.026 [2024-11-17 09:36:32.994548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.026 [2024-11-17 09:36:32.994599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.026 [2024-11-17 09:36:32.994629] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:28.026 [2024-11-17 09:36:32.994978] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.026 [2024-11-17 09:36:32.995327] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.026 [2024-11-17 09:36:32.995360] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.026 [2024-11-17 09:36:32.995408] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.026 [2024-11-17 09:36:32.995435] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.026 [2024-11-17 09:36:33.009587] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.026 [2024-11-17 09:36:33.010186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.026 [2024-11-17 09:36:33.010234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.026 [2024-11-17 09:36:33.010263] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:28.026 [2024-11-17 09:36:33.010850] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.026 [2024-11-17 09:36:33.011195] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.026 [2024-11-17 09:36:33.011228] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.026 [2024-11-17 09:36:33.011252] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.026 [2024-11-17 09:36:33.011277] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:28.026 [2024-11-17 09:36:33.025059] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.026 [2024-11-17 09:36:33.025570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.026 [2024-11-17 09:36:33.025612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.026 [2024-11-17 09:36:33.025639] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:28.026 [2024-11-17 09:36:33.025979] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.026 [2024-11-17 09:36:33.026321] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.026 [2024-11-17 09:36:33.026352] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.027 [2024-11-17 09:36:33.026387] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.027 [2024-11-17 09:36:33.026412] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.286 [2024-11-17 09:36:33.040725] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.286 [2024-11-17 09:36:33.041245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.286 [2024-11-17 09:36:33.041287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.286 [2024-11-17 09:36:33.041314] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:28.286 [2024-11-17 09:36:33.041665] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.286 [2024-11-17 09:36:33.042008] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.286 [2024-11-17 09:36:33.042039] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.286 [2024-11-17 09:36:33.042079] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.286 [2024-11-17 09:36:33.042102] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:28.286 [2024-11-17 09:36:33.054172] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:36:28.286 [2024-11-17 09:36:33.054290] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:28.286 [2024-11-17 09:36:33.055528] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.286 [2024-11-17 09:36:33.056077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.286 [2024-11-17 09:36:33.056113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.286 [2024-11-17 09:36:33.056137] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:28.286 [2024-11-17 09:36:33.056461] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.286 [2024-11-17 09:36:33.056764] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.286 [2024-11-17 09:36:33.056790] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.286 [2024-11-17 09:36:33.056825] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.286 [2024-11-17 09:36:33.056843] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.286 [2024-11-17 09:36:33.070259] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.286 [2024-11-17 09:36:33.070788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.286 [2024-11-17 09:36:33.070839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.286 [2024-11-17 09:36:33.070864] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:28.286 [2024-11-17 09:36:33.071189] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.286 [2024-11-17 09:36:33.071509] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.286 [2024-11-17 09:36:33.071551] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.286 [2024-11-17 09:36:33.071570] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.286 [2024-11-17 09:36:33.071604] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:28.286 [2024-11-17 09:36:33.085009] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.286 [2024-11-17 09:36:33.085511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.286 [2024-11-17 09:36:33.085550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.286 [2024-11-17 09:36:33.085575] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:28.286 [2024-11-17 09:36:33.085920] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.286 [2024-11-17 09:36:33.086233] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.286 [2024-11-17 09:36:33.086259] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.286 [2024-11-17 09:36:33.086278] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.286 [2024-11-17 09:36:33.086308] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.286 [2024-11-17 09:36:33.099928] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.286 [2024-11-17 09:36:33.100430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.286 [2024-11-17 09:36:33.100469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.286 [2024-11-17 09:36:33.100494] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:28.286 [2024-11-17 09:36:33.100828] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.286 [2024-11-17 09:36:33.101175] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.286 [2024-11-17 09:36:33.101203] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.286 [2024-11-17 09:36:33.101224] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.286 [2024-11-17 09:36:33.101243] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:28.287 [2024-11-17 09:36:33.114792] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.287 [2024-11-17 09:36:33.115266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.287 [2024-11-17 09:36:33.115317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.287 [2024-11-17 09:36:33.115342] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:28.287 [2024-11-17 09:36:33.115669] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.287 [2024-11-17 09:36:33.115970] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.287 [2024-11-17 09:36:33.115997] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.287 [2024-11-17 09:36:33.116016] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.287 [2024-11-17 09:36:33.116034] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.287 [2024-11-17 09:36:33.129669] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.287 [2024-11-17 09:36:33.130118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.287 [2024-11-17 09:36:33.130168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.287 [2024-11-17 09:36:33.130190] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:28.287 [2024-11-17 09:36:33.130527] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.287 [2024-11-17 09:36:33.130830] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.287 [2024-11-17 09:36:33.130856] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.287 [2024-11-17 09:36:33.130875] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.287 [2024-11-17 09:36:33.130893] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:28.287 [2024-11-17 09:36:33.144497] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.287 [2024-11-17 09:36:33.144969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.287 [2024-11-17 09:36:33.145020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.287 [2024-11-17 09:36:33.145044] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:28.287 [2024-11-17 09:36:33.145395] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.287 [2024-11-17 09:36:33.145702] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.287 [2024-11-17 09:36:33.145728] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.287 [2024-11-17 09:36:33.145762] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.287 [2024-11-17 09:36:33.145781] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.287 [2024-11-17 09:36:33.160140] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.287 [2024-11-17 09:36:33.160643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.287 [2024-11-17 09:36:33.160681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.287 [2024-11-17 09:36:33.160704] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:28.287 [2024-11-17 09:36:33.161048] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.287 [2024-11-17 09:36:33.161417] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.287 [2024-11-17 09:36:33.161446] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.287 [2024-11-17 09:36:33.161466] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.287 [2024-11-17 09:36:33.161485] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:28.287 [2024-11-17 09:36:33.175707] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.287 [2024-11-17 09:36:33.176299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.287 [2024-11-17 09:36:33.176336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.287 [2024-11-17 09:36:33.176360] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:28.287 [2024-11-17 09:36:33.176697] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.287 [2024-11-17 09:36:33.176976] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.287 [2024-11-17 09:36:33.177002] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.287 [2024-11-17 09:36:33.177020] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.287 [2024-11-17 09:36:33.177038] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.287 [2024-11-17 09:36:33.191271] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.287 [2024-11-17 09:36:33.191847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.287 [2024-11-17 09:36:33.191885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.287 [2024-11-17 09:36:33.191915] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:28.287 [2024-11-17 09:36:33.192277] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.287 [2024-11-17 09:36:33.192604] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.287 [2024-11-17 09:36:33.192632] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.287 [2024-11-17 09:36:33.192650] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.287 [2024-11-17 09:36:33.192668] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:28.287 [2024-11-17 09:36:33.206720] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.287 [2024-11-17 09:36:33.207310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.287 [2024-11-17 09:36:33.207346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.287 [2024-11-17 09:36:33.207378] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:28.287 [2024-11-17 09:36:33.207698] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.287 [2024-11-17 09:36:33.207980] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.287 [2024-11-17 09:36:33.208006] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.287 [2024-11-17 09:36:33.208024] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.287 [2024-11-17 09:36:33.208042] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.287 [2024-11-17 09:36:33.214483] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:28.287 [2024-11-17 09:36:33.222060] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.287 [2024-11-17 09:36:33.222629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.287 [2024-11-17 09:36:33.222667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.287 [2024-11-17 09:36:33.222691] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:28.287 [2024-11-17 09:36:33.223034] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.287 [2024-11-17 09:36:33.223386] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.287 [2024-11-17 09:36:33.223433] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.287 [2024-11-17 09:36:33.223453] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.287 [2024-11-17 09:36:33.223472] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:28.287 [2024-11-17 09:36:33.237629] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.287 [2024-11-17 09:36:33.238397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.287 [2024-11-17 09:36:33.238448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.287 [2024-11-17 09:36:33.238476] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:28.287 [2024-11-17 09:36:33.238839] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.287 [2024-11-17 09:36:33.239200] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.287 [2024-11-17 09:36:33.239235] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.287 [2024-11-17 09:36:33.239261] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.287 [2024-11-17 09:36:33.239288] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.287 [2024-11-17 09:36:33.253114] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.287 [2024-11-17 09:36:33.253692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.287 [2024-11-17 09:36:33.253743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.287 [2024-11-17 09:36:33.253767] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:28.287 [2024-11-17 09:36:33.254104] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.287 [2024-11-17 09:36:33.254471] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.288 [2024-11-17 09:36:33.254499] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.288 [2024-11-17 09:36:33.254519] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.288 [2024-11-17 09:36:33.254537] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:28.288 [2024-11-17 09:36:33.268616] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.288 [2024-11-17 09:36:33.269086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.288 [2024-11-17 09:36:33.269123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.288 [2024-11-17 09:36:33.269147] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:28.288 [2024-11-17 09:36:33.269503] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.288 [2024-11-17 09:36:33.269785] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.288 [2024-11-17 09:36:33.269810] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.288 [2024-11-17 09:36:33.269829] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.288 [2024-11-17 09:36:33.269847] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.288 [2024-11-17 09:36:33.284130] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.288 [2024-11-17 09:36:33.284680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.288 [2024-11-17 09:36:33.284732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.288 [2024-11-17 09:36:33.284772] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:28.288 [2024-11-17 09:36:33.285130] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.288 [2024-11-17 09:36:33.285497] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.288 [2024-11-17 09:36:33.285532] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.288 [2024-11-17 09:36:33.285552] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.288 [2024-11-17 09:36:33.285571] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:28.547 [2024-11-17 09:36:33.299917] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.547 [2024-11-17 09:36:33.300435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.547 [2024-11-17 09:36:33.300474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.547 [2024-11-17 09:36:33.300499] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:28.547 [2024-11-17 09:36:33.300861] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.547 [2024-11-17 09:36:33.301173] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.547 [2024-11-17 09:36:33.301200] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.547 [2024-11-17 09:36:33.301234] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.547 [2024-11-17 09:36:33.301258] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.547 [2024-11-17 09:36:33.315439] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.547 [2024-11-17 09:36:33.315940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.547 [2024-11-17 09:36:33.315976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.547 [2024-11-17 09:36:33.316000] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:28.547 [2024-11-17 09:36:33.316335] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.547 [2024-11-17 09:36:33.316681] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.547 [2024-11-17 09:36:33.316723] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.547 [2024-11-17 09:36:33.316746] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.547 [2024-11-17 09:36:33.316769] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:28.547 [2024-11-17 09:36:33.330845] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.547 [2024-11-17 09:36:33.331426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.547 [2024-11-17 09:36:33.331464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.547 [2024-11-17 09:36:33.331488] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:28.547 [2024-11-17 09:36:33.331839] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.547 [2024-11-17 09:36:33.332191] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.547 [2024-11-17 09:36:33.332223] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.547 [2024-11-17 09:36:33.332246] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.547 [2024-11-17 09:36:33.332277] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.547 [2024-11-17 09:36:33.346295] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.547 [2024-11-17 09:36:33.346809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.547 [2024-11-17 09:36:33.346861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.547 [2024-11-17 09:36:33.346885] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:28.547 [2024-11-17 09:36:33.347239] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.547 [2024-11-17 09:36:33.347583] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.548 [2024-11-17 09:36:33.347611] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.548 [2024-11-17 09:36:33.347631] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.548 [2024-11-17 09:36:33.347650] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:28.548 [2024-11-17 09:36:33.353870] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:28.548 [2024-11-17 09:36:33.353920] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:28.548 [2024-11-17 09:36:33.353944] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:28.548 [2024-11-17 09:36:33.353969] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:28.548 [2024-11-17 09:36:33.353988] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
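The app_setup_trace notices above spell out how a tracepoint snapshot could be pulled from this run; the lines below only restate them as a shell sketch (the flags and the /dev/shm path are taken verbatim from the notices, while the output redirection and copy destination are illustrative assumptions):

  # Capture a live snapshot of the nvmf app's tracepoints (shm name 'nvmf', instance id 0, as the notice says).
  spdk_trace -s nvmf -i 0 > nvmf_trace.txt
  # Or keep the raw shared-memory trace file for offline analysis/debug.
  cp /dev/shm/nvmf_trace.0 ./nvmf_trace.0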
00:36:28.548 [2024-11-17 09:36:33.356789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:28.548 [2024-11-17 09:36:33.356839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:28.548 [2024-11-17 09:36:33.356844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:28.548 [2024-11-17 09:36:33.361434] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.548 [2024-11-17 09:36:33.362049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.548 [2024-11-17 09:36:33.362090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.548 [2024-11-17 09:36:33.362117] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:28.548 [2024-11-17 09:36:33.362488] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.548 [2024-11-17 09:36:33.362810] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.548 [2024-11-17 09:36:33.362838] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.548 [2024-11-17 09:36:33.362860] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.548 [2024-11-17 09:36:33.362885] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:28.548 [2024-11-17 09:36:33.376507] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.548 [2024-11-17 09:36:33.377177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.548 [2024-11-17 09:36:33.377225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.548 [2024-11-17 09:36:33.377254] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:28.548 [2024-11-17 09:36:33.377614] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.548 [2024-11-17 09:36:33.377928] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.548 [2024-11-17 09:36:33.377957] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.548 [2024-11-17 09:36:33.377981] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.548 [2024-11-17 09:36:33.378005] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
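Three reactors on cores 1, 2 and 3 line up with the EAL coremask -c 0xE shown in the earlier initialization line and with the "Total cores available: 3" notice: 0xE is binary 1110, so core 0 is left out and cores 1-3 each get a reactor. A quick bash sketch of that decoding (illustrative only, not part of the captured log output):

  # Expand coremask 0xE into the CPU cores it selects: bits 1, 2 and 3 set -> cores 1 2 3.
  mask=0xE; printf 'coremask %s -> cores:' "$mask"; for i in 0 1 2 3; do [ $(( (mask >> i) & 1 )) -eq 1 ] && printf ' %d' "$i"; done; echo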
00:36:28.548 [2024-11-17 09:36:33.391462] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.548 [2024-11-17 09:36:33.391986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.548 [2024-11-17 09:36:33.392023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.548 [2024-11-17 09:36:33.392047] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:28.548 [2024-11-17 09:36:33.392411] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.548 [2024-11-17 09:36:33.392765] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.548 [2024-11-17 09:36:33.392792] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.548 [2024-11-17 09:36:33.392811] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.548 [2024-11-17 09:36:33.392830] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:28.548 [2024-11-17 09:36:33.406294] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.548 [2024-11-17 09:36:33.406799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.548 [2024-11-17 09:36:33.406838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.548 [2024-11-17 09:36:33.406863] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:28.548 [2024-11-17 09:36:33.407197] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.548 [2024-11-17 09:36:33.407544] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.548 [2024-11-17 09:36:33.407576] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.548 [2024-11-17 09:36:33.407597] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.548 [2024-11-17 09:36:33.407617] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.548 [2024-11-17 09:36:33.421064] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.548 [2024-11-17 09:36:33.421557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.548 [2024-11-17 09:36:33.421594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.548 [2024-11-17 09:36:33.421618] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:28.548 [2024-11-17 09:36:33.421950] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.548 [2024-11-17 09:36:33.422238] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.548 [2024-11-17 09:36:33.422270] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.548 [2024-11-17 09:36:33.422291] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.548 [2024-11-17 09:36:33.422310] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:28.548 [2024-11-17 09:36:33.436210] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.548 [2024-11-17 09:36:33.436929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.548 [2024-11-17 09:36:33.436978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.548 [2024-11-17 09:36:33.437006] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:28.548 [2024-11-17 09:36:33.437330] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.548 [2024-11-17 09:36:33.437679] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.548 [2024-11-17 09:36:33.437707] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.548 [2024-11-17 09:36:33.437732] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.548 [2024-11-17 09:36:33.437758] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.548 [2024-11-17 09:36:33.451256] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.548 [2024-11-17 09:36:33.451951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.548 [2024-11-17 09:36:33.452003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.548 [2024-11-17 09:36:33.452032] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:28.548 [2024-11-17 09:36:33.452403] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.548 [2024-11-17 09:36:33.452769] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.548 [2024-11-17 09:36:33.452798] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.548 [2024-11-17 09:36:33.452821] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.548 [2024-11-17 09:36:33.452847] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:28.548 [2024-11-17 09:36:33.466323] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.548 [2024-11-17 09:36:33.467096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.548 [2024-11-17 09:36:33.467145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.548 [2024-11-17 09:36:33.467174] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:28.548 [2024-11-17 09:36:33.467529] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.548 [2024-11-17 09:36:33.467850] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.548 [2024-11-17 09:36:33.467879] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.548 [2024-11-17 09:36:33.467908] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.548 [2024-11-17 09:36:33.467933] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.548 [2024-11-17 09:36:33.481339] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.548 [2024-11-17 09:36:33.481899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.548 [2024-11-17 09:36:33.481951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.548 [2024-11-17 09:36:33.481975] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:28.548 [2024-11-17 09:36:33.482311] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.548 [2024-11-17 09:36:33.482658] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.549 [2024-11-17 09:36:33.482687] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.549 [2024-11-17 09:36:33.482708] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.549 [2024-11-17 09:36:33.482727] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:28.549 [2024-11-17 09:36:33.496316] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.549 [2024-11-17 09:36:33.496842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.549 [2024-11-17 09:36:33.496879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.549 [2024-11-17 09:36:33.496903] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:28.549 [2024-11-17 09:36:33.497239] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.549 [2024-11-17 09:36:33.497571] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.549 [2024-11-17 09:36:33.497601] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.549 [2024-11-17 09:36:33.497622] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.549 [2024-11-17 09:36:33.497642] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.549 [2024-11-17 09:36:33.511282] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.549 [2024-11-17 09:36:33.511794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.549 [2024-11-17 09:36:33.511832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.549 [2024-11-17 09:36:33.511857] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:28.549 [2024-11-17 09:36:33.512192] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.549 [2024-11-17 09:36:33.512525] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.549 [2024-11-17 09:36:33.512573] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.549 [2024-11-17 09:36:33.512595] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.549 [2024-11-17 09:36:33.512615] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:28.549 [2024-11-17 09:36:33.526246] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.549 [2024-11-17 09:36:33.526734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.549 [2024-11-17 09:36:33.526772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.549 [2024-11-17 09:36:33.526796] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:28.549 [2024-11-17 09:36:33.527129] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.549 [2024-11-17 09:36:33.527465] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.549 [2024-11-17 09:36:33.527495] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.549 [2024-11-17 09:36:33.527515] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.549 [2024-11-17 09:36:33.527535] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.549 [2024-11-17 09:36:33.541313] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.549 [2024-11-17 09:36:33.541877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.549 [2024-11-17 09:36:33.541916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.549 [2024-11-17 09:36:33.541940] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:28.549 [2024-11-17 09:36:33.542269] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.549 [2024-11-17 09:36:33.542598] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.549 [2024-11-17 09:36:33.542627] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.549 [2024-11-17 09:36:33.542647] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.549 [2024-11-17 09:36:33.542681] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:28.549 [2024-11-17 09:36:33.556667] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.549 [2024-11-17 09:36:33.557156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.549 [2024-11-17 09:36:33.557195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.549 [2024-11-17 09:36:33.557218] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:28.808 [2024-11-17 09:36:33.557533] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.808 [2024-11-17 09:36:33.557864] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.808 [2024-11-17 09:36:33.557891] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.808 [2024-11-17 09:36:33.557911] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.808 [2024-11-17 09:36:33.557931] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.808 [2024-11-17 09:36:33.571547] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.808 [2024-11-17 09:36:33.572090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.808 [2024-11-17 09:36:33.572127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.808 [2024-11-17 09:36:33.572157] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:28.808 [2024-11-17 09:36:33.572508] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.808 [2024-11-17 09:36:33.572841] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.808 [2024-11-17 09:36:33.572867] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.808 [2024-11-17 09:36:33.572887] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.808 [2024-11-17 09:36:33.572906] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:28.808 [2024-11-17 09:36:33.586541] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.808 [2024-11-17 09:36:33.587202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.808 [2024-11-17 09:36:33.587252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.808 [2024-11-17 09:36:33.587280] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:28.808 [2024-11-17 09:36:33.587614] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.808 [2024-11-17 09:36:33.587927] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.808 [2024-11-17 09:36:33.587955] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.808 [2024-11-17 09:36:33.587978] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.808 [2024-11-17 09:36:33.588002] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.808 [2024-11-17 09:36:33.601580] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.808 [2024-11-17 09:36:33.602216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.808 [2024-11-17 09:36:33.602263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.809 [2024-11-17 09:36:33.602291] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:28.809 [2024-11-17 09:36:33.602629] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.809 [2024-11-17 09:36:33.602941] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.809 [2024-11-17 09:36:33.602968] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.809 [2024-11-17 09:36:33.602991] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.809 [2024-11-17 09:36:33.603014] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:28.809 [2024-11-17 09:36:33.616519] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.809 [2024-11-17 09:36:33.617016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.809 [2024-11-17 09:36:33.617054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.809 [2024-11-17 09:36:33.617078] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:28.809 [2024-11-17 09:36:33.617435] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.809 [2024-11-17 09:36:33.617744] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.809 [2024-11-17 09:36:33.617773] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.809 [2024-11-17 09:36:33.617794] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.809 [2024-11-17 09:36:33.617813] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.809 [2024-11-17 09:36:33.631510] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.809 [2024-11-17 09:36:33.632049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.809 [2024-11-17 09:36:33.632086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.809 [2024-11-17 09:36:33.632109] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:28.809 [2024-11-17 09:36:33.632468] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.809 [2024-11-17 09:36:33.632780] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.809 [2024-11-17 09:36:33.632806] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.809 [2024-11-17 09:36:33.632826] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.809 [2024-11-17 09:36:33.632845] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:28.809 [2024-11-17 09:36:33.646446] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.809 [2024-11-17 09:36:33.646986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.809 [2024-11-17 09:36:33.647023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.809 [2024-11-17 09:36:33.647047] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:28.809 [2024-11-17 09:36:33.647409] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.809 [2024-11-17 09:36:33.647749] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.809 [2024-11-17 09:36:33.647775] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.809 [2024-11-17 09:36:33.647794] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.809 [2024-11-17 09:36:33.647813] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.809 [2024-11-17 09:36:33.661240] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.809 [2024-11-17 09:36:33.661769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.809 [2024-11-17 09:36:33.661807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.809 [2024-11-17 09:36:33.661831] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:28.809 [2024-11-17 09:36:33.662164] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.809 [2024-11-17 09:36:33.662484] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.809 [2024-11-17 09:36:33.662517] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.809 [2024-11-17 09:36:33.662538] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.809 [2024-11-17 09:36:33.662557] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:28.809 [2024-11-17 09:36:33.676222] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.809 [2024-11-17 09:36:33.676667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.809 [2024-11-17 09:36:33.676705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.809 [2024-11-17 09:36:33.676729] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:28.809 [2024-11-17 09:36:33.677059] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.809 [2024-11-17 09:36:33.677360] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.809 [2024-11-17 09:36:33.677397] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.809 [2024-11-17 09:36:33.677418] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.809 [2024-11-17 09:36:33.677438] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.809 [2024-11-17 09:36:33.691129] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.809 [2024-11-17 09:36:33.691618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.809 [2024-11-17 09:36:33.691665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.809 [2024-11-17 09:36:33.691690] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:28.809 [2024-11-17 09:36:33.692039] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.809 [2024-11-17 09:36:33.692330] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.809 [2024-11-17 09:36:33.692380] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.809 [2024-11-17 09:36:33.692404] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.809 [2024-11-17 09:36:33.692425] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:28.809 [2024-11-17 09:36:33.706238] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.809 [2024-11-17 09:36:33.706730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.809 [2024-11-17 09:36:33.706783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.809 [2024-11-17 09:36:33.706807] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:28.809 [2024-11-17 09:36:33.707145] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.809 [2024-11-17 09:36:33.707446] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.809 [2024-11-17 09:36:33.707472] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.809 [2024-11-17 09:36:33.707508] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.809 [2024-11-17 09:36:33.707533] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.809 [2024-11-17 09:36:33.721149] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.809 [2024-11-17 09:36:33.721662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.809 [2024-11-17 09:36:33.721700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.809 [2024-11-17 09:36:33.721728] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:28.809 [2024-11-17 09:36:33.722062] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.809 [2024-11-17 09:36:33.722376] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.809 [2024-11-17 09:36:33.722404] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.809 [2024-11-17 09:36:33.722441] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.810 [2024-11-17 09:36:33.722462] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:28.810 [2024-11-17 09:36:33.736237] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.810 [2024-11-17 09:36:33.736778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.810 [2024-11-17 09:36:33.736815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.810 [2024-11-17 09:36:33.736839] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:28.810 [2024-11-17 09:36:33.737173] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.810 [2024-11-17 09:36:33.737490] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.810 [2024-11-17 09:36:33.737519] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.810 [2024-11-17 09:36:33.737539] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.810 [2024-11-17 09:36:33.737557] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.810 [2024-11-17 09:36:33.751077] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.810 [2024-11-17 09:36:33.751546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.810 [2024-11-17 09:36:33.751585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.810 [2024-11-17 09:36:33.751609] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:28.810 [2024-11-17 09:36:33.751939] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.810 [2024-11-17 09:36:33.752226] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.810 [2024-11-17 09:36:33.752253] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.810 [2024-11-17 09:36:33.752272] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.810 [2024-11-17 09:36:33.752290] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:28.810 [2024-11-17 09:36:33.765927] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.810 [2024-11-17 09:36:33.766419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.810 [2024-11-17 09:36:33.766457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.810 [2024-11-17 09:36:33.766480] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:28.810 [2024-11-17 09:36:33.766807] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.810 [2024-11-17 09:36:33.767093] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.810 [2024-11-17 09:36:33.767119] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.810 [2024-11-17 09:36:33.767138] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.810 [2024-11-17 09:36:33.767156] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.810 [2024-11-17 09:36:33.780802] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.810 [2024-11-17 09:36:33.781278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.810 [2024-11-17 09:36:33.781316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.810 [2024-11-17 09:36:33.781340] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:28.810 [2024-11-17 09:36:33.781681] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.810 [2024-11-17 09:36:33.781992] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.810 [2024-11-17 09:36:33.782018] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.810 [2024-11-17 09:36:33.782037] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.810 [2024-11-17 09:36:33.782056] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:28.810 [2024-11-17 09:36:33.795801] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.810 [2024-11-17 09:36:33.796264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.810 [2024-11-17 09:36:33.796301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.810 [2024-11-17 09:36:33.796324] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:28.810 [2024-11-17 09:36:33.796659] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.810 [2024-11-17 09:36:33.796962] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.810 [2024-11-17 09:36:33.796989] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.810 [2024-11-17 09:36:33.797007] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.810 [2024-11-17 09:36:33.797026] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.810 [2024-11-17 09:36:33.810640] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.810 [2024-11-17 09:36:33.811073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.810 [2024-11-17 09:36:33.811110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:28.810 [2024-11-17 09:36:33.811139] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:28.810 [2024-11-17 09:36:33.811493] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:28.810 [2024-11-17 09:36:33.811804] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.810 [2024-11-17 09:36:33.811831] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.810 [2024-11-17 09:36:33.811850] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.810 [2024-11-17 09:36:33.811868] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:29.070 [2024-11-17 09:36:33.825528] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.070 [2024-11-17 09:36:33.826045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.070 [2024-11-17 09:36:33.826082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:29.070 [2024-11-17 09:36:33.826106] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:29.070 [2024-11-17 09:36:33.826464] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:29.070 [2024-11-17 09:36:33.826800] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.070 [2024-11-17 09:36:33.826826] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.070 [2024-11-17 09:36:33.826845] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.070 [2024-11-17 09:36:33.826863] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:29.070 [2024-11-17 09:36:33.840493] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.070 [2024-11-17 09:36:33.840959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.070 [2024-11-17 09:36:33.840996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:29.070 [2024-11-17 09:36:33.841020] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:29.070 [2024-11-17 09:36:33.841347] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:29.070 [2024-11-17 09:36:33.841665] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.070 [2024-11-17 09:36:33.841708] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.070 [2024-11-17 09:36:33.841727] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.070 [2024-11-17 09:36:33.841746] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:29.070 [2024-11-17 09:36:33.855341] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.070 [2024-11-17 09:36:33.855889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.070 [2024-11-17 09:36:33.855926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:29.070 [2024-11-17 09:36:33.855950] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:29.070 [2024-11-17 09:36:33.856283] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:29.070 [2024-11-17 09:36:33.856628] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.070 [2024-11-17 09:36:33.856672] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.070 [2024-11-17 09:36:33.856692] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.070 [2024-11-17 09:36:33.856712] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:29.070 [2024-11-17 09:36:33.870314] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.070 [2024-11-17 09:36:33.870799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.070 [2024-11-17 09:36:33.870837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:29.070 [2024-11-17 09:36:33.870860] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:29.070 [2024-11-17 09:36:33.871194] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:29.070 [2024-11-17 09:36:33.871511] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.070 [2024-11-17 09:36:33.871540] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.070 [2024-11-17 09:36:33.871560] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.070 [2024-11-17 09:36:33.871580] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:29.070 [2024-11-17 09:36:33.885109] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.070 2171.17 IOPS, 8.48 MiB/s [2024-11-17T08:36:34.083Z] [2024-11-17 09:36:33.887224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.070 [2024-11-17 09:36:33.887262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:29.070 [2024-11-17 09:36:33.887286] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:29.070 [2024-11-17 09:36:33.887612] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:29.070 [2024-11-17 09:36:33.887918] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.070 [2024-11-17 09:36:33.887945] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.070 [2024-11-17 09:36:33.887964] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.070 [2024-11-17 09:36:33.887982] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:29.070 [2024-11-17 09:36:33.900016] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.070 [2024-11-17 09:36:33.900457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.070 [2024-11-17 09:36:33.900495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:29.070 [2024-11-17 09:36:33.900519] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:29.070 [2024-11-17 09:36:33.900846] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:29.070 [2024-11-17 09:36:33.901131] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.070 [2024-11-17 09:36:33.901172] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.070 [2024-11-17 09:36:33.901192] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.071 [2024-11-17 09:36:33.901211] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:29.071 [2024-11-17 09:36:33.914749] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.071 [2024-11-17 09:36:33.915151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.071 [2024-11-17 09:36:33.915203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:29.071 [2024-11-17 09:36:33.915227] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:29.071 [2024-11-17 09:36:33.915538] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:29.071 [2024-11-17 09:36:33.915866] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.071 [2024-11-17 09:36:33.915893] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.071 [2024-11-17 09:36:33.915912] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.071 [2024-11-17 09:36:33.915930] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:29.071 [2024-11-17 09:36:33.929471] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.071 [2024-11-17 09:36:33.929910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.071 [2024-11-17 09:36:33.929947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:29.071 [2024-11-17 09:36:33.929971] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:29.071 [2024-11-17 09:36:33.930297] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:29.071 [2024-11-17 09:36:33.930615] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.071 [2024-11-17 09:36:33.930643] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.071 [2024-11-17 09:36:33.930677] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.071 [2024-11-17 09:36:33.930697] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:29.071 [2024-11-17 09:36:33.944321] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.071 [2024-11-17 09:36:33.944760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.071 [2024-11-17 09:36:33.944797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:29.071 [2024-11-17 09:36:33.944821] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:29.071 [2024-11-17 09:36:33.945148] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:29.071 [2024-11-17 09:36:33.945465] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.071 [2024-11-17 09:36:33.945493] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.071 [2024-11-17 09:36:33.945512] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.071 [2024-11-17 09:36:33.945536] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:29.071 [2024-11-17 09:36:33.959172] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.071 [2024-11-17 09:36:33.959618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.071 [2024-11-17 09:36:33.959655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:29.071 [2024-11-17 09:36:33.959679] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:29.071 [2024-11-17 09:36:33.960007] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:29.071 [2024-11-17 09:36:33.960292] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.071 [2024-11-17 09:36:33.960319] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.071 [2024-11-17 09:36:33.960338] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.071 [2024-11-17 09:36:33.960357] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:29.071 [2024-11-17 09:36:33.974070] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.071 [2024-11-17 09:36:33.974513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.071 [2024-11-17 09:36:33.974550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:29.071 [2024-11-17 09:36:33.974574] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:29.071 [2024-11-17 09:36:33.974903] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:29.071 [2024-11-17 09:36:33.975187] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.071 [2024-11-17 09:36:33.975213] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.071 [2024-11-17 09:36:33.975233] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.071 [2024-11-17 09:36:33.975251] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:29.071 [2024-11-17 09:36:33.989027] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.071 [2024-11-17 09:36:33.989485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.071 [2024-11-17 09:36:33.989522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:29.071 [2024-11-17 09:36:33.989545] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:29.071 [2024-11-17 09:36:33.989860] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:29.071 [2024-11-17 09:36:33.990169] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.071 [2024-11-17 09:36:33.990195] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.071 [2024-11-17 09:36:33.990214] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.071 [2024-11-17 09:36:33.990233] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:29.071 [2024-11-17 09:36:34.003968] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.071 [2024-11-17 09:36:34.004426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.071 [2024-11-17 09:36:34.004473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:29.071 [2024-11-17 09:36:34.004499] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:29.071 [2024-11-17 09:36:34.004828] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:29.071 [2024-11-17 09:36:34.005114] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.071 [2024-11-17 09:36:34.005140] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.071 [2024-11-17 09:36:34.005159] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.071 [2024-11-17 09:36:34.005177] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:29.071 [2024-11-17 09:36:34.018843] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.071 [2024-11-17 09:36:34.019269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.071 [2024-11-17 09:36:34.019308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:29.071 [2024-11-17 09:36:34.019332] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:29.071 [2024-11-17 09:36:34.019658] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:29.071 [2024-11-17 09:36:34.019960] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.071 [2024-11-17 09:36:34.019986] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.071 [2024-11-17 09:36:34.020005] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.071 [2024-11-17 09:36:34.020024] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:29.071 [2024-11-17 09:36:34.033658] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.071 [2024-11-17 09:36:34.034135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.071 [2024-11-17 09:36:34.034172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:29.071 [2024-11-17 09:36:34.034195] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:29.071 [2024-11-17 09:36:34.034520] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:29.071 [2024-11-17 09:36:34.034826] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.071 [2024-11-17 09:36:34.034853] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.071 [2024-11-17 09:36:34.034872] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.071 [2024-11-17 09:36:34.034890] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:29.071 [2024-11-17 09:36:34.048586] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.071 [2024-11-17 09:36:34.049084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.071 [2024-11-17 09:36:34.049122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:29.071 [2024-11-17 09:36:34.049152] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:29.071 [2024-11-17 09:36:34.049484] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:29.072 [2024-11-17 09:36:34.049813] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.072 [2024-11-17 09:36:34.049841] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.072 [2024-11-17 09:36:34.049860] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.072 [2024-11-17 09:36:34.049880] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:29.072 09:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:29.072 09:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:36:29.072 09:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:29.072 09:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:29.072 09:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:29.072 [2024-11-17 09:36:34.063833] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.072 [2024-11-17 09:36:34.064388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.072 [2024-11-17 09:36:34.064426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:29.072 [2024-11-17 09:36:34.064451] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:29.072 [2024-11-17 09:36:34.064777] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:29.072 [2024-11-17 09:36:34.065078] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.072 [2024-11-17 09:36:34.065111] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.072 [2024-11-17 09:36:34.065129] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.072 [2024-11-17 09:36:34.065148] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:29.072 09:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:29.072 09:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:29.072 09:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.072 09:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:29.072 [2024-11-17 09:36:34.075874] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:29.072 [2024-11-17 09:36:34.079030] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.072 [2024-11-17 09:36:34.079470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.072 [2024-11-17 09:36:34.079509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:29.072 [2024-11-17 09:36:34.079533] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:29.072 [2024-11-17 09:36:34.079880] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:29.331 [2024-11-17 09:36:34.080189] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.331 [2024-11-17 09:36:34.080217] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.331 [2024-11-17 09:36:34.080244] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.331 [2024-11-17 09:36:34.080265] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
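Among the shell steps interleaved above, the harness registers a trap that ties target teardown to the shell's exit path, so cleanup runs even if the test is interrupted. A generic sketch of that idiom, using the helper names visible in the trace (anything beyond those names is a placeholder):

  # Run SHM diagnostics and tear the target down on interrupt, termination, or normal exit.
  cleanup() {
      process_shm --id "$NVMF_APP_SHM_ID" || :    # tolerate a missing or already-collected SHM id
      nvmftestfini
  }
  trap cleanup SIGINT SIGTERM EXIT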
00:36:29.331 09:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.331 09:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:29.331 09:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.331 09:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:29.331 [2024-11-17 09:36:34.094079] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.331 [2024-11-17 09:36:34.094605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.331 [2024-11-17 09:36:34.094649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:29.331 [2024-11-17 09:36:34.094676] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:29.331 [2024-11-17 09:36:34.095004] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:29.331 [2024-11-17 09:36:34.095298] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.331 [2024-11-17 09:36:34.095325] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.331 [2024-11-17 09:36:34.095344] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.331 [2024-11-17 09:36:34.095397] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:29.331 [2024-11-17 09:36:34.109026] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.331 [2024-11-17 09:36:34.109499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.331 [2024-11-17 09:36:34.109538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:29.331 [2024-11-17 09:36:34.109562] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:29.331 [2024-11-17 09:36:34.109906] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:29.331 [2024-11-17 09:36:34.110197] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.331 [2024-11-17 09:36:34.110223] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.331 [2024-11-17 09:36:34.110243] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.331 [2024-11-17 09:36:34.110277] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:29.331 [2024-11-17 09:36:34.124250] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.331 [2024-11-17 09:36:34.124908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.331 [2024-11-17 09:36:34.124955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:29.331 [2024-11-17 09:36:34.124983] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:29.331 [2024-11-17 09:36:34.125337] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:29.332 [2024-11-17 09:36:34.125668] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.332 [2024-11-17 09:36:34.125716] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.332 [2024-11-17 09:36:34.125741] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.332 [2024-11-17 09:36:34.125776] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:29.332 [2024-11-17 09:36:34.139324] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.332 [2024-11-17 09:36:34.139854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.332 [2024-11-17 09:36:34.139893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:29.332 [2024-11-17 09:36:34.139917] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:29.332 [2024-11-17 09:36:34.140252] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:29.332 [2024-11-17 09:36:34.140587] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.332 [2024-11-17 09:36:34.140616] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.332 [2024-11-17 09:36:34.140636] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.332 [2024-11-17 09:36:34.140684] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:29.332 [2024-11-17 09:36:34.154552] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.332 [2024-11-17 09:36:34.155076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.332 [2024-11-17 09:36:34.155113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:29.332 [2024-11-17 09:36:34.155146] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:29.332 [2024-11-17 09:36:34.155512] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:29.332 [2024-11-17 09:36:34.155837] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.332 [2024-11-17 09:36:34.155864] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.332 [2024-11-17 09:36:34.155883] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.332 [2024-11-17 09:36:34.155901] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:29.332 [2024-11-17 09:36:34.169547] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.332 [2024-11-17 09:36:34.170082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.332 [2024-11-17 09:36:34.170120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:29.332 [2024-11-17 09:36:34.170143] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:29.332 [2024-11-17 09:36:34.170511] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:29.332 [2024-11-17 09:36:34.170822] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.332 [2024-11-17 09:36:34.170850] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.332 [2024-11-17 09:36:34.170869] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.332 [2024-11-17 09:36:34.170892] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:29.332 Malloc0 00:36:29.332 09:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.332 09:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:29.332 09:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.332 09:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:29.332 [2024-11-17 09:36:34.184749] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.332 [2024-11-17 09:36:34.185233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.332 [2024-11-17 09:36:34.185274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:29.332 [2024-11-17 09:36:34.185300] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:36:29.332 [2024-11-17 09:36:34.185621] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:29.332 [2024-11-17 09:36:34.185954] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.332 [2024-11-17 09:36:34.185988] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.332 [2024-11-17 09:36:34.186024] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.332 [2024-11-17 09:36:34.186060] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:29.332 09:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.332 09:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:29.332 09:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.332 09:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:29.332 09:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.332 09:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:29.332 09:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.332 09:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:29.332 [2024-11-17 09:36:34.198451] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:29.332 [2024-11-17 09:36:34.200174] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.332 09:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.332 09:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3137315 00:36:29.591 [2024-11-17 09:36:34.400937] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
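Interleaved with the reconnect noise above, the harness brings up the bdevperf target: it creates the TCP transport, a 64 MB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1, adds the namespace, and finally the 10.0.0.2:4420 listener — at which point the pending controller reset succeeds. The same sequence expressed directly against scripts/rpc.py is sketched below; the rpc_cmd wrapper and any RPC socket-path defaults are harness details assumed here:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420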
00:36:31.124 2276.00 IOPS, 8.89 MiB/s [2024-11-17T08:36:37.133Z] 2763.25 IOPS, 10.79 MiB/s [2024-11-17T08:36:38.068Z] 3142.67 IOPS, 12.28 MiB/s [2024-11-17T08:36:39.002Z] 3444.80 IOPS, 13.46 MiB/s [2024-11-17T08:36:39.937Z] 3691.82 IOPS, 14.42 MiB/s [2024-11-17T08:36:41.314Z] 3892.33 IOPS, 15.20 MiB/s [2024-11-17T08:36:42.250Z] 4057.31 IOPS, 15.85 MiB/s [2024-11-17T08:36:43.186Z] 4207.86 IOPS, 16.44 MiB/s [2024-11-17T08:36:43.186Z] 4340.27 IOPS, 16.95 MiB/s 00:36:38.173 Latency(us) 00:36:38.173 [2024-11-17T08:36:43.186Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:38.173 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:36:38.173 Verification LBA range: start 0x0 length 0x4000 00:36:38.173 Nvme1n1 : 15.01 4343.62 16.97 8132.31 0.00 10227.09 1231.83 37476.88 00:36:38.173 [2024-11-17T08:36:43.186Z] =================================================================================================================== 00:36:38.173 [2024-11-17T08:36:43.186Z] Total : 4343.62 16.97 8132.31 0.00 10227.09 1231.83 37476.88 00:36:39.108 09:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:36:39.108 09:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:39.108 09:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.108 09:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:39.108 09:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.108 09:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:36:39.108 09:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:36:39.108 09:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:39.108 09:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:36:39.108 09:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:39.108 09:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:36:39.108 09:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:39.108 09:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:39.108 rmmod nvme_tcp 00:36:39.108 rmmod nvme_fabrics 00:36:39.108 rmmod nvme_keyring 00:36:39.108 09:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:39.108 09:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:36:39.108 09:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:36:39.108 09:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 3138008 ']' 00:36:39.108 09:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 3138008 00:36:39.108 09:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 3138008 ']' 00:36:39.108 09:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 3138008 00:36:39.108 09:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:36:39.108 09:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:39.108 09:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3138008 
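The summary table above reports the bdevperf run (core mask 0x1, verify workload, queue depth 128, 4096-byte IOs, ~15 s runtime) at 4343.62 IOPS. The MiB/s column is just IOPS times IO size; a quick check, plus the rough shape of such a bdevperf invocation — the exact NVMe-oF bdev wiring is handled by host/bdevperf.sh and only sketched here with a hypothetical config file:

  # MiB/s = IOPS * io_size / 2^20 -> prints 16.97, matching the table
  awk 'BEGIN { printf "%.2f MiB/s\n", 4343.62 * 4096 / 1048576 }'
  # Approximate bdevperf parameters behind the summary; bdevperf.json is a hypothetical
  # config that would attach an NVMe bdev to 10.0.0.2:4420 (bdev_nvme_attach_controller).
  ./build/examples/bdevperf -m 0x1 -q 128 -o 4096 -w verify -t 15 --json bdevperf.json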
00:36:39.108 09:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:39.108 09:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:39.108 09:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3138008' 00:36:39.108 killing process with pid 3138008 00:36:39.108 09:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 3138008 00:36:39.108 09:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 3138008 00:36:40.484 09:36:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:40.484 09:36:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:40.484 09:36:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:40.484 09:36:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:36:40.484 09:36:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:36:40.484 09:36:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:40.484 09:36:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:36:40.484 09:36:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:40.484 09:36:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:40.484 09:36:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:40.484 09:36:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:40.484 09:36:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:42.387 09:36:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:42.387 00:36:42.387 real 0m26.514s 00:36:42.387 user 1m12.969s 00:36:42.387 sys 0m4.532s 00:36:42.387 09:36:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:42.387 09:36:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:42.387 ************************************ 00:36:42.387 END TEST nvmf_bdevperf 00:36:42.387 ************************************ 00:36:42.387 09:36:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:36:42.387 09:36:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:36:42.387 09:36:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:42.387 09:36:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.387 ************************************ 00:36:42.387 START TEST nvmf_target_disconnect 00:36:42.387 ************************************ 00:36:42.387 09:36:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:36:42.388 * Looking for test storage... 
00:36:42.388 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:42.388 09:36:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:42.388 09:36:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:36:42.388 09:36:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:42.388 09:36:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:42.388 09:36:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:42.388 09:36:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:42.388 09:36:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:42.388 09:36:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:36:42.388 09:36:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:36:42.388 09:36:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:36:42.388 09:36:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:36:42.388 09:36:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:36:42.388 09:36:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:36:42.388 09:36:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:36:42.388 09:36:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:42.388 09:36:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:36:42.388 09:36:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:36:42.388 09:36:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:42.388 09:36:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:42.388 09:36:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:36:42.388 09:36:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:36:42.388 09:36:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:42.388 09:36:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:36:42.388 09:36:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:36:42.388 09:36:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:36:42.388 09:36:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:36:42.388 09:36:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:42.388 09:36:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:36:42.388 09:36:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:36:42.388 09:36:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:42.388 09:36:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:42.388 09:36:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:36:42.388 09:36:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:42.388 09:36:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:42.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:42.388 --rc genhtml_branch_coverage=1 00:36:42.388 --rc genhtml_function_coverage=1 00:36:42.388 --rc genhtml_legend=1 00:36:42.388 --rc geninfo_all_blocks=1 00:36:42.388 --rc geninfo_unexecuted_blocks=1 00:36:42.388 00:36:42.388 ' 00:36:42.388 09:36:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:42.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:42.388 --rc genhtml_branch_coverage=1 00:36:42.388 --rc genhtml_function_coverage=1 00:36:42.388 --rc genhtml_legend=1 00:36:42.388 --rc geninfo_all_blocks=1 00:36:42.388 --rc geninfo_unexecuted_blocks=1 00:36:42.388 00:36:42.388 ' 00:36:42.388 09:36:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:42.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:42.388 --rc genhtml_branch_coverage=1 00:36:42.388 --rc genhtml_function_coverage=1 00:36:42.388 --rc genhtml_legend=1 00:36:42.388 --rc geninfo_all_blocks=1 00:36:42.388 --rc geninfo_unexecuted_blocks=1 00:36:42.388 00:36:42.388 ' 00:36:42.388 09:36:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:42.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:42.388 --rc genhtml_branch_coverage=1 00:36:42.388 --rc genhtml_function_coverage=1 00:36:42.388 --rc genhtml_legend=1 00:36:42.388 --rc geninfo_all_blocks=1 00:36:42.388 --rc geninfo_unexecuted_blocks=1 00:36:42.388 00:36:42.388 ' 00:36:42.388 09:36:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:42.388 09:36:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:36:42.388 09:36:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:42.388 09:36:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:42.388 09:36:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:42.388 09:36:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:42.388 09:36:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:42.388 09:36:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:42.388 09:36:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:42.388 09:36:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:42.388 09:36:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:42.388 09:36:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:42.388 09:36:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:42.388 09:36:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:42.388 09:36:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:42.388 09:36:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:42.388 09:36:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:42.388 09:36:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:42.388 09:36:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:42.388 09:36:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:36:42.388 09:36:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:42.388 09:36:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:42.388 09:36:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:42.388 09:36:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:42.388 09:36:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:42.388 09:36:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:42.388 09:36:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:36:42.388 09:36:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:42.388 09:36:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:36:42.388 09:36:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:42.388 09:36:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:42.388 09:36:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:42.388 09:36:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:42.388 09:36:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:42.388 09:36:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:42.388 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:42.388 09:36:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:42.388 09:36:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:42.388 09:36:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:42.389 09:36:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:36:42.389 09:36:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:36:42.389 09:36:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:36:42.389 09:36:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:36:42.389 09:36:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:42.389 09:36:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:42.389 09:36:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:42.389 09:36:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:42.389 09:36:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:42.389 09:36:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:42.389 09:36:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:42.389 09:36:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:42.389 09:36:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:42.389 09:36:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:42.389 09:36:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:36:42.389 09:36:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:44.920 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:44.920 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:36:44.920 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:44.920 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:44.920 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:44.920 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:44.920 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:44.920 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:36:44.920 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:44.920 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:36:44.920 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:36:44.920 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:36:44.920 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:36:44.920 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:36:44.920 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:36:44.920 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:44.920 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:44.920 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:44.920 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:44.920 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:44.920 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:44.920 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:44.920 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:44.920 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:44.920 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:44.920 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:44.920 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:44.920 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:44.920 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:44.920 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:44.920 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:44.920 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:44.920 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:44.920 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:44.920 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:36:44.920 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:36:44.920 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:44.920 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:44.920 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:44.920 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:44.921 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:44.921 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:44.921 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:36:44.921 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:36:44.921 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:44.921 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:44.921 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:36:44.921 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:44.921 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:44.921 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:44.921 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:44.921 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:44.921 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:44.921 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:44.921 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:44.921 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:44.921 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:44.921 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:44.921 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:44.921 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:36:44.921 Found net devices under 0000:0a:00.0: cvl_0_0 00:36:44.921 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:44.921 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:44.921 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:44.921 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:44.921 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:44.921 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:44.921 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:44.921 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:44.921 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:44.921 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:44.921 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:44.921 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:44.921 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:36:44.921 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:44.921 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:44.921 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:44.921 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
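For readers following the trace, nvmf_tcp_init (nvmf/common.sh) is about to wire the two E810 ports found above into a back-to-back test topology: cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace and acts as the target side at 10.0.0.2/24, while cvl_0_1 stays in the default namespace as the initiator side at 10.0.0.1/24. A minimal standalone sketch of that setup follows; the cvl_0_0/cvl_0_1 interface names are specific to this test bed and will differ on other hosts.

# Sketch of the topology nvmf_tcp_init builds in this run (interface names assumed from this log).
TARGET_NS=cvl_0_0_ns_spdk
ip netns add "$TARGET_NS"
ip link set cvl_0_0 netns "$TARGET_NS"                        # target-side port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator side, default namespace
ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$TARGET_NS" ip link set cvl_0_0 up
ip netns exec "$TARGET_NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # admit NVMe/TCP traffic to the listener port
ping -c 1 10.0.0.2                                            # initiator -> target sanity check
ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1                 # target -> initiator sanity check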
00:36:44.921 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:44.921 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:44.921 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:44.921 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:44.921 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:44.921 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:44.921 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:44.921 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:44.921 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:44.921 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:44.921 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:44.921 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:44.921 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:44.921 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:44.921 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:44.921 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:44.921 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:44.921 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:44.921 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:44.921 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:44.921 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:44.921 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:44.921 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:44.921 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.363 ms 00:36:44.921 00:36:44.921 --- 10.0.0.2 ping statistics --- 00:36:44.921 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:44.921 rtt min/avg/max/mdev = 0.363/0.363/0.363/0.000 ms 00:36:44.921 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:44.921 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:44.921 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:36:44.921 00:36:44.921 --- 10.0.0.1 ping statistics --- 00:36:44.921 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:44.921 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:36:44.921 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:44.921 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:36:44.921 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:44.921 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:44.921 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:44.921 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:44.921 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:44.921 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:44.921 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:44.921 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:36:44.921 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:44.921 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:44.921 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:44.921 ************************************ 00:36:44.921 START TEST nvmf_target_disconnect_tc1 00:36:44.921 ************************************ 00:36:44.921 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:36:44.921 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:44.921 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:36:44.921 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:44.921 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:44.921 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:44.921 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:44.921 09:36:49 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:44.921 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:44.921 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:44.921 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:44.921 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:36:44.921 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:44.921 [2024-11-17 09:36:49.853249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.921 [2024-11-17 09:36:49.853376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2280 with addr=10.0.0.2, port=4420 00:36:44.921 [2024-11-17 09:36:49.853471] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:36:44.921 [2024-11-17 09:36:49.853504] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:36:44.921 [2024-11-17 09:36:49.853530] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:36:44.921 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:36:44.921 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:36:44.921 Initializing NVMe Controllers 00:36:44.921 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:36:44.922 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:44.922 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:44.922 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:44.922 00:36:44.922 real 0m0.231s 00:36:44.922 user 0m0.106s 00:36:44.922 sys 0m0.121s 00:36:44.922 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:44.922 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:36:44.922 ************************************ 00:36:44.922 END TEST nvmf_target_disconnect_tc1 00:36:44.922 ************************************ 00:36:44.922 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:36:44.922 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:44.922 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:36:44.922 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:45.180 ************************************ 00:36:45.180 START TEST nvmf_target_disconnect_tc2 00:36:45.180 ************************************ 00:36:45.180 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:36:45.180 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:36:45.180 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:36:45.180 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:45.180 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:45.180 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:45.180 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3141398 00:36:45.180 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:36:45.180 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3141398 00:36:45.180 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3141398 ']' 00:36:45.180 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:45.180 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:45.180 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:45.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:45.180 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:45.180 09:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:45.181 [2024-11-17 09:36:50.041541] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:36:45.181 [2024-11-17 09:36:50.041711] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:45.439 [2024-11-17 09:36:50.194612] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:45.439 [2024-11-17 09:36:50.316653] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:45.439 [2024-11-17 09:36:50.316752] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:36:45.439 [2024-11-17 09:36:50.316775] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:45.439 [2024-11-17 09:36:50.316794] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:45.439 [2024-11-17 09:36:50.316809] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:45.439 [2024-11-17 09:36:50.319266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:36:45.439 [2024-11-17 09:36:50.319359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:36:45.439 [2024-11-17 09:36:50.319444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:36:45.439 [2024-11-17 09:36:50.319447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:36:46.374 09:36:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:46.374 09:36:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:36:46.374 09:36:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:46.374 09:36:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:46.374 09:36:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:46.374 09:36:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:46.374 09:36:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:46.374 09:36:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:46.374 09:36:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:46.374 Malloc0 00:36:46.374 09:36:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:46.374 09:36:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:36:46.374 09:36:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:46.374 09:36:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:46.374 [2024-11-17 09:36:51.141893] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:46.374 09:36:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:46.374 09:36:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:46.374 09:36:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:46.374 09:36:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:46.374 09:36:51 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:46.375 09:36:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:46.375 09:36:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:46.375 09:36:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:46.375 09:36:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:46.375 09:36:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:46.375 09:36:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:46.375 09:36:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:46.375 [2024-11-17 09:36:51.172029] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:46.375 09:36:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:46.375 09:36:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:46.375 09:36:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:46.375 09:36:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:46.375 09:36:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:46.375 09:36:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3141587 00:36:46.375 09:36:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:36:46.375 09:36:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:48.285 09:36:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3141398 00:36:48.285 09:36:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:36:48.285 Read completed with error (sct=0, sc=8) 00:36:48.285 starting I/O failed 00:36:48.285 Read completed with error (sct=0, sc=8) 00:36:48.285 starting I/O failed 00:36:48.285 Read completed with error (sct=0, sc=8) 00:36:48.285 starting I/O failed 00:36:48.285 Read completed with error (sct=0, sc=8) 00:36:48.285 starting I/O failed 00:36:48.285 Read completed with error (sct=0, sc=8) 00:36:48.285 starting I/O failed 00:36:48.285 Read completed with error (sct=0, sc=8) 00:36:48.285 starting I/O failed 00:36:48.285 Read completed with error 
(sct=0, sc=8) 00:36:48.285 starting I/O failed 00:36:48.285 Write completed with error (sct=0, sc=8) 00:36:48.285 starting I/O failed 00:36:48.285 Read completed with error (sct=0, sc=8) 00:36:48.285 starting I/O failed 00:36:48.285 Write completed with error (sct=0, sc=8) 00:36:48.285 starting I/O failed 00:36:48.285 Write completed with error (sct=0, sc=8) 00:36:48.285 starting I/O failed 00:36:48.285 Read completed with error (sct=0, sc=8) 00:36:48.285 starting I/O failed 00:36:48.285 Write completed with error (sct=0, sc=8) 00:36:48.285 starting I/O failed 00:36:48.285 Write completed with error (sct=0, sc=8) 00:36:48.285 starting I/O failed 00:36:48.285 Write completed with error (sct=0, sc=8) 00:36:48.285 starting I/O failed 00:36:48.285 Write completed with error (sct=0, sc=8) 00:36:48.285 starting I/O failed 00:36:48.285 Write completed with error (sct=0, sc=8) 00:36:48.285 starting I/O failed 00:36:48.285 Write completed with error (sct=0, sc=8) 00:36:48.285 starting I/O failed 00:36:48.285 Read completed with error (sct=0, sc=8) 00:36:48.285 starting I/O failed 00:36:48.285 Read completed with error (sct=0, sc=8) 00:36:48.285 starting I/O failed 00:36:48.285 Read completed with error (sct=0, sc=8) 00:36:48.285 starting I/O failed 00:36:48.285 Write completed with error (sct=0, sc=8) 00:36:48.285 starting I/O failed 00:36:48.285 Write completed with error (sct=0, sc=8) 00:36:48.285 starting I/O failed 00:36:48.285 Read completed with error (sct=0, sc=8) 00:36:48.285 starting I/O failed 00:36:48.285 Read completed with error (sct=0, sc=8) 00:36:48.285 starting I/O failed 00:36:48.285 Write completed with error (sct=0, sc=8) 00:36:48.285 starting I/O failed 00:36:48.285 Write completed with error (sct=0, sc=8) 00:36:48.285 starting I/O failed 00:36:48.285 Read completed with error (sct=0, sc=8) 00:36:48.285 starting I/O failed 00:36:48.285 Read completed with error (sct=0, sc=8) 00:36:48.285 starting I/O failed 00:36:48.285 Read completed with error (sct=0, sc=8) 00:36:48.285 starting I/O failed 00:36:48.285 Write completed with error (sct=0, sc=8) 00:36:48.285 starting I/O failed 00:36:48.285 Read completed with error (sct=0, sc=8) 00:36:48.285 starting I/O failed 00:36:48.285 Read completed with error (sct=0, sc=8) 00:36:48.285 starting I/O failed 00:36:48.285 Read completed with error (sct=0, sc=8) 00:36:48.285 starting I/O failed 00:36:48.285 Read completed with error (sct=0, sc=8) 00:36:48.285 starting I/O failed 00:36:48.285 Read completed with error (sct=0, sc=8) 00:36:48.285 starting I/O failed 00:36:48.285 Read completed with error (sct=0, sc=8) 00:36:48.285 starting I/O failed 00:36:48.285 Write completed with error (sct=0, sc=8) 00:36:48.285 starting I/O failed 00:36:48.285 [2024-11-17 09:36:53.210109] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:48.285 Write completed with error (sct=0, sc=8) 00:36:48.285 starting I/O failed 00:36:48.285 Write completed with error (sct=0, sc=8) 00:36:48.285 starting I/O failed 00:36:48.285 Read completed with error (sct=0, sc=8) 00:36:48.285 starting I/O failed 00:36:48.285 Read completed with error (sct=0, sc=8) 00:36:48.285 starting I/O failed 00:36:48.285 Write completed with error (sct=0, sc=8) 00:36:48.285 starting I/O failed 00:36:48.285 Read completed with error (sct=0, sc=8) 00:36:48.285 starting I/O failed 00:36:48.285 Read completed with error (sct=0, sc=8) 00:36:48.285 starting I/O failed 00:36:48.285 Read 
completed with error (sct=0, sc=8) 00:36:48.285 starting I/O failed 00:36:48.285 Read completed with error (sct=0, sc=8) 00:36:48.285 starting I/O failed 00:36:48.285 Write completed with error (sct=0, sc=8) 00:36:48.285 starting I/O failed 00:36:48.285 Write completed with error (sct=0, sc=8) 00:36:48.285 starting I/O failed 00:36:48.285 Write completed with error (sct=0, sc=8) 00:36:48.285 starting I/O failed 00:36:48.285 Write completed with error (sct=0, sc=8) 00:36:48.285 starting I/O failed 00:36:48.285 Write completed with error (sct=0, sc=8) 00:36:48.285 starting I/O failed 00:36:48.285 Write completed with error (sct=0, sc=8) 00:36:48.285 starting I/O failed 00:36:48.285 Read completed with error (sct=0, sc=8) 00:36:48.285 starting I/O failed 00:36:48.285 Read completed with error (sct=0, sc=8) 00:36:48.285 starting I/O failed 00:36:48.285 Read completed with error (sct=0, sc=8) 00:36:48.285 starting I/O failed 00:36:48.285 Read completed with error (sct=0, sc=8) 00:36:48.285 starting I/O failed 00:36:48.285 Write completed with error (sct=0, sc=8) 00:36:48.285 starting I/O failed 00:36:48.285 Write completed with error (sct=0, sc=8) 00:36:48.285 starting I/O failed 00:36:48.285 Write completed with error (sct=0, sc=8) 00:36:48.285 starting I/O failed 00:36:48.285 Read completed with error (sct=0, sc=8) 00:36:48.285 starting I/O failed 00:36:48.285 Read completed with error (sct=0, sc=8) 00:36:48.285 starting I/O failed 00:36:48.285 Write completed with error (sct=0, sc=8) 00:36:48.285 starting I/O failed 00:36:48.285 Write completed with error (sct=0, sc=8) 00:36:48.285 starting I/O failed 00:36:48.285 [2024-11-17 09:36:53.210822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:48.285 Read completed with error (sct=0, sc=8) 00:36:48.285 starting I/O failed 00:36:48.285 Read completed with error (sct=0, sc=8) 00:36:48.285 starting I/O failed 00:36:48.285 Read completed with error (sct=0, sc=8) 00:36:48.285 starting I/O failed 00:36:48.285 Read completed with error (sct=0, sc=8) 00:36:48.285 starting I/O failed 00:36:48.285 Read completed with error (sct=0, sc=8) 00:36:48.285 starting I/O failed 00:36:48.285 Read completed with error (sct=0, sc=8) 00:36:48.285 starting I/O failed 00:36:48.285 Write completed with error (sct=0, sc=8) 00:36:48.285 starting I/O failed 00:36:48.285 Read completed with error (sct=0, sc=8) 00:36:48.285 starting I/O failed 00:36:48.285 Write completed with error (sct=0, sc=8) 00:36:48.285 starting I/O failed 00:36:48.285 Read completed with error (sct=0, sc=8) 00:36:48.285 starting I/O failed 00:36:48.285 Write completed with error (sct=0, sc=8) 00:36:48.285 starting I/O failed 00:36:48.285 Write completed with error (sct=0, sc=8) 00:36:48.285 starting I/O failed 00:36:48.285 Write completed with error (sct=0, sc=8) 00:36:48.285 starting I/O failed 00:36:48.285 Write completed with error (sct=0, sc=8) 00:36:48.285 starting I/O failed 00:36:48.285 Write completed with error (sct=0, sc=8) 00:36:48.285 starting I/O failed 00:36:48.285 Read completed with error (sct=0, sc=8) 00:36:48.285 starting I/O failed 00:36:48.285 Write completed with error (sct=0, sc=8) 00:36:48.285 starting I/O failed 00:36:48.285 Write completed with error (sct=0, sc=8) 00:36:48.285 starting I/O failed 00:36:48.285 Read completed with error (sct=0, sc=8) 00:36:48.285 starting I/O failed 00:36:48.285 Write completed with error (sct=0, sc=8) 00:36:48.285 starting I/O 
failed 00:36:48.285 Write completed with error (sct=0, sc=8) 00:36:48.285 starting I/O failed 00:36:48.285 Write completed with error (sct=0, sc=8) 00:36:48.285 starting I/O failed 00:36:48.285 Read completed with error (sct=0, sc=8) 00:36:48.285 starting I/O failed 00:36:48.285 Write completed with error (sct=0, sc=8) 00:36:48.285 starting I/O failed 00:36:48.285 Read completed with error (sct=0, sc=8) 00:36:48.285 starting I/O failed 00:36:48.286 Read completed with error (sct=0, sc=8) 00:36:48.286 starting I/O failed 00:36:48.286 Write completed with error (sct=0, sc=8) 00:36:48.286 starting I/O failed 00:36:48.286 Write completed with error (sct=0, sc=8) 00:36:48.286 starting I/O failed 00:36:48.286 Read completed with error (sct=0, sc=8) 00:36:48.286 starting I/O failed 00:36:48.286 Read completed with error (sct=0, sc=8) 00:36:48.286 starting I/O failed 00:36:48.286 Write completed with error (sct=0, sc=8) 00:36:48.286 starting I/O failed 00:36:48.286 Read completed with error (sct=0, sc=8) 00:36:48.286 starting I/O failed 00:36:48.286 [2024-11-17 09:36:53.211565] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:48.286 Read completed with error (sct=0, sc=8) 00:36:48.286 starting I/O failed 00:36:48.286 Read completed with error (sct=0, sc=8) 00:36:48.286 starting I/O failed 00:36:48.286 Read completed with error (sct=0, sc=8) 00:36:48.286 starting I/O failed 00:36:48.286 Read completed with error (sct=0, sc=8) 00:36:48.286 starting I/O failed 00:36:48.286 Read completed with error (sct=0, sc=8) 00:36:48.286 starting I/O failed 00:36:48.286 Read completed with error (sct=0, sc=8) 00:36:48.286 starting I/O failed 00:36:48.286 Read completed with error (sct=0, sc=8) 00:36:48.286 starting I/O failed 00:36:48.286 Read completed with error (sct=0, sc=8) 00:36:48.286 starting I/O failed 00:36:48.286 Read completed with error (sct=0, sc=8) 00:36:48.286 starting I/O failed 00:36:48.286 Read completed with error (sct=0, sc=8) 00:36:48.286 starting I/O failed 00:36:48.286 Read completed with error (sct=0, sc=8) 00:36:48.286 starting I/O failed 00:36:48.286 Read completed with error (sct=0, sc=8) 00:36:48.286 starting I/O failed 00:36:48.286 Read completed with error (sct=0, sc=8) 00:36:48.286 starting I/O failed 00:36:48.286 Read completed with error (sct=0, sc=8) 00:36:48.286 starting I/O failed 00:36:48.286 Read completed with error (sct=0, sc=8) 00:36:48.286 starting I/O failed 00:36:48.286 Read completed with error (sct=0, sc=8) 00:36:48.286 starting I/O failed 00:36:48.286 Write completed with error (sct=0, sc=8) 00:36:48.286 starting I/O failed 00:36:48.286 Read completed with error (sct=0, sc=8) 00:36:48.286 starting I/O failed 00:36:48.286 Read completed with error (sct=0, sc=8) 00:36:48.286 starting I/O failed 00:36:48.286 Write completed with error (sct=0, sc=8) 00:36:48.286 starting I/O failed 00:36:48.286 Read completed with error (sct=0, sc=8) 00:36:48.286 starting I/O failed 00:36:48.286 Read completed with error (sct=0, sc=8) 00:36:48.286 starting I/O failed 00:36:48.286 Read completed with error (sct=0, sc=8) 00:36:48.286 starting I/O failed 00:36:48.286 Read completed with error (sct=0, sc=8) 00:36:48.286 starting I/O failed 00:36:48.286 Write completed with error (sct=0, sc=8) 00:36:48.286 starting I/O failed 00:36:48.286 Write completed with error (sct=0, sc=8) 00:36:48.286 starting I/O failed 00:36:48.286 Read completed with error (sct=0, sc=8) 00:36:48.286 
starting I/O failed 00:36:48.286 Read completed with error (sct=0, sc=8) 00:36:48.286 starting I/O failed 00:36:48.286 Write completed with error (sct=0, sc=8) 00:36:48.286 starting I/O failed 00:36:48.286 Read completed with error (sct=0, sc=8) 00:36:48.286 starting I/O failed 00:36:48.286 Write completed with error (sct=0, sc=8) 00:36:48.286 starting I/O failed 00:36:48.286 Read completed with error (sct=0, sc=8) 00:36:48.286 starting I/O failed 00:36:48.286 [2024-11-17 09:36:53.212188] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:48.286 [2024-11-17 09:36:53.212430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.286 [2024-11-17 09:36:53.212485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.286 qpair failed and we were unable to recover it. 00:36:48.286 [2024-11-17 09:36:53.212630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.286 [2024-11-17 09:36:53.212666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.286 qpair failed and we were unable to recover it. 00:36:48.286 [2024-11-17 09:36:53.212821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.286 [2024-11-17 09:36:53.212857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.286 qpair failed and we were unable to recover it. 00:36:48.286 [2024-11-17 09:36:53.213000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.286 [2024-11-17 09:36:53.213035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.286 qpair failed and we were unable to recover it. 00:36:48.286 [2024-11-17 09:36:53.213173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.286 [2024-11-17 09:36:53.213207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.286 qpair failed and we were unable to recover it. 00:36:48.286 [2024-11-17 09:36:53.213337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.286 [2024-11-17 09:36:53.213377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.286 qpair failed and we were unable to recover it. 00:36:48.286 [2024-11-17 09:36:53.213517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.286 [2024-11-17 09:36:53.213567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.286 qpair failed and we were unable to recover it. 00:36:48.286 [2024-11-17 09:36:53.213780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.286 [2024-11-17 09:36:53.213831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.286 qpair failed and we were unable to recover it. 
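By this point nvmf_target_disconnect_tc2 has started nvmf_tgt (pid 3141398) inside the cvl_0_0_ns_spdk namespace on core mask 0xF0, configured it over RPC, launched the reconnect example (pid 3141587) against 10.0.0.2:4420, and then issued kill -9 against the target; the repeated 'completed with error (sct=0, sc=8) ... starting I/O failed' and 'qpair failed and we were unable to recover it' lines are the expected fallout of removing the target mid-I/O. A rough standalone sketch of the same bring-up, using scripts/rpc.py directly rather than the test framework's rpc_cmd wrapper (relative paths and the default RPC socket are assumptions of this sketch, not part of the log):

# Start the target in the namespace with the same flags as in the trace above.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
TGT_PID=$!
# (wait for the target's RPC socket, /var/tmp/spdk.sock by default, before issuing RPCs)

./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0          # 64 MiB bdev, 512 B blocks
./scripts/rpc.py nvmf_create_transport -t tcp -o
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# Drive I/O from the initiator side, then hard-kill the target to provoke the
# disconnect handling being exercised here.
./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
sleep 2
kill -9 "$TGT_PID"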
00:36:48.286 [2024-11-17 09:36:53.213980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.286 [2024-11-17 09:36:53.214016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.286 qpair failed and we were unable to recover it. 00:36:48.286 [2024-11-17 09:36:53.214201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.286 [2024-11-17 09:36:53.214239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.286 qpair failed and we were unable to recover it. 00:36:48.286 [2024-11-17 09:36:53.214424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.286 [2024-11-17 09:36:53.214458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.286 qpair failed and we were unable to recover it. 00:36:48.286 [2024-11-17 09:36:53.214604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.286 [2024-11-17 09:36:53.214637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.286 qpair failed and we were unable to recover it. 00:36:48.286 [2024-11-17 09:36:53.214758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.286 [2024-11-17 09:36:53.214791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.286 qpair failed and we were unable to recover it. 00:36:48.286 [2024-11-17 09:36:53.214925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.286 [2024-11-17 09:36:53.214964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.286 qpair failed and we were unable to recover it. 00:36:48.286 [2024-11-17 09:36:53.215149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.286 [2024-11-17 09:36:53.215186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.286 qpair failed and we were unable to recover it. 00:36:48.286 [2024-11-17 09:36:53.215336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.286 [2024-11-17 09:36:53.215397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.286 qpair failed and we were unable to recover it. 00:36:48.286 [2024-11-17 09:36:53.215554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.286 [2024-11-17 09:36:53.215602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.286 qpair failed and we were unable to recover it. 00:36:48.286 [2024-11-17 09:36:53.215764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.286 [2024-11-17 09:36:53.215814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.286 qpair failed and we were unable to recover it. 
00:36:48.286 [2024-11-17 09:36:53.215999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.286 [2024-11-17 09:36:53.216033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.286 qpair failed and we were unable to recover it. 00:36:48.286 [2024-11-17 09:36:53.216197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.286 [2024-11-17 09:36:53.216231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.286 qpair failed and we were unable to recover it. 00:36:48.286 [2024-11-17 09:36:53.216364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.286 [2024-11-17 09:36:53.216415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.286 qpair failed and we were unable to recover it. 00:36:48.286 [2024-11-17 09:36:53.216555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.286 [2024-11-17 09:36:53.216589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.286 qpair failed and we were unable to recover it. 00:36:48.286 [2024-11-17 09:36:53.216784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.286 [2024-11-17 09:36:53.216819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.286 qpair failed and we were unable to recover it. 00:36:48.286 [2024-11-17 09:36:53.216960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.286 [2024-11-17 09:36:53.216998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.287 qpair failed and we were unable to recover it. 00:36:48.287 [2024-11-17 09:36:53.217247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.287 [2024-11-17 09:36:53.217280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.287 qpair failed and we were unable to recover it. 00:36:48.287 [2024-11-17 09:36:53.217433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.287 [2024-11-17 09:36:53.217467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.287 qpair failed and we were unable to recover it. 00:36:48.287 [2024-11-17 09:36:53.217579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.287 [2024-11-17 09:36:53.217613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.287 qpair failed and we were unable to recover it. 00:36:48.287 [2024-11-17 09:36:53.217784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.287 [2024-11-17 09:36:53.217833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.287 qpair failed and we were unable to recover it. 
00:36:48.287 [2024-11-17 09:36:53.218000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.287 [2024-11-17 09:36:53.218035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.287 qpair failed and we were unable to recover it. 00:36:48.287 [2024-11-17 09:36:53.218290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.287 [2024-11-17 09:36:53.218323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.287 qpair failed and we were unable to recover it. 00:36:48.287 [2024-11-17 09:36:53.218451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.287 [2024-11-17 09:36:53.218495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.287 qpair failed and we were unable to recover it. 00:36:48.287 [2024-11-17 09:36:53.218614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.287 [2024-11-17 09:36:53.218648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.287 qpair failed and we were unable to recover it. 00:36:48.287 [2024-11-17 09:36:53.218803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.287 [2024-11-17 09:36:53.218839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.287 qpair failed and we were unable to recover it. 00:36:48.287 [2024-11-17 09:36:53.218987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.287 [2024-11-17 09:36:53.219038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.287 qpair failed and we were unable to recover it. 00:36:48.287 [2024-11-17 09:36:53.219159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.287 [2024-11-17 09:36:53.219196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.287 qpair failed and we were unable to recover it. 00:36:48.287 [2024-11-17 09:36:53.219410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.287 [2024-11-17 09:36:53.219445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.287 qpair failed and we were unable to recover it. 00:36:48.287 [2024-11-17 09:36:53.219555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.287 [2024-11-17 09:36:53.219590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.287 qpair failed and we were unable to recover it. 00:36:48.287 [2024-11-17 09:36:53.219721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.287 [2024-11-17 09:36:53.219758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.287 qpair failed and we were unable to recover it. 
00:36:48.287 [2024-11-17 09:36:53.219959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.287 [2024-11-17 09:36:53.219997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.287 qpair failed and we were unable to recover it. 00:36:48.287 [2024-11-17 09:36:53.220141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.287 [2024-11-17 09:36:53.220180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.287 qpair failed and we were unable to recover it. 00:36:48.287 [2024-11-17 09:36:53.220379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.287 [2024-11-17 09:36:53.220436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.287 qpair failed and we were unable to recover it. 00:36:48.287 [2024-11-17 09:36:53.220571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.287 [2024-11-17 09:36:53.220606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.287 qpair failed and we were unable to recover it. 00:36:48.287 [2024-11-17 09:36:53.220768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.287 [2024-11-17 09:36:53.220803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.287 qpair failed and we were unable to recover it. 00:36:48.287 [2024-11-17 09:36:53.220996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.287 [2024-11-17 09:36:53.221031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.287 qpair failed and we were unable to recover it. 00:36:48.287 [2024-11-17 09:36:53.221171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.287 [2024-11-17 09:36:53.221205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.287 qpair failed and we were unable to recover it. 00:36:48.287 [2024-11-17 09:36:53.221380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.287 [2024-11-17 09:36:53.221415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.287 qpair failed and we were unable to recover it. 00:36:48.287 [2024-11-17 09:36:53.221524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.287 [2024-11-17 09:36:53.221558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.287 qpair failed and we were unable to recover it. 00:36:48.287 [2024-11-17 09:36:53.221670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.287 [2024-11-17 09:36:53.221723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.287 qpair failed and we were unable to recover it. 
00:36:48.287 [2024-11-17 09:36:53.221839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.287 [2024-11-17 09:36:53.221876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.287 qpair failed and we were unable to recover it. 00:36:48.287 [2024-11-17 09:36:53.222023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.287 [2024-11-17 09:36:53.222085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.287 qpair failed and we were unable to recover it. 00:36:48.287 [2024-11-17 09:36:53.222267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.287 [2024-11-17 09:36:53.222304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.287 qpair failed and we were unable to recover it. 00:36:48.287 [2024-11-17 09:36:53.222452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.287 [2024-11-17 09:36:53.222487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.287 qpair failed and we were unable to recover it. 00:36:48.287 [2024-11-17 09:36:53.222615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.287 [2024-11-17 09:36:53.222664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.287 qpair failed and we were unable to recover it. 00:36:48.287 [2024-11-17 09:36:53.222819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.287 [2024-11-17 09:36:53.222861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.287 qpair failed and we were unable to recover it. 00:36:48.287 [2024-11-17 09:36:53.223021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.287 [2024-11-17 09:36:53.223056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.287 qpair failed and we were unable to recover it. 00:36:48.287 [2024-11-17 09:36:53.223160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.287 [2024-11-17 09:36:53.223195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.287 qpair failed and we were unable to recover it. 00:36:48.287 [2024-11-17 09:36:53.223339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.287 [2024-11-17 09:36:53.223379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.287 qpair failed and we were unable to recover it. 00:36:48.287 [2024-11-17 09:36:53.223533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.287 [2024-11-17 09:36:53.223567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.287 qpair failed and we were unable to recover it. 
00:36:48.287 [2024-11-17 09:36:53.223715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.287 [2024-11-17 09:36:53.223751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.287 qpair failed and we were unable to recover it. 00:36:48.287 [2024-11-17 09:36:53.223922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.287 [2024-11-17 09:36:53.223955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.287 qpair failed and we were unable to recover it. 00:36:48.287 [2024-11-17 09:36:53.224066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.287 [2024-11-17 09:36:53.224099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.287 qpair failed and we were unable to recover it. 00:36:48.288 [2024-11-17 09:36:53.224276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.288 [2024-11-17 09:36:53.224313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.288 qpair failed and we were unable to recover it. 00:36:48.288 [2024-11-17 09:36:53.224458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.288 [2024-11-17 09:36:53.224498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.288 qpair failed and we were unable to recover it. 00:36:48.288 [2024-11-17 09:36:53.224637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.288 [2024-11-17 09:36:53.224671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.288 qpair failed and we were unable to recover it. 00:36:48.288 [2024-11-17 09:36:53.224803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.288 [2024-11-17 09:36:53.224836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.288 qpair failed and we were unable to recover it. 00:36:48.288 [2024-11-17 09:36:53.226456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.288 [2024-11-17 09:36:53.226504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.288 qpair failed and we were unable to recover it. 00:36:48.288 [2024-11-17 09:36:53.226654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.288 [2024-11-17 09:36:53.226687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.288 qpair failed and we were unable to recover it. 00:36:48.288 [2024-11-17 09:36:53.226822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.288 [2024-11-17 09:36:53.226856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.288 qpair failed and we were unable to recover it. 
00:36:48.288 [2024-11-17 09:36:53.226995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.288 [2024-11-17 09:36:53.227028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.288 qpair failed and we were unable to recover it. 00:36:48.288 [2024-11-17 09:36:53.227148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.288 [2024-11-17 09:36:53.227181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.288 qpair failed and we were unable to recover it. 00:36:48.288 [2024-11-17 09:36:53.227365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.288 [2024-11-17 09:36:53.227427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.288 qpair failed and we were unable to recover it. 00:36:48.288 [2024-11-17 09:36:53.227564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.288 [2024-11-17 09:36:53.227600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.288 qpair failed and we were unable to recover it. 00:36:48.288 [2024-11-17 09:36:53.227717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.288 [2024-11-17 09:36:53.227752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.288 qpair failed and we were unable to recover it. 00:36:48.288 [2024-11-17 09:36:53.227887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.288 [2024-11-17 09:36:53.227921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.288 qpair failed and we were unable to recover it. 00:36:48.288 [2024-11-17 09:36:53.228056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.288 [2024-11-17 09:36:53.228091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.288 qpair failed and we were unable to recover it. 00:36:48.288 [2024-11-17 09:36:53.228216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.288 [2024-11-17 09:36:53.228264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.288 qpair failed and we were unable to recover it. 00:36:48.288 [2024-11-17 09:36:53.228445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.288 [2024-11-17 09:36:53.228488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.288 qpair failed and we were unable to recover it. 00:36:48.288 [2024-11-17 09:36:53.228617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.288 [2024-11-17 09:36:53.228650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.288 qpair failed and we were unable to recover it. 
00:36:48.288 [2024-11-17 09:36:53.228816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.288 [2024-11-17 09:36:53.228849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.288 qpair failed and we were unable to recover it. 00:36:48.288 [2024-11-17 09:36:53.229015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.288 [2024-11-17 09:36:53.229048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.288 qpair failed and we were unable to recover it. 00:36:48.288 [2024-11-17 09:36:53.229178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.288 [2024-11-17 09:36:53.229211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.288 qpair failed and we were unable to recover it. 00:36:48.288 [2024-11-17 09:36:53.229308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.288 [2024-11-17 09:36:53.229341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.288 qpair failed and we were unable to recover it. 00:36:48.288 [2024-11-17 09:36:53.229506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.288 [2024-11-17 09:36:53.229555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.288 qpair failed and we were unable to recover it. 00:36:48.288 [2024-11-17 09:36:53.229706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.288 [2024-11-17 09:36:53.229743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.288 qpair failed and we were unable to recover it. 00:36:48.288 [2024-11-17 09:36:53.229881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.288 [2024-11-17 09:36:53.229917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.288 qpair failed and we were unable to recover it. 00:36:48.288 [2024-11-17 09:36:53.230053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.288 [2024-11-17 09:36:53.230087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.288 qpair failed and we were unable to recover it. 00:36:48.288 [2024-11-17 09:36:53.230255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.288 [2024-11-17 09:36:53.230288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.288 qpair failed and we were unable to recover it. 00:36:48.288 [2024-11-17 09:36:53.230454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.288 [2024-11-17 09:36:53.230496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.288 qpair failed and we were unable to recover it. 
00:36:48.288 [2024-11-17 09:36:53.230606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.288 [2024-11-17 09:36:53.230640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.288 qpair failed and we were unable to recover it. 00:36:48.288 [2024-11-17 09:36:53.230798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.288 [2024-11-17 09:36:53.230832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.288 qpair failed and we were unable to recover it. 00:36:48.288 [2024-11-17 09:36:53.230973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.288 [2024-11-17 09:36:53.231006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.288 qpair failed and we were unable to recover it. 00:36:48.288 [2024-11-17 09:36:53.231172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.288 [2024-11-17 09:36:53.231208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.288 qpair failed and we were unable to recover it. 00:36:48.288 [2024-11-17 09:36:53.231340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.288 [2024-11-17 09:36:53.231380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.288 qpair failed and we were unable to recover it. 00:36:48.288 [2024-11-17 09:36:53.231575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.288 [2024-11-17 09:36:53.231629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.288 qpair failed and we were unable to recover it. 00:36:48.288 [2024-11-17 09:36:53.231779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.288 [2024-11-17 09:36:53.231816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.288 qpair failed and we were unable to recover it. 00:36:48.288 [2024-11-17 09:36:53.231944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.288 [2024-11-17 09:36:53.231980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.288 qpair failed and we were unable to recover it. 00:36:48.288 [2024-11-17 09:36:53.232111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.288 [2024-11-17 09:36:53.232145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.288 qpair failed and we were unable to recover it. 00:36:48.288 [2024-11-17 09:36:53.232285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.288 [2024-11-17 09:36:53.232319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.288 qpair failed and we were unable to recover it. 
00:36:48.288 [2024-11-17 09:36:53.232459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.288 [2024-11-17 09:36:53.232493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.288 qpair failed and we were unable to recover it. 00:36:48.289 [2024-11-17 09:36:53.232594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.289 [2024-11-17 09:36:53.232627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.289 qpair failed and we were unable to recover it. 00:36:48.289 [2024-11-17 09:36:53.232813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.289 [2024-11-17 09:36:53.232847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.289 qpair failed and we were unable to recover it. 00:36:48.289 [2024-11-17 09:36:53.232982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.289 [2024-11-17 09:36:53.233016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.289 qpair failed and we were unable to recover it. 00:36:48.289 [2024-11-17 09:36:53.233143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.289 [2024-11-17 09:36:53.233176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.289 qpair failed and we were unable to recover it. 00:36:48.289 [2024-11-17 09:36:53.233349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.289 [2024-11-17 09:36:53.233389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.289 qpair failed and we were unable to recover it. 00:36:48.289 [2024-11-17 09:36:53.233538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.289 [2024-11-17 09:36:53.233571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.289 qpair failed and we were unable to recover it. 00:36:48.289 [2024-11-17 09:36:53.233756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.289 [2024-11-17 09:36:53.233795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.289 qpair failed and we were unable to recover it. 00:36:48.289 [2024-11-17 09:36:53.233964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.289 [2024-11-17 09:36:53.233999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.289 qpair failed and we were unable to recover it. 00:36:48.289 [2024-11-17 09:36:53.234115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.289 [2024-11-17 09:36:53.234148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.289 qpair failed and we were unable to recover it. 
00:36:48.289 [2024-11-17 09:36:53.234308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.289 [2024-11-17 09:36:53.234342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.289 qpair failed and we were unable to recover it. 00:36:48.289 [2024-11-17 09:36:53.234508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.289 [2024-11-17 09:36:53.234543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.289 qpair failed and we were unable to recover it. 00:36:48.289 [2024-11-17 09:36:53.234703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.289 [2024-11-17 09:36:53.234738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.289 qpair failed and we were unable to recover it. 00:36:48.289 [2024-11-17 09:36:53.234898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.289 [2024-11-17 09:36:53.234932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.289 qpair failed and we were unable to recover it. 00:36:48.289 [2024-11-17 09:36:53.235062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.289 [2024-11-17 09:36:53.235095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.289 qpair failed and we were unable to recover it. 00:36:48.289 [2024-11-17 09:36:53.235215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.289 [2024-11-17 09:36:53.235249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.289 qpair failed and we were unable to recover it. 00:36:48.289 [2024-11-17 09:36:53.235446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.289 [2024-11-17 09:36:53.235494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.289 qpair failed and we were unable to recover it. 00:36:48.289 [2024-11-17 09:36:53.235654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.289 [2024-11-17 09:36:53.235702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.289 qpair failed and we were unable to recover it. 00:36:48.289 [2024-11-17 09:36:53.235850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.289 [2024-11-17 09:36:53.235885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.289 qpair failed and we were unable to recover it. 00:36:48.289 [2024-11-17 09:36:53.236001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.289 [2024-11-17 09:36:53.236035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.289 qpair failed and we were unable to recover it. 
00:36:48.289 [2024-11-17 09:36:53.236194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.289 [2024-11-17 09:36:53.236228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.289 qpair failed and we were unable to recover it. 00:36:48.289 [2024-11-17 09:36:53.236384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.289 [2024-11-17 09:36:53.236425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.289 qpair failed and we were unable to recover it. 00:36:48.289 [2024-11-17 09:36:53.236573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.289 [2024-11-17 09:36:53.236608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.289 qpair failed and we were unable to recover it. 00:36:48.289 [2024-11-17 09:36:53.236770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.289 [2024-11-17 09:36:53.236804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.289 qpair failed and we were unable to recover it. 00:36:48.289 [2024-11-17 09:36:53.236964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.289 [2024-11-17 09:36:53.236997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.289 qpair failed and we were unable to recover it. 00:36:48.289 [2024-11-17 09:36:53.237140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.289 [2024-11-17 09:36:53.237175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.289 qpair failed and we were unable to recover it. 00:36:48.289 [2024-11-17 09:36:53.237283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.289 [2024-11-17 09:36:53.237318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.289 qpair failed and we were unable to recover it. 00:36:48.289 [2024-11-17 09:36:53.237462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.289 [2024-11-17 09:36:53.237498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.289 qpair failed and we were unable to recover it. 00:36:48.289 [2024-11-17 09:36:53.237606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.289 [2024-11-17 09:36:53.237641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.289 qpair failed and we were unable to recover it. 00:36:48.289 [2024-11-17 09:36:53.237786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.289 [2024-11-17 09:36:53.237820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.289 qpair failed and we were unable to recover it. 
00:36:48.289 [2024-11-17 09:36:53.237960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.289 [2024-11-17 09:36:53.237994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.289 qpair failed and we were unable to recover it. 00:36:48.289 [2024-11-17 09:36:53.238154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.289 [2024-11-17 09:36:53.238189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.289 qpair failed and we were unable to recover it. 00:36:48.290 [2024-11-17 09:36:53.238352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.290 [2024-11-17 09:36:53.238392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.290 qpair failed and we were unable to recover it. 00:36:48.290 [2024-11-17 09:36:53.238547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.290 [2024-11-17 09:36:53.238583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.290 qpair failed and we were unable to recover it. 00:36:48.290 [2024-11-17 09:36:53.238734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.290 [2024-11-17 09:36:53.238770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.290 qpair failed and we were unable to recover it. 00:36:48.290 [2024-11-17 09:36:53.238879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.290 [2024-11-17 09:36:53.238918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.290 qpair failed and we were unable to recover it. 00:36:48.290 [2024-11-17 09:36:53.239052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.290 [2024-11-17 09:36:53.239085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.290 qpair failed and we were unable to recover it. 00:36:48.290 [2024-11-17 09:36:53.239210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.290 [2024-11-17 09:36:53.239244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.290 qpair failed and we were unable to recover it. 00:36:48.290 [2024-11-17 09:36:53.239414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.290 [2024-11-17 09:36:53.239463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.290 qpair failed and we were unable to recover it. 00:36:48.290 [2024-11-17 09:36:53.239585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.290 [2024-11-17 09:36:53.239621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.290 qpair failed and we were unable to recover it. 
00:36:48.290 [2024-11-17 09:36:53.239765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.290 [2024-11-17 09:36:53.239800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.290 qpair failed and we were unable to recover it. 00:36:48.290 [2024-11-17 09:36:53.239934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.290 [2024-11-17 09:36:53.239968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.290 qpair failed and we were unable to recover it. 00:36:48.290 [2024-11-17 09:36:53.240103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.290 [2024-11-17 09:36:53.240136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.290 qpair failed and we were unable to recover it. 00:36:48.290 [2024-11-17 09:36:53.240267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.290 [2024-11-17 09:36:53.240300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.290 qpair failed and we were unable to recover it. 00:36:48.290 [2024-11-17 09:36:53.240418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.290 [2024-11-17 09:36:53.240453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.290 qpair failed and we were unable to recover it. 00:36:48.290 [2024-11-17 09:36:53.240589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.290 [2024-11-17 09:36:53.240622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.290 qpair failed and we were unable to recover it. 00:36:48.290 [2024-11-17 09:36:53.240751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.290 [2024-11-17 09:36:53.240785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.290 qpair failed and we were unable to recover it. 00:36:48.290 [2024-11-17 09:36:53.240924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.290 [2024-11-17 09:36:53.240958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.290 qpair failed and we were unable to recover it. 00:36:48.290 [2024-11-17 09:36:53.241067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.290 [2024-11-17 09:36:53.241100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.290 qpair failed and we were unable to recover it. 00:36:48.290 [2024-11-17 09:36:53.241273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.290 [2024-11-17 09:36:53.241307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.290 qpair failed and we were unable to recover it. 
00:36:48.290 [2024-11-17 09:36:53.241488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.290 [2024-11-17 09:36:53.241523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.290 qpair failed and we were unable to recover it. 00:36:48.290 [2024-11-17 09:36:53.241664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.290 [2024-11-17 09:36:53.241697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.290 qpair failed and we were unable to recover it. 00:36:48.290 [2024-11-17 09:36:53.241826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.290 [2024-11-17 09:36:53.241860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.290 qpair failed and we were unable to recover it. 00:36:48.290 [2024-11-17 09:36:53.242014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.290 [2024-11-17 09:36:53.242047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.290 qpair failed and we were unable to recover it. 00:36:48.290 [2024-11-17 09:36:53.242208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.290 [2024-11-17 09:36:53.242241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.290 qpair failed and we were unable to recover it. 00:36:48.290 [2024-11-17 09:36:53.242398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.290 [2024-11-17 09:36:53.242432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.290 qpair failed and we were unable to recover it. 00:36:48.290 [2024-11-17 09:36:53.242537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.290 [2024-11-17 09:36:53.242570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.290 qpair failed and we were unable to recover it. 00:36:48.290 [2024-11-17 09:36:53.242685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.290 [2024-11-17 09:36:53.242718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.290 qpair failed and we were unable to recover it. 00:36:48.290 [2024-11-17 09:36:53.242855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.290 [2024-11-17 09:36:53.242889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.290 qpair failed and we were unable to recover it. 00:36:48.290 [2024-11-17 09:36:53.243017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.290 [2024-11-17 09:36:53.243050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.290 qpair failed and we were unable to recover it. 
00:36:48.290 [2024-11-17 09:36:53.243150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.290 [2024-11-17 09:36:53.243184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.290 qpair failed and we were unable to recover it. 00:36:48.290 [2024-11-17 09:36:53.243339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.290 [2024-11-17 09:36:53.243405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.290 qpair failed and we were unable to recover it. 00:36:48.290 [2024-11-17 09:36:53.243531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.290 [2024-11-17 09:36:53.243567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.290 qpair failed and we were unable to recover it. 00:36:48.290 [2024-11-17 09:36:53.243733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.290 [2024-11-17 09:36:53.243767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.290 qpair failed and we were unable to recover it. 00:36:48.290 [2024-11-17 09:36:53.243931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.290 [2024-11-17 09:36:53.243964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.290 qpair failed and we were unable to recover it. 00:36:48.290 [2024-11-17 09:36:53.244108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.290 [2024-11-17 09:36:53.244143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.290 qpair failed and we were unable to recover it. 00:36:48.290 [2024-11-17 09:36:53.244280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.290 [2024-11-17 09:36:53.244313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.290 qpair failed and we were unable to recover it. 00:36:48.290 [2024-11-17 09:36:53.244438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.290 [2024-11-17 09:36:53.244472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.290 qpair failed and we were unable to recover it. 00:36:48.290 [2024-11-17 09:36:53.244661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.290 [2024-11-17 09:36:53.244695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.290 qpair failed and we were unable to recover it. 00:36:48.290 [2024-11-17 09:36:53.244804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.290 [2024-11-17 09:36:53.244838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.291 qpair failed and we were unable to recover it. 
00:36:48.291 [2024-11-17 09:36:53.244983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.291 [2024-11-17 09:36:53.245017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.291 qpair failed and we were unable to recover it. 00:36:48.291 [2024-11-17 09:36:53.245152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.291 [2024-11-17 09:36:53.245185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.291 qpair failed and we were unable to recover it. 00:36:48.291 [2024-11-17 09:36:53.245300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.291 [2024-11-17 09:36:53.245334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.291 qpair failed and we were unable to recover it. 00:36:48.291 [2024-11-17 09:36:53.245457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.291 [2024-11-17 09:36:53.245492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.291 qpair failed and we were unable to recover it. 00:36:48.291 [2024-11-17 09:36:53.245599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.291 [2024-11-17 09:36:53.245634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.291 qpair failed and we were unable to recover it. 00:36:48.291 [2024-11-17 09:36:53.245775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.291 [2024-11-17 09:36:53.245813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.291 qpair failed and we were unable to recover it. 00:36:48.291 [2024-11-17 09:36:53.245943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.291 [2024-11-17 09:36:53.245977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.291 qpair failed and we were unable to recover it. 00:36:48.291 [2024-11-17 09:36:53.246136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.291 [2024-11-17 09:36:53.246169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.291 qpair failed and we were unable to recover it. 00:36:48.291 [2024-11-17 09:36:53.246292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.291 [2024-11-17 09:36:53.246324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.291 qpair failed and we were unable to recover it. 00:36:48.291 [2024-11-17 09:36:53.246447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.291 [2024-11-17 09:36:53.246492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.291 qpair failed and we were unable to recover it. 
00:36:48.291 [2024-11-17 09:36:53.246594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.291 [2024-11-17 09:36:53.246627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.291 qpair failed and we were unable to recover it. 00:36:48.291 [2024-11-17 09:36:53.246727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.291 [2024-11-17 09:36:53.246760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.291 qpair failed and we were unable to recover it. 00:36:48.291 [2024-11-17 09:36:53.246900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.291 [2024-11-17 09:36:53.246934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.291 qpair failed and we were unable to recover it. 00:36:48.291 [2024-11-17 09:36:53.247042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.291 [2024-11-17 09:36:53.247075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.291 qpair failed and we were unable to recover it. 00:36:48.291 [2024-11-17 09:36:53.247184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.291 [2024-11-17 09:36:53.247217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.291 qpair failed and we were unable to recover it. 00:36:48.291 [2024-11-17 09:36:53.247348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.291 [2024-11-17 09:36:53.247389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.291 qpair failed and we were unable to recover it. 00:36:48.291 [2024-11-17 09:36:53.247514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.291 [2024-11-17 09:36:53.247564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.291 qpair failed and we were unable to recover it. 00:36:48.291 [2024-11-17 09:36:53.247690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.291 [2024-11-17 09:36:53.247729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.291 qpair failed and we were unable to recover it. 00:36:48.291 [2024-11-17 09:36:53.247874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.291 [2024-11-17 09:36:53.247910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.291 qpair failed and we were unable to recover it. 00:36:48.291 [2024-11-17 09:36:53.248060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.291 [2024-11-17 09:36:53.248094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.291 qpair failed and we were unable to recover it. 
00:36:48.291 [2024-11-17 09:36:53.248207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.291 [2024-11-17 09:36:53.248241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.291 qpair failed and we were unable to recover it. 00:36:48.291 [2024-11-17 09:36:53.248381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.291 [2024-11-17 09:36:53.248427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.291 qpair failed and we were unable to recover it. 00:36:48.291 [2024-11-17 09:36:53.248597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.291 [2024-11-17 09:36:53.248630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.291 qpair failed and we were unable to recover it. 00:36:48.291 [2024-11-17 09:36:53.248796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.291 [2024-11-17 09:36:53.248830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.291 qpair failed and we were unable to recover it. 00:36:48.291 [2024-11-17 09:36:53.248933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.291 [2024-11-17 09:36:53.248966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.291 qpair failed and we were unable to recover it. 00:36:48.291 [2024-11-17 09:36:53.249102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.291 [2024-11-17 09:36:53.249135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.291 qpair failed and we were unable to recover it. 00:36:48.291 [2024-11-17 09:36:53.249266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.291 [2024-11-17 09:36:53.249299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.291 qpair failed and we were unable to recover it. 00:36:48.291 [2024-11-17 09:36:53.249458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.291 [2024-11-17 09:36:53.249492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.291 qpair failed and we were unable to recover it. 00:36:48.291 [2024-11-17 09:36:53.249626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.291 [2024-11-17 09:36:53.249663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.291 qpair failed and we were unable to recover it. 00:36:48.291 [2024-11-17 09:36:53.249826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.291 [2024-11-17 09:36:53.249860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.291 qpair failed and we were unable to recover it. 
00:36:48.291 [2024-11-17 09:36:53.249993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.291 [2024-11-17 09:36:53.250028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.291 qpair failed and we were unable to recover it. 00:36:48.291 [2024-11-17 09:36:53.250196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.291 [2024-11-17 09:36:53.250230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.291 qpair failed and we were unable to recover it. 00:36:48.291 [2024-11-17 09:36:53.250390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.291 [2024-11-17 09:36:53.250425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.291 qpair failed and we were unable to recover it. 00:36:48.291 [2024-11-17 09:36:53.250570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.291 [2024-11-17 09:36:53.250604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.291 qpair failed and we were unable to recover it. 00:36:48.291 [2024-11-17 09:36:53.250747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.291 [2024-11-17 09:36:53.250781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.291 qpair failed and we were unable to recover it. 00:36:48.291 [2024-11-17 09:36:53.250886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.291 [2024-11-17 09:36:53.250920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.291 qpair failed and we were unable to recover it. 00:36:48.291 [2024-11-17 09:36:53.251059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.291 [2024-11-17 09:36:53.251092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.291 qpair failed and we were unable to recover it. 00:36:48.291 [2024-11-17 09:36:53.251230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.292 [2024-11-17 09:36:53.251263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.292 qpair failed and we were unable to recover it. 00:36:48.292 [2024-11-17 09:36:53.251569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.292 [2024-11-17 09:36:53.251603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.292 qpair failed and we were unable to recover it. 00:36:48.292 [2024-11-17 09:36:53.251708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.292 [2024-11-17 09:36:53.251741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.292 qpair failed and we were unable to recover it. 
00:36:48.292 [2024-11-17 09:36:53.251857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.292 [2024-11-17 09:36:53.251892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.292 qpair failed and we were unable to recover it. 00:36:48.292 [2024-11-17 09:36:53.252037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.292 [2024-11-17 09:36:53.252071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.292 qpair failed and we were unable to recover it. 00:36:48.292 [2024-11-17 09:36:53.252204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.292 [2024-11-17 09:36:53.252238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.292 qpair failed and we were unable to recover it. 00:36:48.292 [2024-11-17 09:36:53.252402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.292 [2024-11-17 09:36:53.252441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.292 qpair failed and we were unable to recover it. 00:36:48.292 [2024-11-17 09:36:53.252616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.292 [2024-11-17 09:36:53.252650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.292 qpair failed and we were unable to recover it. 00:36:48.292 [2024-11-17 09:36:53.252792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.292 [2024-11-17 09:36:53.252832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.292 qpair failed and we were unable to recover it. 00:36:48.292 [2024-11-17 09:36:53.252966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.292 [2024-11-17 09:36:53.253000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.292 qpair failed and we were unable to recover it. 00:36:48.292 [2024-11-17 09:36:53.253139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.292 [2024-11-17 09:36:53.253172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.292 qpair failed and we were unable to recover it. 00:36:48.292 [2024-11-17 09:36:53.253336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.292 [2024-11-17 09:36:53.253375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.292 qpair failed and we were unable to recover it. 00:36:48.292 [2024-11-17 09:36:53.253524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.292 [2024-11-17 09:36:53.253558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.292 qpair failed and we were unable to recover it. 
00:36:48.292 [2024-11-17 09:36:53.253716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.292 [2024-11-17 09:36:53.253749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.292 qpair failed and we were unable to recover it. 00:36:48.292 [2024-11-17 09:36:53.253896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.292 [2024-11-17 09:36:53.253929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.292 qpair failed and we were unable to recover it. 00:36:48.292 [2024-11-17 09:36:53.254039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.292 [2024-11-17 09:36:53.254075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.292 qpair failed and we were unable to recover it. 00:36:48.292 [2024-11-17 09:36:53.254210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.292 [2024-11-17 09:36:53.254245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.292 qpair failed and we were unable to recover it. 00:36:48.292 [2024-11-17 09:36:53.254381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.292 [2024-11-17 09:36:53.254423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.292 qpair failed and we were unable to recover it. 00:36:48.292 [2024-11-17 09:36:53.254579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.292 [2024-11-17 09:36:53.254613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.292 qpair failed and we were unable to recover it. 00:36:48.292 [2024-11-17 09:36:53.254719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.292 [2024-11-17 09:36:53.254754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.292 qpair failed and we were unable to recover it. 00:36:48.292 [2024-11-17 09:36:53.254894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.292 [2024-11-17 09:36:53.254928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.292 qpair failed and we were unable to recover it. 00:36:48.292 [2024-11-17 09:36:53.255033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.292 [2024-11-17 09:36:53.255067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.292 qpair failed and we were unable to recover it. 00:36:48.292 [2024-11-17 09:36:53.255183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.292 [2024-11-17 09:36:53.255216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.292 qpair failed and we were unable to recover it. 
00:36:48.292 [2024-11-17 09:36:53.255354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.292 [2024-11-17 09:36:53.255394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.292 qpair failed and we were unable to recover it. 00:36:48.292 [2024-11-17 09:36:53.255571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.292 [2024-11-17 09:36:53.255605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.292 qpair failed and we were unable to recover it. 00:36:48.292 [2024-11-17 09:36:53.255712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.292 [2024-11-17 09:36:53.255745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.292 qpair failed and we were unable to recover it. 00:36:48.292 [2024-11-17 09:36:53.255872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.292 [2024-11-17 09:36:53.255905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.292 qpair failed and we were unable to recover it. 00:36:48.292 [2024-11-17 09:36:53.256010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.292 [2024-11-17 09:36:53.256044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.292 qpair failed and we were unable to recover it. 00:36:48.292 [2024-11-17 09:36:53.256199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.292 [2024-11-17 09:36:53.256235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.292 qpair failed and we were unable to recover it. 00:36:48.292 [2024-11-17 09:36:53.256430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.292 [2024-11-17 09:36:53.256489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.292 qpair failed and we were unable to recover it. 00:36:48.292 [2024-11-17 09:36:53.256643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.292 [2024-11-17 09:36:53.256678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.292 qpair failed and we were unable to recover it. 00:36:48.292 [2024-11-17 09:36:53.256785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.292 [2024-11-17 09:36:53.256821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.292 qpair failed and we were unable to recover it. 00:36:48.292 [2024-11-17 09:36:53.256986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.292 [2024-11-17 09:36:53.257020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.292 qpair failed and we were unable to recover it. 
00:36:48.292 [2024-11-17 09:36:53.257125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.292 [2024-11-17 09:36:53.257160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.292 qpair failed and we were unable to recover it. 00:36:48.292 [2024-11-17 09:36:53.257311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.292 [2024-11-17 09:36:53.257344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.292 qpair failed and we were unable to recover it. 00:36:48.292 [2024-11-17 09:36:53.257485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.292 [2024-11-17 09:36:53.257519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.292 qpair failed and we were unable to recover it. 00:36:48.292 [2024-11-17 09:36:53.257656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.292 [2024-11-17 09:36:53.257690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.292 qpair failed and we were unable to recover it. 00:36:48.292 [2024-11-17 09:36:53.257801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.292 [2024-11-17 09:36:53.257833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.292 qpair failed and we were unable to recover it. 00:36:48.293 [2024-11-17 09:36:53.257968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.293 [2024-11-17 09:36:53.258016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.293 qpair failed and we were unable to recover it. 00:36:48.293 [2024-11-17 09:36:53.258169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.293 [2024-11-17 09:36:53.258205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.293 qpair failed and we were unable to recover it. 00:36:48.293 [2024-11-17 09:36:53.258380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.293 [2024-11-17 09:36:53.258424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.293 qpair failed and we were unable to recover it. 00:36:48.293 [2024-11-17 09:36:53.258593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.293 [2024-11-17 09:36:53.258627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.293 qpair failed and we were unable to recover it. 00:36:48.293 [2024-11-17 09:36:53.258791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.293 [2024-11-17 09:36:53.258824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.293 qpair failed and we were unable to recover it. 
00:36:48.293 [2024-11-17 09:36:53.259002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.293 [2024-11-17 09:36:53.259036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.293 qpair failed and we were unable to recover it. 00:36:48.293 [2024-11-17 09:36:53.259174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.293 [2024-11-17 09:36:53.259210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.293 qpair failed and we were unable to recover it. 00:36:48.293 [2024-11-17 09:36:53.259376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.293 [2024-11-17 09:36:53.259433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.293 qpair failed and we were unable to recover it. 00:36:48.293 [2024-11-17 09:36:53.259588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.293 [2024-11-17 09:36:53.259624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.293 qpair failed and we were unable to recover it. 00:36:48.293 [2024-11-17 09:36:53.259774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.293 [2024-11-17 09:36:53.259810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.293 qpair failed and we were unable to recover it. 00:36:48.293 [2024-11-17 09:36:53.259918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.293 [2024-11-17 09:36:53.259958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.293 qpair failed and we were unable to recover it. 00:36:48.293 [2024-11-17 09:36:53.260088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.293 [2024-11-17 09:36:53.260122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.293 qpair failed and we were unable to recover it. 00:36:48.293 [2024-11-17 09:36:53.260255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.293 [2024-11-17 09:36:53.260289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.293 qpair failed and we were unable to recover it. 00:36:48.293 [2024-11-17 09:36:53.260445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.293 [2024-11-17 09:36:53.260493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.293 qpair failed and we were unable to recover it. 00:36:48.293 [2024-11-17 09:36:53.260648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.293 [2024-11-17 09:36:53.260684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.293 qpair failed and we were unable to recover it. 
00:36:48.293 [2024-11-17 09:36:53.260824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.293 [2024-11-17 09:36:53.260859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.293 qpair failed and we were unable to recover it. 00:36:48.293 [2024-11-17 09:36:53.261004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.293 [2024-11-17 09:36:53.261037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.293 qpair failed and we were unable to recover it. 00:36:48.293 [2024-11-17 09:36:53.261133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.293 [2024-11-17 09:36:53.261166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.293 qpair failed and we were unable to recover it. 00:36:48.293 [2024-11-17 09:36:53.261319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.293 [2024-11-17 09:36:53.261388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.293 qpair failed and we were unable to recover it. 00:36:48.293 [2024-11-17 09:36:53.261543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.293 [2024-11-17 09:36:53.261579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.293 qpair failed and we were unable to recover it. 00:36:48.293 [2024-11-17 09:36:53.261751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.293 [2024-11-17 09:36:53.261786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.293 qpair failed and we were unable to recover it. 00:36:48.293 [2024-11-17 09:36:53.261920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.293 [2024-11-17 09:36:53.261954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.293 qpair failed and we were unable to recover it. 00:36:48.293 [2024-11-17 09:36:53.262062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.293 [2024-11-17 09:36:53.262097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.293 qpair failed and we were unable to recover it. 00:36:48.293 [2024-11-17 09:36:53.262232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.293 [2024-11-17 09:36:53.262266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.293 qpair failed and we were unable to recover it. 00:36:48.293 [2024-11-17 09:36:53.262446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.293 [2024-11-17 09:36:53.262482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.293 qpair failed and we were unable to recover it. 
00:36:48.293 [2024-11-17 09:36:53.262643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.293 [2024-11-17 09:36:53.262680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.293 qpair failed and we were unable to recover it. 00:36:48.293 [2024-11-17 09:36:53.262818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.293 [2024-11-17 09:36:53.262852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.293 qpair failed and we were unable to recover it. 00:36:48.293 [2024-11-17 09:36:53.262959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.293 [2024-11-17 09:36:53.262994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.293 qpair failed and we were unable to recover it. 00:36:48.293 [2024-11-17 09:36:53.263157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.293 [2024-11-17 09:36:53.263191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.293 qpair failed and we were unable to recover it. 00:36:48.293 [2024-11-17 09:36:53.263358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.293 [2024-11-17 09:36:53.263399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.293 qpair failed and we were unable to recover it. 00:36:48.293 [2024-11-17 09:36:53.263544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.293 [2024-11-17 09:36:53.263579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.293 qpair failed and we were unable to recover it. 00:36:48.293 [2024-11-17 09:36:53.263698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.293 [2024-11-17 09:36:53.263747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.293 qpair failed and we were unable to recover it. 00:36:48.293 [2024-11-17 09:36:53.263877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.293 [2024-11-17 09:36:53.263914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.293 qpair failed and we were unable to recover it. 00:36:48.293 [2024-11-17 09:36:53.264059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.293 [2024-11-17 09:36:53.264094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.293 qpair failed and we were unable to recover it. 00:36:48.293 [2024-11-17 09:36:53.264231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.293 [2024-11-17 09:36:53.264265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.293 qpair failed and we were unable to recover it. 
00:36:48.293 [2024-11-17 09:36:53.264381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.293 [2024-11-17 09:36:53.264427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.293 qpair failed and we were unable to recover it. 00:36:48.293 [2024-11-17 09:36:53.264576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.293 [2024-11-17 09:36:53.264609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.293 qpair failed and we were unable to recover it. 00:36:48.293 [2024-11-17 09:36:53.264721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.294 [2024-11-17 09:36:53.264755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.294 qpair failed and we were unable to recover it. 00:36:48.294 [2024-11-17 09:36:53.264893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.294 [2024-11-17 09:36:53.264926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.294 qpair failed and we were unable to recover it. 00:36:48.294 [2024-11-17 09:36:53.265026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.294 [2024-11-17 09:36:53.265059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.294 qpair failed and we were unable to recover it. 00:36:48.294 [2024-11-17 09:36:53.265204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.294 [2024-11-17 09:36:53.265238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.294 qpair failed and we were unable to recover it. 00:36:48.294 [2024-11-17 09:36:53.265379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.294 [2024-11-17 09:36:53.265420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.294 qpair failed and we were unable to recover it. 00:36:48.294 [2024-11-17 09:36:53.265600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.294 [2024-11-17 09:36:53.265635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.294 qpair failed and we were unable to recover it. 00:36:48.294 [2024-11-17 09:36:53.265750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.294 [2024-11-17 09:36:53.265784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.294 qpair failed and we were unable to recover it. 00:36:48.294 [2024-11-17 09:36:53.265922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.294 [2024-11-17 09:36:53.265955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.294 qpair failed and we were unable to recover it. 
00:36:48.294 [2024-11-17 09:36:53.266094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.294 [2024-11-17 09:36:53.266128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.294 qpair failed and we were unable to recover it. 00:36:48.294 [2024-11-17 09:36:53.266264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.294 [2024-11-17 09:36:53.266298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.294 qpair failed and we were unable to recover it. 00:36:48.294 [2024-11-17 09:36:53.266423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.294 [2024-11-17 09:36:53.266458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.294 qpair failed and we were unable to recover it. 00:36:48.294 [2024-11-17 09:36:53.266574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.294 [2024-11-17 09:36:53.266609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.294 qpair failed and we were unable to recover it. 00:36:48.294 [2024-11-17 09:36:53.266712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.294 [2024-11-17 09:36:53.266745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.294 qpair failed and we were unable to recover it. 00:36:48.294 [2024-11-17 09:36:53.266891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.294 [2024-11-17 09:36:53.266929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.294 qpair failed and we were unable to recover it. 00:36:48.294 [2024-11-17 09:36:53.267056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.294 [2024-11-17 09:36:53.267090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.294 qpair failed and we were unable to recover it. 00:36:48.294 [2024-11-17 09:36:53.267267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.294 [2024-11-17 09:36:53.267301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.294 qpair failed and we were unable to recover it. 00:36:48.294 [2024-11-17 09:36:53.267403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.294 [2024-11-17 09:36:53.267438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.294 qpair failed and we were unable to recover it. 00:36:48.294 [2024-11-17 09:36:53.267614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.294 [2024-11-17 09:36:53.267649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.294 qpair failed and we were unable to recover it. 
00:36:48.294 [2024-11-17 09:36:53.267752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.294 [2024-11-17 09:36:53.267786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.294 qpair failed and we were unable to recover it. 00:36:48.294 [2024-11-17 09:36:53.267931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.294 [2024-11-17 09:36:53.267965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.294 qpair failed and we were unable to recover it. 00:36:48.294 [2024-11-17 09:36:53.268079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.294 [2024-11-17 09:36:53.268112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.294 qpair failed and we were unable to recover it. 00:36:48.294 [2024-11-17 09:36:53.268226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.294 [2024-11-17 09:36:53.268259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.294 qpair failed and we were unable to recover it. 00:36:48.294 [2024-11-17 09:36:53.268375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.294 [2024-11-17 09:36:53.268409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.294 qpair failed and we were unable to recover it. 00:36:48.294 [2024-11-17 09:36:53.268582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.294 [2024-11-17 09:36:53.268616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.294 qpair failed and we were unable to recover it. 00:36:48.294 [2024-11-17 09:36:53.268752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.294 [2024-11-17 09:36:53.268786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.294 qpair failed and we were unable to recover it. 00:36:48.294 [2024-11-17 09:36:53.268946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.294 [2024-11-17 09:36:53.268979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.294 qpair failed and we were unable to recover it. 00:36:48.294 [2024-11-17 09:36:53.269112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.294 [2024-11-17 09:36:53.269146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.294 qpair failed and we were unable to recover it. 00:36:48.294 [2024-11-17 09:36:53.269283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.294 [2024-11-17 09:36:53.269316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.294 qpair failed and we were unable to recover it. 
00:36:48.294 [2024-11-17 09:36:53.269445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.294 [2024-11-17 09:36:53.269479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.294 qpair failed and we were unable to recover it. 00:36:48.294 [2024-11-17 09:36:53.269626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.294 [2024-11-17 09:36:53.269662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.294 qpair failed and we were unable to recover it. 00:36:48.294 [2024-11-17 09:36:53.269822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.294 [2024-11-17 09:36:53.269856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.294 qpair failed and we were unable to recover it. 00:36:48.294 [2024-11-17 09:36:53.269971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.294 [2024-11-17 09:36:53.270006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.294 qpair failed and we were unable to recover it. 00:36:48.294 [2024-11-17 09:36:53.270145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.294 [2024-11-17 09:36:53.270178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.294 qpair failed and we were unable to recover it. 00:36:48.295 [2024-11-17 09:36:53.270310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.295 [2024-11-17 09:36:53.270343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.295 qpair failed and we were unable to recover it. 00:36:48.295 [2024-11-17 09:36:53.270496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.295 [2024-11-17 09:36:53.270529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.295 qpair failed and we were unable to recover it. 00:36:48.295 [2024-11-17 09:36:53.270696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.295 [2024-11-17 09:36:53.270730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.295 qpair failed and we were unable to recover it. 00:36:48.295 [2024-11-17 09:36:53.270871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.295 [2024-11-17 09:36:53.270904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.295 qpair failed and we were unable to recover it. 00:36:48.295 [2024-11-17 09:36:53.271042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.295 [2024-11-17 09:36:53.271075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.295 qpair failed and we were unable to recover it. 
00:36:48.295 [2024-11-17 09:36:53.271178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.295 [2024-11-17 09:36:53.271211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.295 qpair failed and we were unable to recover it. 00:36:48.295 [2024-11-17 09:36:53.271320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.295 [2024-11-17 09:36:53.271353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.295 qpair failed and we were unable to recover it. 00:36:48.295 [2024-11-17 09:36:53.271538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.295 [2024-11-17 09:36:53.271598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:48.295 qpair failed and we were unable to recover it. 00:36:48.295 [2024-11-17 09:36:53.271768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.295 [2024-11-17 09:36:53.271812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:48.295 qpair failed and we were unable to recover it. 00:36:48.295 [2024-11-17 09:36:53.272019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.295 [2024-11-17 09:36:53.272067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.295 qpair failed and we were unable to recover it. 00:36:48.295 [2024-11-17 09:36:53.272205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.295 [2024-11-17 09:36:53.272239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.295 qpair failed and we were unable to recover it. 00:36:48.295 [2024-11-17 09:36:53.272374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.295 [2024-11-17 09:36:53.272427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.295 qpair failed and we were unable to recover it. 00:36:48.295 [2024-11-17 09:36:53.272560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.295 [2024-11-17 09:36:53.272593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.295 qpair failed and we were unable to recover it. 00:36:48.295 [2024-11-17 09:36:53.272725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.295 [2024-11-17 09:36:53.272759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.295 qpair failed and we were unable to recover it. 00:36:48.295 [2024-11-17 09:36:53.272896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.295 [2024-11-17 09:36:53.272929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.295 qpair failed and we were unable to recover it. 
00:36:48.295 [2024-11-17 09:36:53.273050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.295 [2024-11-17 09:36:53.273087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.295 qpair failed and we were unable to recover it. 00:36:48.295 [2024-11-17 09:36:53.273203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.295 [2024-11-17 09:36:53.273240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.295 qpair failed and we were unable to recover it. 00:36:48.295 [2024-11-17 09:36:53.273383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.295 [2024-11-17 09:36:53.273434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.295 qpair failed and we were unable to recover it. 00:36:48.295 [2024-11-17 09:36:53.273571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.295 [2024-11-17 09:36:53.273604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.295 qpair failed and we were unable to recover it. 00:36:48.295 [2024-11-17 09:36:53.273744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.295 [2024-11-17 09:36:53.273777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.295 qpair failed and we were unable to recover it. 00:36:48.295 [2024-11-17 09:36:53.273925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.295 [2024-11-17 09:36:53.273975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.295 qpair failed and we were unable to recover it. 00:36:48.295 [2024-11-17 09:36:53.274175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.295 [2024-11-17 09:36:53.274213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.295 qpair failed and we were unable to recover it. 00:36:48.295 [2024-11-17 09:36:53.274358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.295 [2024-11-17 09:36:53.274432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.295 qpair failed and we were unable to recover it. 00:36:48.295 [2024-11-17 09:36:53.274565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.295 [2024-11-17 09:36:53.274598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.295 qpair failed and we were unable to recover it. 00:36:48.295 [2024-11-17 09:36:53.274729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.295 [2024-11-17 09:36:53.274762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.295 qpair failed and we were unable to recover it. 
00:36:48.295 [2024-11-17 09:36:53.274903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:48.295 [2024-11-17 09:36:53.274936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:36:48.295 qpair failed and we were unable to recover it.
00:36:48.295 [2024-11-17 09:36:53.275355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:48.295 [2024-11-17 09:36:53.275414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:36:48.295 qpair failed and we were unable to recover it.
00:36:48.296 [2024-11-17 09:36:53.280404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:48.296 [2024-11-17 09:36:53.280453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:36:48.296 qpair failed and we were unable to recover it.
[The same three-line pattern — posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111, followed by nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error and "qpair failed and we were unable to recover it." — repeats continuously from 09:36:53.274903 through 09:36:53.310821 (console time 00:36:48.295-00:36:48.638), cycling over tqpair=0x6150001f2f00, 0x6150001ffe80 and 0x61500021ff00, all targeting addr=10.0.0.2, port=4420.]
00:36:48.638 [2024-11-17 09:36:53.310968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.638 [2024-11-17 09:36:53.311003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.638 qpair failed and we were unable to recover it. 00:36:48.638 [2024-11-17 09:36:53.311117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.638 [2024-11-17 09:36:53.311151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.638 qpair failed and we were unable to recover it. 00:36:48.638 [2024-11-17 09:36:53.311283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.638 [2024-11-17 09:36:53.311317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.638 qpair failed and we were unable to recover it. 00:36:48.638 [2024-11-17 09:36:53.311434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.638 [2024-11-17 09:36:53.311469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.638 qpair failed and we were unable to recover it. 00:36:48.638 [2024-11-17 09:36:53.311606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.638 [2024-11-17 09:36:53.311639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.638 qpair failed and we were unable to recover it. 00:36:48.638 [2024-11-17 09:36:53.311753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.638 [2024-11-17 09:36:53.311786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.638 qpair failed and we were unable to recover it. 00:36:48.638 [2024-11-17 09:36:53.311920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.638 [2024-11-17 09:36:53.311952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.638 qpair failed and we were unable to recover it. 00:36:48.638 [2024-11-17 09:36:53.312065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.638 [2024-11-17 09:36:53.312098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.638 qpair failed and we were unable to recover it. 00:36:48.638 [2024-11-17 09:36:53.312236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.638 [2024-11-17 09:36:53.312269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.638 qpair failed and we were unable to recover it. 00:36:48.638 [2024-11-17 09:36:53.312406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.638 [2024-11-17 09:36:53.312440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.638 qpair failed and we were unable to recover it. 
00:36:48.638 [2024-11-17 09:36:53.312558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.638 [2024-11-17 09:36:53.312592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.638 qpair failed and we were unable to recover it. 00:36:48.638 [2024-11-17 09:36:53.312734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.638 [2024-11-17 09:36:53.312767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.638 qpair failed and we were unable to recover it. 00:36:48.638 [2024-11-17 09:36:53.312904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.638 [2024-11-17 09:36:53.312942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.638 qpair failed and we were unable to recover it. 00:36:48.638 [2024-11-17 09:36:53.313104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.638 [2024-11-17 09:36:53.313137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.638 qpair failed and we were unable to recover it. 00:36:48.638 [2024-11-17 09:36:53.313234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.638 [2024-11-17 09:36:53.313267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.638 qpair failed and we were unable to recover it. 00:36:48.638 [2024-11-17 09:36:53.313393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.639 [2024-11-17 09:36:53.313430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.639 qpair failed and we were unable to recover it. 00:36:48.639 [2024-11-17 09:36:53.313610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.639 [2024-11-17 09:36:53.313658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.639 qpair failed and we were unable to recover it. 00:36:48.639 [2024-11-17 09:36:53.313782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.639 [2024-11-17 09:36:53.313819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.639 qpair failed and we were unable to recover it. 00:36:48.639 [2024-11-17 09:36:53.313955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.639 [2024-11-17 09:36:53.313990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.639 qpair failed and we were unable to recover it. 00:36:48.639 [2024-11-17 09:36:53.314130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.639 [2024-11-17 09:36:53.314164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.639 qpair failed and we were unable to recover it. 
00:36:48.639 [2024-11-17 09:36:53.314266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.639 [2024-11-17 09:36:53.314301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.639 qpair failed and we were unable to recover it. 00:36:48.639 [2024-11-17 09:36:53.314448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.639 [2024-11-17 09:36:53.314483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.639 qpair failed and we were unable to recover it. 00:36:48.639 [2024-11-17 09:36:53.314606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.639 [2024-11-17 09:36:53.314662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.639 qpair failed and we were unable to recover it. 00:36:48.639 [2024-11-17 09:36:53.314837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.639 [2024-11-17 09:36:53.314878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.639 qpair failed and we were unable to recover it. 00:36:48.639 [2024-11-17 09:36:53.314986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.639 [2024-11-17 09:36:53.315020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.639 qpair failed and we were unable to recover it. 00:36:48.639 [2024-11-17 09:36:53.315162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.639 [2024-11-17 09:36:53.315196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.639 qpair failed and we were unable to recover it. 00:36:48.639 [2024-11-17 09:36:53.315310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.639 [2024-11-17 09:36:53.315344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.639 qpair failed and we were unable to recover it. 00:36:48.639 [2024-11-17 09:36:53.315496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.639 [2024-11-17 09:36:53.315532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.639 qpair failed and we were unable to recover it. 00:36:48.639 [2024-11-17 09:36:53.315669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.639 [2024-11-17 09:36:53.315705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.639 qpair failed and we were unable to recover it. 00:36:48.639 [2024-11-17 09:36:53.315809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.639 [2024-11-17 09:36:53.315843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.639 qpair failed and we were unable to recover it. 
00:36:48.639 [2024-11-17 09:36:53.315943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.639 [2024-11-17 09:36:53.315977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.639 qpair failed and we were unable to recover it. 00:36:48.639 [2024-11-17 09:36:53.316146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.639 [2024-11-17 09:36:53.316181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.639 qpair failed and we were unable to recover it. 00:36:48.639 [2024-11-17 09:36:53.316313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.639 [2024-11-17 09:36:53.316347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.639 qpair failed and we were unable to recover it. 00:36:48.639 [2024-11-17 09:36:53.316479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.639 [2024-11-17 09:36:53.316512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.639 qpair failed and we were unable to recover it. 00:36:48.639 [2024-11-17 09:36:53.316674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.639 [2024-11-17 09:36:53.316707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.639 qpair failed and we were unable to recover it. 00:36:48.639 [2024-11-17 09:36:53.316819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.639 [2024-11-17 09:36:53.316852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.639 qpair failed and we were unable to recover it. 00:36:48.639 [2024-11-17 09:36:53.316955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.639 [2024-11-17 09:36:53.316988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.639 qpair failed and we were unable to recover it. 00:36:48.639 [2024-11-17 09:36:53.317134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.639 [2024-11-17 09:36:53.317170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.639 qpair failed and we were unable to recover it. 00:36:48.639 [2024-11-17 09:36:53.317315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.639 [2024-11-17 09:36:53.317349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.639 qpair failed and we were unable to recover it. 00:36:48.639 [2024-11-17 09:36:53.317473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.639 [2024-11-17 09:36:53.317508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.639 qpair failed and we were unable to recover it. 
00:36:48.639 [2024-11-17 09:36:53.317624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.639 [2024-11-17 09:36:53.317659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.639 qpair failed and we were unable to recover it. 00:36:48.639 [2024-11-17 09:36:53.317798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.639 [2024-11-17 09:36:53.317832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.639 qpair failed and we were unable to recover it. 00:36:48.639 [2024-11-17 09:36:53.317937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.639 [2024-11-17 09:36:53.317971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.639 qpair failed and we were unable to recover it. 00:36:48.639 [2024-11-17 09:36:53.318072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.639 [2024-11-17 09:36:53.318105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.639 qpair failed and we were unable to recover it. 00:36:48.639 [2024-11-17 09:36:53.318211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.639 [2024-11-17 09:36:53.318245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.639 qpair failed and we were unable to recover it. 00:36:48.639 [2024-11-17 09:36:53.318393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.639 [2024-11-17 09:36:53.318427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.639 qpair failed and we were unable to recover it. 00:36:48.639 [2024-11-17 09:36:53.318534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.639 [2024-11-17 09:36:53.318568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.639 qpair failed and we were unable to recover it. 00:36:48.639 [2024-11-17 09:36:53.318669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.639 [2024-11-17 09:36:53.318702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.639 qpair failed and we were unable to recover it. 00:36:48.639 [2024-11-17 09:36:53.318829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.639 [2024-11-17 09:36:53.318862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.639 qpair failed and we were unable to recover it. 00:36:48.639 [2024-11-17 09:36:53.318971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.639 [2024-11-17 09:36:53.319007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.639 qpair failed and we were unable to recover it. 
00:36:48.639 [2024-11-17 09:36:53.319161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.639 [2024-11-17 09:36:53.319195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.639 qpair failed and we were unable to recover it. 00:36:48.639 [2024-11-17 09:36:53.319323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.639 [2024-11-17 09:36:53.319382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.639 qpair failed and we were unable to recover it. 00:36:48.639 [2024-11-17 09:36:53.319513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.639 [2024-11-17 09:36:53.319552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.639 qpair failed and we were unable to recover it. 00:36:48.639 [2024-11-17 09:36:53.319662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.639 [2024-11-17 09:36:53.319695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.639 qpair failed and we were unable to recover it. 00:36:48.639 [2024-11-17 09:36:53.319835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.639 [2024-11-17 09:36:53.319868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.639 qpair failed and we were unable to recover it. 00:36:48.639 [2024-11-17 09:36:53.320000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.639 [2024-11-17 09:36:53.320033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.639 qpair failed and we were unable to recover it. 00:36:48.639 [2024-11-17 09:36:53.320170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.639 [2024-11-17 09:36:53.320203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.639 qpair failed and we were unable to recover it. 00:36:48.639 [2024-11-17 09:36:53.320307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.639 [2024-11-17 09:36:53.320340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.639 qpair failed and we were unable to recover it. 00:36:48.639 [2024-11-17 09:36:53.320461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.639 [2024-11-17 09:36:53.320506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.639 qpair failed and we were unable to recover it. 00:36:48.639 [2024-11-17 09:36:53.320633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.639 [2024-11-17 09:36:53.320668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.639 qpair failed and we were unable to recover it. 
00:36:48.639 [2024-11-17 09:36:53.320804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.639 [2024-11-17 09:36:53.320839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.639 qpair failed and we were unable to recover it. 00:36:48.639 [2024-11-17 09:36:53.320969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.639 [2024-11-17 09:36:53.321003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.639 qpair failed and we were unable to recover it. 00:36:48.639 [2024-11-17 09:36:53.321181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.639 [2024-11-17 09:36:53.321215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.639 qpair failed and we were unable to recover it. 00:36:48.639 [2024-11-17 09:36:53.321340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.639 [2024-11-17 09:36:53.321395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.639 qpair failed and we were unable to recover it. 00:36:48.639 [2024-11-17 09:36:53.321537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.639 [2024-11-17 09:36:53.321571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.639 qpair failed and we were unable to recover it. 00:36:48.639 [2024-11-17 09:36:53.321706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.639 [2024-11-17 09:36:53.321739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.639 qpair failed and we were unable to recover it. 00:36:48.639 [2024-11-17 09:36:53.321859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.639 [2024-11-17 09:36:53.321893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.639 qpair failed and we were unable to recover it. 00:36:48.639 [2024-11-17 09:36:53.322003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.639 [2024-11-17 09:36:53.322036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.639 qpair failed and we were unable to recover it. 00:36:48.639 [2024-11-17 09:36:53.322171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.639 [2024-11-17 09:36:53.322205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.639 qpair failed and we were unable to recover it. 00:36:48.639 [2024-11-17 09:36:53.322365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.639 [2024-11-17 09:36:53.322413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.639 qpair failed and we were unable to recover it. 
00:36:48.639 [2024-11-17 09:36:53.322524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.639 [2024-11-17 09:36:53.322558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.639 qpair failed and we were unable to recover it. 00:36:48.639 [2024-11-17 09:36:53.322670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.639 [2024-11-17 09:36:53.322704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.639 qpair failed and we were unable to recover it. 00:36:48.639 [2024-11-17 09:36:53.322835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.639 [2024-11-17 09:36:53.322868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.639 qpair failed and we were unable to recover it. 00:36:48.639 [2024-11-17 09:36:53.322985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.639 [2024-11-17 09:36:53.323018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.639 qpair failed and we were unable to recover it. 00:36:48.639 [2024-11-17 09:36:53.323154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.639 [2024-11-17 09:36:53.323187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.639 qpair failed and we were unable to recover it. 00:36:48.639 [2024-11-17 09:36:53.323325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.639 [2024-11-17 09:36:53.323358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.639 qpair failed and we were unable to recover it. 00:36:48.639 [2024-11-17 09:36:53.323503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.639 [2024-11-17 09:36:53.323552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.639 qpair failed and we were unable to recover it. 00:36:48.639 [2024-11-17 09:36:53.323691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.639 [2024-11-17 09:36:53.323727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.639 qpair failed and we were unable to recover it. 00:36:48.639 [2024-11-17 09:36:53.323874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.639 [2024-11-17 09:36:53.323909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.639 qpair failed and we were unable to recover it. 00:36:48.639 [2024-11-17 09:36:53.324031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.639 [2024-11-17 09:36:53.324066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.639 qpair failed and we were unable to recover it. 
00:36:48.639 [2024-11-17 09:36:53.324174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.639 [2024-11-17 09:36:53.324207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.639 qpair failed and we were unable to recover it. 00:36:48.639 [2024-11-17 09:36:53.324341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.639 [2024-11-17 09:36:53.324396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.639 qpair failed and we were unable to recover it. 00:36:48.639 [2024-11-17 09:36:53.324580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.639 [2024-11-17 09:36:53.324617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.639 qpair failed and we were unable to recover it. 00:36:48.639 [2024-11-17 09:36:53.324735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.639 [2024-11-17 09:36:53.324769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.639 qpair failed and we were unable to recover it. 00:36:48.639 [2024-11-17 09:36:53.324933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.639 [2024-11-17 09:36:53.324967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.639 qpair failed and we were unable to recover it. 00:36:48.639 [2024-11-17 09:36:53.325108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.639 [2024-11-17 09:36:53.325142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.639 qpair failed and we were unable to recover it. 00:36:48.639 [2024-11-17 09:36:53.325288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.639 [2024-11-17 09:36:53.325322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.639 qpair failed and we were unable to recover it. 00:36:48.639 [2024-11-17 09:36:53.325471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.639 [2024-11-17 09:36:53.325506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.639 qpair failed and we were unable to recover it. 00:36:48.639 [2024-11-17 09:36:53.325639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.639 [2024-11-17 09:36:53.325676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.639 qpair failed and we were unable to recover it. 00:36:48.639 [2024-11-17 09:36:53.325821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.639 [2024-11-17 09:36:53.325855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.639 qpair failed and we were unable to recover it. 
00:36:48.639 [2024-11-17 09:36:53.325989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.639 [2024-11-17 09:36:53.326024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.639 qpair failed and we were unable to recover it. 00:36:48.639 [2024-11-17 09:36:53.326141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.639 [2024-11-17 09:36:53.326176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.639 qpair failed and we were unable to recover it. 00:36:48.640 [2024-11-17 09:36:53.326311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.640 [2024-11-17 09:36:53.326350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.640 qpair failed and we were unable to recover it. 00:36:48.640 [2024-11-17 09:36:53.326503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.640 [2024-11-17 09:36:53.326539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.640 qpair failed and we were unable to recover it. 00:36:48.640 [2024-11-17 09:36:53.326684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.640 [2024-11-17 09:36:53.326718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.640 qpair failed and we were unable to recover it. 00:36:48.640 [2024-11-17 09:36:53.326840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.640 [2024-11-17 09:36:53.326873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.640 qpair failed and we were unable to recover it. 00:36:48.640 [2024-11-17 09:36:53.327005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.640 [2024-11-17 09:36:53.327039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.640 qpair failed and we were unable to recover it. 00:36:48.640 [2024-11-17 09:36:53.327185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.640 [2024-11-17 09:36:53.327218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.640 qpair failed and we were unable to recover it. 00:36:48.640 [2024-11-17 09:36:53.327348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.640 [2024-11-17 09:36:53.327387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.640 qpair failed and we were unable to recover it. 00:36:48.640 [2024-11-17 09:36:53.327530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.640 [2024-11-17 09:36:53.327566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.640 qpair failed and we were unable to recover it. 
00:36:48.640 [2024-11-17 09:36:53.327704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.640 [2024-11-17 09:36:53.327748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.640 qpair failed and we were unable to recover it. 00:36:48.640 [2024-11-17 09:36:53.327907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.640 [2024-11-17 09:36:53.327942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.640 qpair failed and we were unable to recover it. 00:36:48.640 [2024-11-17 09:36:53.328078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.640 [2024-11-17 09:36:53.328112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.640 qpair failed and we were unable to recover it. 00:36:48.640 [2024-11-17 09:36:53.328277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.640 [2024-11-17 09:36:53.328310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.640 qpair failed and we were unable to recover it. 00:36:48.640 [2024-11-17 09:36:53.328452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.640 [2024-11-17 09:36:53.328486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.640 qpair failed and we were unable to recover it. 00:36:48.640 [2024-11-17 09:36:53.328596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.640 [2024-11-17 09:36:53.328631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.640 qpair failed and we were unable to recover it. 00:36:48.640 [2024-11-17 09:36:53.328767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.640 [2024-11-17 09:36:53.328801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.640 qpair failed and we were unable to recover it. 00:36:48.640 [2024-11-17 09:36:53.328933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.640 [2024-11-17 09:36:53.328966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.640 qpair failed and we were unable to recover it. 00:36:48.640 [2024-11-17 09:36:53.329069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.640 [2024-11-17 09:36:53.329102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.640 qpair failed and we were unable to recover it. 00:36:48.640 [2024-11-17 09:36:53.329204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.640 [2024-11-17 09:36:53.329238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.640 qpair failed and we were unable to recover it. 
00:36:48.640 [2024-11-17 09:36:53.329379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.640 [2024-11-17 09:36:53.329412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.640 qpair failed and we were unable to recover it. 00:36:48.640 [2024-11-17 09:36:53.329527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.640 [2024-11-17 09:36:53.329561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.640 qpair failed and we were unable to recover it. 00:36:48.640 [2024-11-17 09:36:53.329700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.640 [2024-11-17 09:36:53.329735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.640 qpair failed and we were unable to recover it. 00:36:48.640 [2024-11-17 09:36:53.329852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.640 [2024-11-17 09:36:53.329886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.640 qpair failed and we were unable to recover it. 00:36:48.640 [2024-11-17 09:36:53.330003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.640 [2024-11-17 09:36:53.330037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.640 qpair failed and we were unable to recover it. 00:36:48.640 [2024-11-17 09:36:53.330199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.640 [2024-11-17 09:36:53.330232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.640 qpair failed and we were unable to recover it. 00:36:48.640 [2024-11-17 09:36:53.330335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.640 [2024-11-17 09:36:53.330374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.640 qpair failed and we were unable to recover it. 00:36:48.640 [2024-11-17 09:36:53.330502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.640 [2024-11-17 09:36:53.330536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.640 qpair failed and we were unable to recover it. 00:36:48.640 [2024-11-17 09:36:53.330669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.640 [2024-11-17 09:36:53.330702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.640 qpair failed and we were unable to recover it. 00:36:48.640 [2024-11-17 09:36:53.330836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.640 [2024-11-17 09:36:53.330869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.640 qpair failed and we were unable to recover it. 
00:36:48.640 [2024-11-17 09:36:53.330976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.640 [2024-11-17 09:36:53.331009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.640 qpair failed and we were unable to recover it. 00:36:48.640 [2024-11-17 09:36:53.331116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.640 [2024-11-17 09:36:53.331153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.640 qpair failed and we were unable to recover it. 00:36:48.640 [2024-11-17 09:36:53.331259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.640 [2024-11-17 09:36:53.331293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.640 qpair failed and we were unable to recover it. 00:36:48.640 [2024-11-17 09:36:53.331413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.640 [2024-11-17 09:36:53.331447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.640 qpair failed and we were unable to recover it. 00:36:48.640 [2024-11-17 09:36:53.331582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.640 [2024-11-17 09:36:53.331617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.640 qpair failed and we were unable to recover it. 00:36:48.640 [2024-11-17 09:36:53.331742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.640 [2024-11-17 09:36:53.331775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.640 qpair failed and we were unable to recover it. 00:36:48.640 [2024-11-17 09:36:53.331882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.640 [2024-11-17 09:36:53.331917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.640 qpair failed and we were unable to recover it. 00:36:48.640 [2024-11-17 09:36:53.332054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.640 [2024-11-17 09:36:53.332096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.640 qpair failed and we were unable to recover it. 00:36:48.640 [2024-11-17 09:36:53.332235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.640 [2024-11-17 09:36:53.332268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.640 qpair failed and we were unable to recover it. 00:36:48.640 [2024-11-17 09:36:53.332382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.640 [2024-11-17 09:36:53.332416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.640 qpair failed and we were unable to recover it. 
00:36:48.640 [2024-11-17 09:36:53.332512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.640 [2024-11-17 09:36:53.332545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.640 qpair failed and we were unable to recover it. 00:36:48.640 [2024-11-17 09:36:53.332656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.640 [2024-11-17 09:36:53.332690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.640 qpair failed and we were unable to recover it. 00:36:48.640 [2024-11-17 09:36:53.332835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.640 [2024-11-17 09:36:53.332873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.640 qpair failed and we were unable to recover it. 00:36:48.640 [2024-11-17 09:36:53.332992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.640 [2024-11-17 09:36:53.333027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.640 qpair failed and we were unable to recover it. 00:36:48.640 [2024-11-17 09:36:53.333188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.640 [2024-11-17 09:36:53.333223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.640 qpair failed and we were unable to recover it. 00:36:48.640 [2024-11-17 09:36:53.333325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.640 [2024-11-17 09:36:53.333359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.640 qpair failed and we were unable to recover it. 00:36:48.640 [2024-11-17 09:36:53.333509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.640 [2024-11-17 09:36:53.333542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.640 qpair failed and we were unable to recover it. 00:36:48.640 [2024-11-17 09:36:53.333676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.640 [2024-11-17 09:36:53.333708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.640 qpair failed and we were unable to recover it. 00:36:48.640 [2024-11-17 09:36:53.333814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.640 [2024-11-17 09:36:53.333847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.640 qpair failed and we were unable to recover it. 00:36:48.640 [2024-11-17 09:36:53.333979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.640 [2024-11-17 09:36:53.334012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.640 qpair failed and we were unable to recover it. 
00:36:48.640 [2024-11-17 09:36:53.334145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.640 [2024-11-17 09:36:53.334178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.640 qpair failed and we were unable to recover it. 00:36:48.640 [2024-11-17 09:36:53.334314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.640 [2024-11-17 09:36:53.334348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.640 qpair failed and we were unable to recover it. 00:36:48.640 [2024-11-17 09:36:53.334461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.640 [2024-11-17 09:36:53.334497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.640 qpair failed and we were unable to recover it. 00:36:48.640 [2024-11-17 09:36:53.334641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.640 [2024-11-17 09:36:53.334675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.640 qpair failed and we were unable to recover it. 00:36:48.640 [2024-11-17 09:36:53.334800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.640 [2024-11-17 09:36:53.334834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.640 qpair failed and we were unable to recover it. 00:36:48.640 [2024-11-17 09:36:53.334966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.640 [2024-11-17 09:36:53.335001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.640 qpair failed and we were unable to recover it. 00:36:48.640 [2024-11-17 09:36:53.335145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.640 [2024-11-17 09:36:53.335180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.640 qpair failed and we were unable to recover it. 00:36:48.640 [2024-11-17 09:36:53.335324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.640 [2024-11-17 09:36:53.335358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.640 qpair failed and we were unable to recover it. 00:36:48.640 [2024-11-17 09:36:53.335481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.640 [2024-11-17 09:36:53.335514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.640 qpair failed and we were unable to recover it. 00:36:48.640 [2024-11-17 09:36:53.335659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.640 [2024-11-17 09:36:53.335693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.640 qpair failed and we were unable to recover it. 
00:36:48.640 [2024-11-17 09:36:53.335795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.640 [2024-11-17 09:36:53.335828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.640 qpair failed and we were unable to recover it. 00:36:48.640 [2024-11-17 09:36:53.335984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.640 [2024-11-17 09:36:53.336017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.640 qpair failed and we were unable to recover it. 00:36:48.640 [2024-11-17 09:36:53.336118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.640 [2024-11-17 09:36:53.336150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.640 qpair failed and we were unable to recover it. 00:36:48.640 [2024-11-17 09:36:53.336257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.640 [2024-11-17 09:36:53.336290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.640 qpair failed and we were unable to recover it. 00:36:48.640 [2024-11-17 09:36:53.336428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.640 [2024-11-17 09:36:53.336461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.640 qpair failed and we were unable to recover it. 00:36:48.640 [2024-11-17 09:36:53.336600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.640 [2024-11-17 09:36:53.336633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.640 qpair failed and we were unable to recover it. 00:36:48.640 [2024-11-17 09:36:53.336745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.640 [2024-11-17 09:36:53.336778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.640 qpair failed and we were unable to recover it. 00:36:48.640 [2024-11-17 09:36:53.336903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.640 [2024-11-17 09:36:53.336936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.640 qpair failed and we were unable to recover it. 00:36:48.640 [2024-11-17 09:36:53.337071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.640 [2024-11-17 09:36:53.337105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.640 qpair failed and we were unable to recover it. 00:36:48.640 [2024-11-17 09:36:53.337270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.640 [2024-11-17 09:36:53.337303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.640 qpair failed and we were unable to recover it. 
00:36:48.640 [2024-11-17 09:36:53.337451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.640 [2024-11-17 09:36:53.337487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.640 qpair failed and we were unable to recover it. 00:36:48.640 [2024-11-17 09:36:53.337594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.640 [2024-11-17 09:36:53.337627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.640 qpair failed and we were unable to recover it. 00:36:48.640 [2024-11-17 09:36:53.337734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.640 [2024-11-17 09:36:53.337768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.640 qpair failed and we were unable to recover it. 00:36:48.640 [2024-11-17 09:36:53.337912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.640 [2024-11-17 09:36:53.337946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.640 qpair failed and we were unable to recover it. 00:36:48.640 [2024-11-17 09:36:53.338079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.640 [2024-11-17 09:36:53.338113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.640 qpair failed and we were unable to recover it. 00:36:48.640 [2024-11-17 09:36:53.338223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.640 [2024-11-17 09:36:53.338256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.640 qpair failed and we were unable to recover it. 00:36:48.640 [2024-11-17 09:36:53.338372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.640 [2024-11-17 09:36:53.338407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.640 qpair failed and we were unable to recover it. 00:36:48.640 [2024-11-17 09:36:53.338524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.640 [2024-11-17 09:36:53.338558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.640 qpair failed and we were unable to recover it. 00:36:48.640 [2024-11-17 09:36:53.338666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.640 [2024-11-17 09:36:53.338698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.640 qpair failed and we were unable to recover it. 00:36:48.640 [2024-11-17 09:36:53.338828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.640 [2024-11-17 09:36:53.338861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.640 qpair failed and we were unable to recover it. 
00:36:48.640 [2024-11-17 09:36:53.338966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.640 [2024-11-17 09:36:53.338999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.640 qpair failed and we were unable to recover it. 00:36:48.640 [2024-11-17 09:36:53.339117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.640 [2024-11-17 09:36:53.339151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.640 qpair failed and we were unable to recover it. 00:36:48.640 [2024-11-17 09:36:53.339283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.640 [2024-11-17 09:36:53.339322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.640 qpair failed and we were unable to recover it. 00:36:48.640 [2024-11-17 09:36:53.339450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.640 [2024-11-17 09:36:53.339485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.640 qpair failed and we were unable to recover it. 00:36:48.640 [2024-11-17 09:36:53.339590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.640 [2024-11-17 09:36:53.339624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.640 qpair failed and we were unable to recover it. 00:36:48.640 [2024-11-17 09:36:53.339733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.640 [2024-11-17 09:36:53.339766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.640 qpair failed and we were unable to recover it. 00:36:48.640 [2024-11-17 09:36:53.339898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.640 [2024-11-17 09:36:53.339932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.640 qpair failed and we were unable to recover it. 00:36:48.640 [2024-11-17 09:36:53.340076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.640 [2024-11-17 09:36:53.340110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.640 qpair failed and we were unable to recover it. 00:36:48.640 [2024-11-17 09:36:53.340273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.640 [2024-11-17 09:36:53.340306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.640 qpair failed and we were unable to recover it. 00:36:48.640 [2024-11-17 09:36:53.340427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.640 [2024-11-17 09:36:53.340461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.641 qpair failed and we were unable to recover it. 
00:36:48.641 [2024-11-17 09:36:53.340592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.641 [2024-11-17 09:36:53.340626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.641 qpair failed and we were unable to recover it. 00:36:48.641 [2024-11-17 09:36:53.340773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.641 [2024-11-17 09:36:53.340807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.641 qpair failed and we were unable to recover it. 00:36:48.641 [2024-11-17 09:36:53.340924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.641 [2024-11-17 09:36:53.340957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.641 qpair failed and we were unable to recover it. 00:36:48.641 [2024-11-17 09:36:53.341097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.641 [2024-11-17 09:36:53.341131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.641 qpair failed and we were unable to recover it. 00:36:48.641 [2024-11-17 09:36:53.341257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.641 [2024-11-17 09:36:53.341306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.641 qpair failed and we were unable to recover it. 00:36:48.641 [2024-11-17 09:36:53.341432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.641 [2024-11-17 09:36:53.341468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.641 qpair failed and we were unable to recover it. 00:36:48.641 [2024-11-17 09:36:53.341615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.641 [2024-11-17 09:36:53.341649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.641 qpair failed and we were unable to recover it. 00:36:48.641 [2024-11-17 09:36:53.341764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.641 [2024-11-17 09:36:53.341798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.641 qpair failed and we were unable to recover it. 00:36:48.641 [2024-11-17 09:36:53.341966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.641 [2024-11-17 09:36:53.341999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.641 qpair failed and we were unable to recover it. 00:36:48.641 [2024-11-17 09:36:53.342108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.641 [2024-11-17 09:36:53.342141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.641 qpair failed and we were unable to recover it. 
00:36:48.641 [2024-11-17 09:36:53.342244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.641 [2024-11-17 09:36:53.342280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.641 qpair failed and we were unable to recover it. 00:36:48.641 [2024-11-17 09:36:53.342439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.641 [2024-11-17 09:36:53.342475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.641 qpair failed and we were unable to recover it. 00:36:48.641 [2024-11-17 09:36:53.342582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.641 [2024-11-17 09:36:53.342627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.641 qpair failed and we were unable to recover it. 00:36:48.641 [2024-11-17 09:36:53.342759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.641 [2024-11-17 09:36:53.342793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.641 qpair failed and we were unable to recover it. 00:36:48.641 [2024-11-17 09:36:53.342929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.641 [2024-11-17 09:36:53.342962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.641 qpair failed and we were unable to recover it. 00:36:48.641 [2024-11-17 09:36:53.343103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.641 [2024-11-17 09:36:53.343137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.641 qpair failed and we were unable to recover it. 00:36:48.641 [2024-11-17 09:36:53.343261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.641 [2024-11-17 09:36:53.343309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.641 qpair failed and we were unable to recover it. 00:36:48.641 [2024-11-17 09:36:53.343454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.641 [2024-11-17 09:36:53.343491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.641 qpair failed and we were unable to recover it. 00:36:48.641 [2024-11-17 09:36:53.343597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.641 [2024-11-17 09:36:53.343632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.641 qpair failed and we were unable to recover it. 00:36:48.641 [2024-11-17 09:36:53.343756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.641 [2024-11-17 09:36:53.343789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.641 qpair failed and we were unable to recover it. 
00:36:48.641 [2024-11-17 09:36:53.343922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.641 [2024-11-17 09:36:53.343955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.641 qpair failed and we were unable to recover it. 00:36:48.641 [2024-11-17 09:36:53.344095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.641 [2024-11-17 09:36:53.344129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.641 qpair failed and we were unable to recover it. 00:36:48.641 [2024-11-17 09:36:53.344265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.641 [2024-11-17 09:36:53.344300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.641 qpair failed and we were unable to recover it. 00:36:48.641 [2024-11-17 09:36:53.344413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.641 [2024-11-17 09:36:53.344447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.641 qpair failed and we were unable to recover it. 00:36:48.641 [2024-11-17 09:36:53.344554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.641 [2024-11-17 09:36:53.344588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.641 qpair failed and we were unable to recover it. 00:36:48.641 [2024-11-17 09:36:53.344726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.641 [2024-11-17 09:36:53.344760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.641 qpair failed and we were unable to recover it. 00:36:48.641 [2024-11-17 09:36:53.344867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.641 [2024-11-17 09:36:53.344901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.641 qpair failed and we were unable to recover it. 00:36:48.641 [2024-11-17 09:36:53.345015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.641 [2024-11-17 09:36:53.345048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.641 qpair failed and we were unable to recover it. 00:36:48.641 [2024-11-17 09:36:53.345190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.641 [2024-11-17 09:36:53.345225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.641 qpair failed and we were unable to recover it. 00:36:48.641 [2024-11-17 09:36:53.345363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.641 [2024-11-17 09:36:53.345404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.641 qpair failed and we were unable to recover it. 
00:36:48.641 [2024-11-17 09:36:53.345537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.641 [2024-11-17 09:36:53.345570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.641 qpair failed and we were unable to recover it. 00:36:48.641 [2024-11-17 09:36:53.345713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.641 [2024-11-17 09:36:53.345746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.641 qpair failed and we were unable to recover it. 00:36:48.641 [2024-11-17 09:36:53.345847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.641 [2024-11-17 09:36:53.345885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.641 qpair failed and we were unable to recover it. 00:36:48.641 [2024-11-17 09:36:53.346029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.641 [2024-11-17 09:36:53.346063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.641 qpair failed and we were unable to recover it. 00:36:48.641 [2024-11-17 09:36:53.346174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.641 [2024-11-17 09:36:53.346210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.641 qpair failed and we were unable to recover it. 00:36:48.641 [2024-11-17 09:36:53.346329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.641 [2024-11-17 09:36:53.346364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.641 qpair failed and we were unable to recover it. 00:36:48.641 [2024-11-17 09:36:53.346476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.641 [2024-11-17 09:36:53.346510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.641 qpair failed and we were unable to recover it. 00:36:48.641 [2024-11-17 09:36:53.346643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.641 [2024-11-17 09:36:53.346677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.641 qpair failed and we were unable to recover it. 00:36:48.641 [2024-11-17 09:36:53.346784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.641 [2024-11-17 09:36:53.346818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.641 qpair failed and we were unable to recover it. 00:36:48.641 [2024-11-17 09:36:53.346961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.641 [2024-11-17 09:36:53.346995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.641 qpair failed and we were unable to recover it. 
00:36:48.641 [2024-11-17 09:36:53.347131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.641 [2024-11-17 09:36:53.347165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.641 qpair failed and we were unable to recover it. 00:36:48.641 [2024-11-17 09:36:53.347308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.641 [2024-11-17 09:36:53.347342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.641 qpair failed and we were unable to recover it. 00:36:48.641 [2024-11-17 09:36:53.347463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.641 [2024-11-17 09:36:53.347496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.641 qpair failed and we were unable to recover it. 00:36:48.641 [2024-11-17 09:36:53.347627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.641 [2024-11-17 09:36:53.347660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.641 qpair failed and we were unable to recover it. 00:36:48.641 [2024-11-17 09:36:53.347768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.641 [2024-11-17 09:36:53.347801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.641 qpair failed and we were unable to recover it. 00:36:48.641 [2024-11-17 09:36:53.347968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.641 [2024-11-17 09:36:53.348002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.641 qpair failed and we were unable to recover it. 00:36:48.641 [2024-11-17 09:36:53.348112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.641 [2024-11-17 09:36:53.348145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.641 qpair failed and we were unable to recover it. 00:36:48.641 [2024-11-17 09:36:53.348253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.641 [2024-11-17 09:36:53.348286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.641 qpair failed and we were unable to recover it. 00:36:48.641 [2024-11-17 09:36:53.348382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.641 [2024-11-17 09:36:53.348416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.641 qpair failed and we were unable to recover it. 00:36:48.641 [2024-11-17 09:36:53.348552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.641 [2024-11-17 09:36:53.348585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.641 qpair failed and we were unable to recover it. 
00:36:48.641 [2024-11-17 09:36:53.348680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.641 [2024-11-17 09:36:53.348713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.641 qpair failed and we were unable to recover it. 00:36:48.641 [2024-11-17 09:36:53.348827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.641 [2024-11-17 09:36:53.348860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.641 qpair failed and we were unable to recover it. 00:36:48.641 [2024-11-17 09:36:53.348995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.641 [2024-11-17 09:36:53.349028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.641 qpair failed and we were unable to recover it. 00:36:48.641 [2024-11-17 09:36:53.349137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.641 [2024-11-17 09:36:53.349170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.641 qpair failed and we were unable to recover it. 00:36:48.641 [2024-11-17 09:36:53.349335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.641 [2024-11-17 09:36:53.349375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.641 qpair failed and we were unable to recover it. 00:36:48.641 [2024-11-17 09:36:53.349504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.641 [2024-11-17 09:36:53.349541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.641 qpair failed and we were unable to recover it. 00:36:48.641 [2024-11-17 09:36:53.349684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.641 [2024-11-17 09:36:53.349718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.641 qpair failed and we were unable to recover it. 00:36:48.641 [2024-11-17 09:36:53.349851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.641 [2024-11-17 09:36:53.349885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.641 qpair failed and we were unable to recover it. 00:36:48.641 [2024-11-17 09:36:53.350019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.641 [2024-11-17 09:36:53.350053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.641 qpair failed and we were unable to recover it. 00:36:48.641 [2024-11-17 09:36:53.350196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.641 [2024-11-17 09:36:53.350231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.641 qpair failed and we were unable to recover it. 
00:36:48.641 [2024-11-17 09:36:53.350365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.641 [2024-11-17 09:36:53.350405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.641 qpair failed and we were unable to recover it. 00:36:48.641 [2024-11-17 09:36:53.350539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.641 [2024-11-17 09:36:53.350573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.641 qpair failed and we were unable to recover it. 00:36:48.641 [2024-11-17 09:36:53.350684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.641 [2024-11-17 09:36:53.350718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.641 qpair failed and we were unable to recover it. 00:36:48.641 [2024-11-17 09:36:53.350827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.641 [2024-11-17 09:36:53.350861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.641 qpair failed and we were unable to recover it. 00:36:48.641 [2024-11-17 09:36:53.350976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.641 [2024-11-17 09:36:53.351011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.641 qpair failed and we were unable to recover it. 00:36:48.641 [2024-11-17 09:36:53.351179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.641 [2024-11-17 09:36:53.351213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.641 qpair failed and we were unable to recover it. 00:36:48.641 [2024-11-17 09:36:53.351316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.641 [2024-11-17 09:36:53.351349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.641 qpair failed and we were unable to recover it. 00:36:48.641 [2024-11-17 09:36:53.351479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.641 [2024-11-17 09:36:53.351515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.641 qpair failed and we were unable to recover it. 00:36:48.641 [2024-11-17 09:36:53.351628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.641 [2024-11-17 09:36:53.351663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.641 qpair failed and we were unable to recover it. 00:36:48.641 [2024-11-17 09:36:53.351781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.641 [2024-11-17 09:36:53.351815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.641 qpair failed and we were unable to recover it. 
00:36:48.641 [2024-11-17 09:36:53.351922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.641 [2024-11-17 09:36:53.351956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.641 qpair failed and we were unable to recover it. 00:36:48.641 [2024-11-17 09:36:53.352079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.641 [2024-11-17 09:36:53.352112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.641 qpair failed and we were unable to recover it. 00:36:48.641 [2024-11-17 09:36:53.352249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.641 [2024-11-17 09:36:53.352288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.641 qpair failed and we were unable to recover it. 00:36:48.641 [2024-11-17 09:36:53.352435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.641 [2024-11-17 09:36:53.352470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.641 qpair failed and we were unable to recover it. 00:36:48.641 [2024-11-17 09:36:53.352600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.641 [2024-11-17 09:36:53.352633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.641 qpair failed and we were unable to recover it. 00:36:48.641 [2024-11-17 09:36:53.352745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.641 [2024-11-17 09:36:53.352778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.641 qpair failed and we were unable to recover it. 00:36:48.641 [2024-11-17 09:36:53.352886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.641 [2024-11-17 09:36:53.352921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.641 qpair failed and we were unable to recover it. 00:36:48.641 [2024-11-17 09:36:53.353027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.641 [2024-11-17 09:36:53.353061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.641 qpair failed and we were unable to recover it. 00:36:48.641 [2024-11-17 09:36:53.353166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.641 [2024-11-17 09:36:53.353201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.641 qpair failed and we were unable to recover it. 00:36:48.641 [2024-11-17 09:36:53.353335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.641 [2024-11-17 09:36:53.353375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.641 qpair failed and we were unable to recover it. 
00:36:48.641 [2024-11-17 09:36:53.353498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.641 [2024-11-17 09:36:53.353532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.641 qpair failed and we were unable to recover it. 00:36:48.641 [2024-11-17 09:36:53.353656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.642 [2024-11-17 09:36:53.353689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.642 qpair failed and we were unable to recover it. 00:36:48.642 [2024-11-17 09:36:53.353788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.642 [2024-11-17 09:36:53.353821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.642 qpair failed and we were unable to recover it. 00:36:48.642 [2024-11-17 09:36:53.353930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.642 [2024-11-17 09:36:53.353963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.642 qpair failed and we were unable to recover it. 00:36:48.642 [2024-11-17 09:36:53.354079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.642 [2024-11-17 09:36:53.354112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.642 qpair failed and we were unable to recover it. 00:36:48.642 [2024-11-17 09:36:53.354275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.642 [2024-11-17 09:36:53.354307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.642 qpair failed and we were unable to recover it. 00:36:48.642 [2024-11-17 09:36:53.354432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.642 [2024-11-17 09:36:53.354465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.642 qpair failed and we were unable to recover it. 00:36:48.642 [2024-11-17 09:36:53.354572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.642 [2024-11-17 09:36:53.354605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.642 qpair failed and we were unable to recover it. 00:36:48.642 [2024-11-17 09:36:53.354749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.642 [2024-11-17 09:36:53.354782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.642 qpair failed and we were unable to recover it. 00:36:48.642 [2024-11-17 09:36:53.354886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.642 [2024-11-17 09:36:53.354919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.642 qpair failed and we were unable to recover it. 
00:36:48.642 [2024-11-17 09:36:53.355022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.642 [2024-11-17 09:36:53.355055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.642 qpair failed and we were unable to recover it. 00:36:48.642 [2024-11-17 09:36:53.355190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.642 [2024-11-17 09:36:53.355222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.642 qpair failed and we were unable to recover it. 00:36:48.642 [2024-11-17 09:36:53.355378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.642 [2024-11-17 09:36:53.355412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.642 qpair failed and we were unable to recover it. 00:36:48.642 [2024-11-17 09:36:53.355519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.642 [2024-11-17 09:36:53.355552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.642 qpair failed and we were unable to recover it. 00:36:48.642 [2024-11-17 09:36:53.355683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.642 [2024-11-17 09:36:53.355716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.642 qpair failed and we were unable to recover it. 00:36:48.642 [2024-11-17 09:36:53.355818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.642 [2024-11-17 09:36:53.355851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.642 qpair failed and we were unable to recover it. 00:36:48.642 [2024-11-17 09:36:53.355994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.642 [2024-11-17 09:36:53.356028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.642 qpair failed and we were unable to recover it. 00:36:48.642 [2024-11-17 09:36:53.356155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.642 [2024-11-17 09:36:53.356203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.642 qpair failed and we were unable to recover it. 00:36:48.642 [2024-11-17 09:36:53.356382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.642 [2024-11-17 09:36:53.356421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.642 qpair failed and we were unable to recover it. 00:36:48.642 [2024-11-17 09:36:53.356539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.642 [2024-11-17 09:36:53.356574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.642 qpair failed and we were unable to recover it. 
00:36:48.642 [2024-11-17 09:36:53.356708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.642 [2024-11-17 09:36:53.356743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.642 qpair failed and we were unable to recover it. 00:36:48.642 [2024-11-17 09:36:53.356888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.642 [2024-11-17 09:36:53.356923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.642 qpair failed and we were unable to recover it. 00:36:48.642 [2024-11-17 09:36:53.357035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.642 [2024-11-17 09:36:53.357070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.642 qpair failed and we were unable to recover it. 00:36:48.642 [2024-11-17 09:36:53.357178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.642 [2024-11-17 09:36:53.357212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.642 qpair failed and we were unable to recover it. 00:36:48.642 [2024-11-17 09:36:53.357348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.642 [2024-11-17 09:36:53.357388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.642 qpair failed and we were unable to recover it. 00:36:48.642 [2024-11-17 09:36:53.357497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.642 [2024-11-17 09:36:53.357530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.642 qpair failed and we were unable to recover it. 00:36:48.642 [2024-11-17 09:36:53.357669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.642 [2024-11-17 09:36:53.357701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.642 qpair failed and we were unable to recover it. 00:36:48.642 [2024-11-17 09:36:53.357816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.642 [2024-11-17 09:36:53.357849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.642 qpair failed and we were unable to recover it. 00:36:48.642 [2024-11-17 09:36:53.357961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.642 [2024-11-17 09:36:53.357994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.642 qpair failed and we were unable to recover it. 00:36:48.642 [2024-11-17 09:36:53.358133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.642 [2024-11-17 09:36:53.358168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.642 qpair failed and we were unable to recover it. 
00:36:48.642 [2024-11-17 09:36:53.358277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.642 [2024-11-17 09:36:53.358311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.642 qpair failed and we were unable to recover it. 00:36:48.642 [2024-11-17 09:36:53.358461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.642 [2024-11-17 09:36:53.358496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.642 qpair failed and we were unable to recover it. 00:36:48.642 [2024-11-17 09:36:53.358655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.642 [2024-11-17 09:36:53.358694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.642 qpair failed and we were unable to recover it. 00:36:48.642 [2024-11-17 09:36:53.358802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.642 [2024-11-17 09:36:53.358835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.642 qpair failed and we were unable to recover it. 00:36:48.642 [2024-11-17 09:36:53.359007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.642 [2024-11-17 09:36:53.359041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.642 qpair failed and we were unable to recover it. 00:36:48.642 [2024-11-17 09:36:53.359152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.642 [2024-11-17 09:36:53.359187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.642 qpair failed and we were unable to recover it. 00:36:48.642 [2024-11-17 09:36:53.359325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.642 [2024-11-17 09:36:53.359358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.642 qpair failed and we were unable to recover it. 00:36:48.642 [2024-11-17 09:36:53.359474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.642 [2024-11-17 09:36:53.359507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.642 qpair failed and we were unable to recover it. 00:36:48.642 [2024-11-17 09:36:53.359640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.642 [2024-11-17 09:36:53.359673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.642 qpair failed and we were unable to recover it. 00:36:48.642 [2024-11-17 09:36:53.359802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.642 [2024-11-17 09:36:53.359835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.642 qpair failed and we were unable to recover it. 
00:36:48.642 [2024-11-17 09:36:53.359977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:48.642 [2024-11-17 09:36:53.360010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:36:48.642 qpair failed and we were unable to recover it.
00:36:48.642 [2024-11-17 09:36:53.360502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:48.642 [2024-11-17 09:36:53.360549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:36:48.642 qpair failed and we were unable to recover it.
00:36:48.642 [2024-11-17 09:36:53.360705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:48.642 [2024-11-17 09:36:53.360741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:36:48.642 qpair failed and we were unable to recover it.
00:36:48.644 [... the same three-line pattern repeats continuously from 09:36:53.359977 through 09:36:53.394631: posix_sock_create reports connect() failed with errno = 111 for addr=10.0.0.2, port=4420, nvme_tcp_qpair_connect_sock reports the sock connection error for tqpairs 0x6150001f2f00, 0x6150001ffe80 and 0x61500021ff00, and each qpair fails and is not recovered ...]
00:36:48.644 [2024-11-17 09:36:53.394794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.644 [2024-11-17 09:36:53.394828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.644 qpair failed and we were unable to recover it. 00:36:48.644 [2024-11-17 09:36:53.394929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.644 [2024-11-17 09:36:53.394962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.644 qpair failed and we were unable to recover it. 00:36:48.644 [2024-11-17 09:36:53.395099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.644 [2024-11-17 09:36:53.395132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.644 qpair failed and we were unable to recover it. 00:36:48.644 [2024-11-17 09:36:53.395260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.644 [2024-11-17 09:36:53.395297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.644 qpair failed and we were unable to recover it. 00:36:48.644 [2024-11-17 09:36:53.395423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.644 [2024-11-17 09:36:53.395463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.644 qpair failed and we were unable to recover it. 00:36:48.644 [2024-11-17 09:36:53.395592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.644 [2024-11-17 09:36:53.395627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.644 qpair failed and we were unable to recover it. 00:36:48.644 [2024-11-17 09:36:53.395738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.644 [2024-11-17 09:36:53.395772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.644 qpair failed and we were unable to recover it. 00:36:48.644 [2024-11-17 09:36:53.395949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.644 [2024-11-17 09:36:53.395984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.644 qpair failed and we were unable to recover it. 00:36:48.644 [2024-11-17 09:36:53.396084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.644 [2024-11-17 09:36:53.396118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.644 qpair failed and we were unable to recover it. 00:36:48.644 [2024-11-17 09:36:53.396282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.644 [2024-11-17 09:36:53.396325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.644 qpair failed and we were unable to recover it. 
00:36:48.644 [2024-11-17 09:36:53.396486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.644 [2024-11-17 09:36:53.396522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.645 qpair failed and we were unable to recover it. 00:36:48.645 [2024-11-17 09:36:53.396630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.645 [2024-11-17 09:36:53.396679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.645 qpair failed and we were unable to recover it. 00:36:48.645 [2024-11-17 09:36:53.396835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.645 [2024-11-17 09:36:53.396869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.645 qpair failed and we were unable to recover it. 00:36:48.645 [2024-11-17 09:36:53.397000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.645 [2024-11-17 09:36:53.397035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.645 qpair failed and we were unable to recover it. 00:36:48.645 [2024-11-17 09:36:53.397179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.645 [2024-11-17 09:36:53.397213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.645 qpair failed and we were unable to recover it. 00:36:48.645 [2024-11-17 09:36:53.397383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.645 [2024-11-17 09:36:53.397418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.645 qpair failed and we were unable to recover it. 00:36:48.645 [2024-11-17 09:36:53.397554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.645 [2024-11-17 09:36:53.397590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.645 qpair failed and we were unable to recover it. 00:36:48.645 [2024-11-17 09:36:53.397695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.645 [2024-11-17 09:36:53.397729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.645 qpair failed and we were unable to recover it. 00:36:48.645 [2024-11-17 09:36:53.397865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.645 [2024-11-17 09:36:53.397898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.645 qpair failed and we were unable to recover it. 00:36:48.645 [2024-11-17 09:36:53.398031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.645 [2024-11-17 09:36:53.398065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.645 qpair failed and we were unable to recover it. 
00:36:48.645 [2024-11-17 09:36:53.398181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.645 [2024-11-17 09:36:53.398216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.645 qpair failed and we were unable to recover it. 00:36:48.645 [2024-11-17 09:36:53.398341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.645 [2024-11-17 09:36:53.398400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.645 qpair failed and we were unable to recover it. 00:36:48.645 [2024-11-17 09:36:53.398579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.645 [2024-11-17 09:36:53.398626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.645 qpair failed and we were unable to recover it. 00:36:48.645 [2024-11-17 09:36:53.398797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.645 [2024-11-17 09:36:53.398832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.645 qpair failed and we were unable to recover it. 00:36:48.645 [2024-11-17 09:36:53.398939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.645 [2024-11-17 09:36:53.398992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.645 qpair failed and we were unable to recover it. 00:36:48.645 [2024-11-17 09:36:53.399152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.645 [2024-11-17 09:36:53.399187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.645 qpair failed and we were unable to recover it. 00:36:48.645 [2024-11-17 09:36:53.399292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.645 [2024-11-17 09:36:53.399327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.645 qpair failed and we were unable to recover it. 00:36:48.645 [2024-11-17 09:36:53.399498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.645 [2024-11-17 09:36:53.399532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.645 qpair failed and we were unable to recover it. 00:36:48.645 [2024-11-17 09:36:53.399649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.645 [2024-11-17 09:36:53.399685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.645 qpair failed and we were unable to recover it. 00:36:48.645 [2024-11-17 09:36:53.399823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.645 [2024-11-17 09:36:53.399858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.645 qpair failed and we were unable to recover it. 
00:36:48.645 [2024-11-17 09:36:53.400048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.645 [2024-11-17 09:36:53.400101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.645 qpair failed and we were unable to recover it. 00:36:48.645 [2024-11-17 09:36:53.400207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.645 [2024-11-17 09:36:53.400240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.645 qpair failed and we were unable to recover it. 00:36:48.645 [2024-11-17 09:36:53.400383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.645 [2024-11-17 09:36:53.400419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.645 qpair failed and we were unable to recover it. 00:36:48.645 [2024-11-17 09:36:53.400537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.645 [2024-11-17 09:36:53.400571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.645 qpair failed and we were unable to recover it. 00:36:48.645 [2024-11-17 09:36:53.400710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.645 [2024-11-17 09:36:53.400744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.645 qpair failed and we were unable to recover it. 00:36:48.645 [2024-11-17 09:36:53.400907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.645 [2024-11-17 09:36:53.400940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.645 qpair failed and we were unable to recover it. 00:36:48.645 [2024-11-17 09:36:53.401106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.645 [2024-11-17 09:36:53.401140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.645 qpair failed and we were unable to recover it. 00:36:48.645 [2024-11-17 09:36:53.401273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.645 [2024-11-17 09:36:53.401307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.645 qpair failed and we were unable to recover it. 00:36:48.645 [2024-11-17 09:36:53.401416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.645 [2024-11-17 09:36:53.401450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.645 qpair failed and we were unable to recover it. 00:36:48.645 [2024-11-17 09:36:53.401588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.645 [2024-11-17 09:36:53.401624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.645 qpair failed and we were unable to recover it. 
00:36:48.645 [2024-11-17 09:36:53.401778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.645 [2024-11-17 09:36:53.401814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.645 qpair failed and we were unable to recover it. 00:36:48.645 [2024-11-17 09:36:53.401950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.645 [2024-11-17 09:36:53.401985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.645 qpair failed and we were unable to recover it. 00:36:48.645 [2024-11-17 09:36:53.402144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.645 [2024-11-17 09:36:53.402177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.645 qpair failed and we were unable to recover it. 00:36:48.645 [2024-11-17 09:36:53.402278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.645 [2024-11-17 09:36:53.402313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.645 qpair failed and we were unable to recover it. 00:36:48.645 [2024-11-17 09:36:53.402468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.645 [2024-11-17 09:36:53.402503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.645 qpair failed and we were unable to recover it. 00:36:48.645 [2024-11-17 09:36:53.402609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.645 [2024-11-17 09:36:53.402642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.645 qpair failed and we were unable to recover it. 00:36:48.645 [2024-11-17 09:36:53.402749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.645 [2024-11-17 09:36:53.402782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.645 qpair failed and we were unable to recover it. 00:36:48.645 [2024-11-17 09:36:53.402896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.645 [2024-11-17 09:36:53.402935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.645 qpair failed and we were unable to recover it. 00:36:48.645 [2024-11-17 09:36:53.403089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.645 [2024-11-17 09:36:53.403124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.645 qpair failed and we were unable to recover it. 00:36:48.645 [2024-11-17 09:36:53.403290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.645 [2024-11-17 09:36:53.403328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.645 qpair failed and we were unable to recover it. 
00:36:48.645 [2024-11-17 09:36:53.403488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.645 [2024-11-17 09:36:53.403521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.645 qpair failed and we were unable to recover it. 00:36:48.645 [2024-11-17 09:36:53.403669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.645 [2024-11-17 09:36:53.403703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.645 qpair failed and we were unable to recover it. 00:36:48.645 [2024-11-17 09:36:53.403823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.645 [2024-11-17 09:36:53.403856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.645 qpair failed and we were unable to recover it. 00:36:48.645 [2024-11-17 09:36:53.404015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.645 [2024-11-17 09:36:53.404048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.645 qpair failed and we were unable to recover it. 00:36:48.645 [2024-11-17 09:36:53.404198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.645 [2024-11-17 09:36:53.404233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.645 qpair failed and we were unable to recover it. 00:36:48.645 [2024-11-17 09:36:53.404376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.645 [2024-11-17 09:36:53.404416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.645 qpair failed and we were unable to recover it. 00:36:48.645 [2024-11-17 09:36:53.404524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.645 [2024-11-17 09:36:53.404558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.645 qpair failed and we were unable to recover it. 00:36:48.645 [2024-11-17 09:36:53.404695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.645 [2024-11-17 09:36:53.404728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.645 qpair failed and we were unable to recover it. 00:36:48.645 [2024-11-17 09:36:53.404870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.645 [2024-11-17 09:36:53.404903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.645 qpair failed and we were unable to recover it. 00:36:48.645 [2024-11-17 09:36:53.405038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.645 [2024-11-17 09:36:53.405071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.645 qpair failed and we were unable to recover it. 
00:36:48.645 [2024-11-17 09:36:53.405206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.645 [2024-11-17 09:36:53.405240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.645 qpair failed and we were unable to recover it. 00:36:48.645 [2024-11-17 09:36:53.405398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.645 [2024-11-17 09:36:53.405447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.645 qpair failed and we were unable to recover it. 00:36:48.645 [2024-11-17 09:36:53.405585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.645 [2024-11-17 09:36:53.405622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.645 qpair failed and we were unable to recover it. 00:36:48.645 [2024-11-17 09:36:53.405741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.645 [2024-11-17 09:36:53.405775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.645 qpair failed and we were unable to recover it. 00:36:48.645 [2024-11-17 09:36:53.405935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.645 [2024-11-17 09:36:53.405968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.645 qpair failed and we were unable to recover it. 00:36:48.645 [2024-11-17 09:36:53.406081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.645 [2024-11-17 09:36:53.406114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.645 qpair failed and we were unable to recover it. 00:36:48.645 [2024-11-17 09:36:53.406239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.645 [2024-11-17 09:36:53.406273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.645 qpair failed and we were unable to recover it. 00:36:48.645 [2024-11-17 09:36:53.406404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.645 [2024-11-17 09:36:53.406437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.645 qpair failed and we were unable to recover it. 00:36:48.645 [2024-11-17 09:36:53.406594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.645 [2024-11-17 09:36:53.406628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.645 qpair failed and we were unable to recover it. 00:36:48.645 [2024-11-17 09:36:53.406762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.645 [2024-11-17 09:36:53.406796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.645 qpair failed and we were unable to recover it. 
00:36:48.645 [2024-11-17 09:36:53.406932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.645 [2024-11-17 09:36:53.406965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.645 qpair failed and we were unable to recover it. 00:36:48.645 [2024-11-17 09:36:53.407123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.645 [2024-11-17 09:36:53.407157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.645 qpair failed and we were unable to recover it. 00:36:48.645 [2024-11-17 09:36:53.407280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.645 [2024-11-17 09:36:53.407316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.645 qpair failed and we were unable to recover it. 00:36:48.645 [2024-11-17 09:36:53.407470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.645 [2024-11-17 09:36:53.407518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.645 qpair failed and we were unable to recover it. 00:36:48.645 [2024-11-17 09:36:53.407642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.645 [2024-11-17 09:36:53.407678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.645 qpair failed and we were unable to recover it. 00:36:48.645 [2024-11-17 09:36:53.407837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.645 [2024-11-17 09:36:53.407871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.645 qpair failed and we were unable to recover it. 00:36:48.645 [2024-11-17 09:36:53.408011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.645 [2024-11-17 09:36:53.408045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.645 qpair failed and we were unable to recover it. 00:36:48.645 [2024-11-17 09:36:53.408181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.645 [2024-11-17 09:36:53.408215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.645 qpair failed and we were unable to recover it. 00:36:48.645 [2024-11-17 09:36:53.408352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.645 [2024-11-17 09:36:53.408398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.645 qpair failed and we were unable to recover it. 00:36:48.645 [2024-11-17 09:36:53.408514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.645 [2024-11-17 09:36:53.408550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.645 qpair failed and we were unable to recover it. 
00:36:48.645 [2024-11-17 09:36:53.408657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.645 [2024-11-17 09:36:53.408691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.645 qpair failed and we were unable to recover it. 00:36:48.645 [2024-11-17 09:36:53.408816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.645 [2024-11-17 09:36:53.408850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.645 qpair failed and we were unable to recover it. 00:36:48.645 [2024-11-17 09:36:53.408956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.645 [2024-11-17 09:36:53.408990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.645 qpair failed and we were unable to recover it. 00:36:48.645 [2024-11-17 09:36:53.409133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.645 [2024-11-17 09:36:53.409166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.645 qpair failed and we were unable to recover it. 00:36:48.645 [2024-11-17 09:36:53.409278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.645 [2024-11-17 09:36:53.409316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.645 qpair failed and we were unable to recover it. 00:36:48.645 [2024-11-17 09:36:53.409426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.645 [2024-11-17 09:36:53.409462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.645 qpair failed and we were unable to recover it. 00:36:48.645 [2024-11-17 09:36:53.409605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.645 [2024-11-17 09:36:53.409638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.645 qpair failed and we were unable to recover it. 00:36:48.645 [2024-11-17 09:36:53.409750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.645 [2024-11-17 09:36:53.409784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.645 qpair failed and we were unable to recover it. 00:36:48.645 [2024-11-17 09:36:53.409952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.645 [2024-11-17 09:36:53.409986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.645 qpair failed and we were unable to recover it. 00:36:48.645 [2024-11-17 09:36:53.410118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.645 [2024-11-17 09:36:53.410156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.645 qpair failed and we were unable to recover it. 
00:36:48.645 [2024-11-17 09:36:53.410335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.645 [2024-11-17 09:36:53.410376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.645 qpair failed and we were unable to recover it. 00:36:48.645 [2024-11-17 09:36:53.410513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.646 [2024-11-17 09:36:53.410548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.646 qpair failed and we were unable to recover it. 00:36:48.646 [2024-11-17 09:36:53.410709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.646 [2024-11-17 09:36:53.410742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.646 qpair failed and we were unable to recover it. 00:36:48.646 [2024-11-17 09:36:53.410884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.646 [2024-11-17 09:36:53.410924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.646 qpair failed and we were unable to recover it. 00:36:48.646 [2024-11-17 09:36:53.411054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.646 [2024-11-17 09:36:53.411087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.646 qpair failed and we were unable to recover it. 00:36:48.646 [2024-11-17 09:36:53.411219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.646 [2024-11-17 09:36:53.411253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.646 qpair failed and we were unable to recover it. 00:36:48.646 [2024-11-17 09:36:53.411391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.646 [2024-11-17 09:36:53.411426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.646 qpair failed and we were unable to recover it. 00:36:48.646 [2024-11-17 09:36:53.411594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.646 [2024-11-17 09:36:53.411628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.646 qpair failed and we were unable to recover it. 00:36:48.646 [2024-11-17 09:36:53.411743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.646 [2024-11-17 09:36:53.411777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.646 qpair failed and we were unable to recover it. 00:36:48.646 [2024-11-17 09:36:53.411882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.646 [2024-11-17 09:36:53.411916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.646 qpair failed and we were unable to recover it. 
00:36:48.646 [2024-11-17 09:36:53.412051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.646 [2024-11-17 09:36:53.412084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.646 qpair failed and we were unable to recover it. 00:36:48.646 [2024-11-17 09:36:53.412247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.646 [2024-11-17 09:36:53.412280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.646 qpair failed and we were unable to recover it. 00:36:48.646 [2024-11-17 09:36:53.412449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.646 [2024-11-17 09:36:53.412483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.646 qpair failed and we were unable to recover it. 00:36:48.646 [2024-11-17 09:36:53.412627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.646 [2024-11-17 09:36:53.412662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.646 qpair failed and we were unable to recover it. 00:36:48.646 [2024-11-17 09:36:53.412803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.646 [2024-11-17 09:36:53.412836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.646 qpair failed and we were unable to recover it. 00:36:48.646 [2024-11-17 09:36:53.412975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.646 [2024-11-17 09:36:53.413009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.646 qpair failed and we were unable to recover it. 00:36:48.646 [2024-11-17 09:36:53.413149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.646 [2024-11-17 09:36:53.413183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.646 qpair failed and we were unable to recover it. 00:36:48.646 [2024-11-17 09:36:53.413322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.646 [2024-11-17 09:36:53.413356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.646 qpair failed and we were unable to recover it. 00:36:48.646 [2024-11-17 09:36:53.413469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.646 [2024-11-17 09:36:53.413502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.646 qpair failed and we were unable to recover it. 00:36:48.646 [2024-11-17 09:36:53.413637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.646 [2024-11-17 09:36:53.413670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.646 qpair failed and we were unable to recover it. 
00:36:48.646 [2024-11-17 09:36:53.413805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.646 [2024-11-17 09:36:53.413838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.646 qpair failed and we were unable to recover it. 00:36:48.646 [2024-11-17 09:36:53.413987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.646 [2024-11-17 09:36:53.414021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.646 qpair failed and we were unable to recover it. 00:36:48.646 [2024-11-17 09:36:53.414166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.646 [2024-11-17 09:36:53.414201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.646 qpair failed and we were unable to recover it. 00:36:48.646 [2024-11-17 09:36:53.414313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.646 [2024-11-17 09:36:53.414346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.646 qpair failed and we were unable to recover it. 00:36:48.646 [2024-11-17 09:36:53.414461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.646 [2024-11-17 09:36:53.414495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.646 qpair failed and we were unable to recover it. 00:36:48.646 [2024-11-17 09:36:53.414609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.646 [2024-11-17 09:36:53.414656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.646 qpair failed and we were unable to recover it. 00:36:48.646 [2024-11-17 09:36:53.414793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.646 [2024-11-17 09:36:53.414827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.646 qpair failed and we were unable to recover it. 00:36:48.646 [2024-11-17 09:36:53.414990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.646 [2024-11-17 09:36:53.415024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.646 qpair failed and we were unable to recover it. 00:36:48.646 [2024-11-17 09:36:53.415166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.646 [2024-11-17 09:36:53.415199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.646 qpair failed and we were unable to recover it. 00:36:48.646 [2024-11-17 09:36:53.415310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.646 [2024-11-17 09:36:53.415344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.646 qpair failed and we were unable to recover it. 
00:36:48.646 [2024-11-17 09:36:53.415488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.646 [2024-11-17 09:36:53.415523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.646 qpair failed and we were unable to recover it. 00:36:48.646 [2024-11-17 09:36:53.415691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.646 [2024-11-17 09:36:53.415727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.646 qpair failed and we were unable to recover it. 00:36:48.646 [2024-11-17 09:36:53.415889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.646 [2024-11-17 09:36:53.415923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.646 qpair failed and we were unable to recover it. 00:36:48.646 [2024-11-17 09:36:53.416055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.646 [2024-11-17 09:36:53.416088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.646 qpair failed and we were unable to recover it. 00:36:48.646 [2024-11-17 09:36:53.416251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.646 [2024-11-17 09:36:53.416292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.646 qpair failed and we were unable to recover it. 00:36:48.646 [2024-11-17 09:36:53.416425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.646 [2024-11-17 09:36:53.416459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.646 qpair failed and we were unable to recover it. 00:36:48.646 [2024-11-17 09:36:53.416591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.646 [2024-11-17 09:36:53.416624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.646 qpair failed and we were unable to recover it. 00:36:48.646 [2024-11-17 09:36:53.416762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.646 [2024-11-17 09:36:53.416797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.646 qpair failed and we were unable to recover it. 00:36:48.646 [2024-11-17 09:36:53.416936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.646 [2024-11-17 09:36:53.416970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.646 qpair failed and we were unable to recover it. 00:36:48.646 [2024-11-17 09:36:53.417097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.646 [2024-11-17 09:36:53.417135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.646 qpair failed and we were unable to recover it. 
00:36:48.646 [2024-11-17 09:36:53.417250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:48.646 [2024-11-17 09:36:53.417284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:36:48.646 qpair failed and we were unable to recover it.
00:36:48.646 [2024-11-17 09:36:53.418175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:48.646 [2024-11-17 09:36:53.418215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:36:48.646 qpair failed and we were unable to recover it.
00:36:48.648 [... the same three-line pattern (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error; "qpair failed and we were unable to recover it.") repeats continuously from 09:36:53.417250 through 09:36:53.452669, alternating between tqpair=0x6150001ffe80 and tqpair=0x6150001f2f00, always against addr=10.0.0.2, port=4420 ...]
00:36:48.648 [2024-11-17 09:36:53.452778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.648 [2024-11-17 09:36:53.452811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.648 qpair failed and we were unable to recover it. 00:36:48.648 [2024-11-17 09:36:53.452951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.648 [2024-11-17 09:36:53.452984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.648 qpair failed and we were unable to recover it. 00:36:48.648 [2024-11-17 09:36:53.453095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.648 [2024-11-17 09:36:53.453128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.648 qpair failed and we were unable to recover it. 00:36:48.648 [2024-11-17 09:36:53.453232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.648 [2024-11-17 09:36:53.453264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.648 qpair failed and we were unable to recover it. 00:36:48.648 [2024-11-17 09:36:53.453411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.648 [2024-11-17 09:36:53.453445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.648 qpair failed and we were unable to recover it. 00:36:48.648 [2024-11-17 09:36:53.453579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.648 [2024-11-17 09:36:53.453611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.648 qpair failed and we were unable to recover it. 00:36:48.648 [2024-11-17 09:36:53.453726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.648 [2024-11-17 09:36:53.453759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.648 qpair failed and we were unable to recover it. 00:36:48.648 [2024-11-17 09:36:53.453892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.648 [2024-11-17 09:36:53.453925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.648 qpair failed and we were unable to recover it. 00:36:48.648 [2024-11-17 09:36:53.454059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.648 [2024-11-17 09:36:53.454102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.648 qpair failed and we were unable to recover it. 00:36:48.648 [2024-11-17 09:36:53.454202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.648 [2024-11-17 09:36:53.454236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.648 qpair failed and we were unable to recover it. 
00:36:48.648 [2024-11-17 09:36:53.454400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.649 [2024-11-17 09:36:53.454434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.649 qpair failed and we were unable to recover it. 00:36:48.649 [2024-11-17 09:36:53.454549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.649 [2024-11-17 09:36:53.454583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.649 qpair failed and we were unable to recover it. 00:36:48.649 [2024-11-17 09:36:53.454746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.649 [2024-11-17 09:36:53.454779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.649 qpair failed and we were unable to recover it. 00:36:48.649 [2024-11-17 09:36:53.454939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.649 [2024-11-17 09:36:53.454972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.649 qpair failed and we were unable to recover it. 00:36:48.649 [2024-11-17 09:36:53.455111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.649 [2024-11-17 09:36:53.455144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.649 qpair failed and we were unable to recover it. 00:36:48.649 [2024-11-17 09:36:53.455306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.649 [2024-11-17 09:36:53.455339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.649 qpair failed and we were unable to recover it. 00:36:48.649 [2024-11-17 09:36:53.455492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.649 [2024-11-17 09:36:53.455540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:48.649 qpair failed and we were unable to recover it. 00:36:48.649 [2024-11-17 09:36:53.455714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.649 [2024-11-17 09:36:53.455757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:48.649 qpair failed and we were unable to recover it. 00:36:48.649 [2024-11-17 09:36:53.455988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.649 [2024-11-17 09:36:53.456049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:48.649 qpair failed and we were unable to recover it. 00:36:48.649 [2024-11-17 09:36:53.456247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.649 [2024-11-17 09:36:53.456309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:48.649 qpair failed and we were unable to recover it. 
00:36:48.649 [2024-11-17 09:36:53.456437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.649 [2024-11-17 09:36:53.456471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.649 qpair failed and we were unable to recover it. 00:36:48.649 [2024-11-17 09:36:53.456607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.649 [2024-11-17 09:36:53.456640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.649 qpair failed and we were unable to recover it. 00:36:48.649 [2024-11-17 09:36:53.456744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.649 [2024-11-17 09:36:53.456777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.649 qpair failed and we were unable to recover it. 00:36:48.649 [2024-11-17 09:36:53.456925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.649 [2024-11-17 09:36:53.456958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.649 qpair failed and we were unable to recover it. 00:36:48.649 [2024-11-17 09:36:53.457123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.649 [2024-11-17 09:36:53.457160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.649 qpair failed and we were unable to recover it. 00:36:48.649 [2024-11-17 09:36:53.457278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.649 [2024-11-17 09:36:53.457312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.649 qpair failed and we were unable to recover it. 00:36:48.649 [2024-11-17 09:36:53.457444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.649 [2024-11-17 09:36:53.457492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.649 qpair failed and we were unable to recover it. 00:36:48.649 [2024-11-17 09:36:53.457624] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2780 is same with the state(6) to be set 00:36:48.649 [2024-11-17 09:36:53.457831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.649 [2024-11-17 09:36:53.457892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:48.649 qpair failed and we were unable to recover it. 00:36:48.649 [2024-11-17 09:36:53.458088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.649 [2024-11-17 09:36:53.458147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:48.649 qpair failed and we were unable to recover it. 
00:36:48.649 [2024-11-17 09:36:53.458310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.649 [2024-11-17 09:36:53.458343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.649 qpair failed and we were unable to recover it. 00:36:48.649 [2024-11-17 09:36:53.458514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.649 [2024-11-17 09:36:53.458548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.649 qpair failed and we were unable to recover it. 00:36:48.649 [2024-11-17 09:36:53.458667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.649 [2024-11-17 09:36:53.458701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.649 qpair failed and we were unable to recover it. 00:36:48.649 [2024-11-17 09:36:53.458823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.649 [2024-11-17 09:36:53.458859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.649 qpair failed and we were unable to recover it. 00:36:48.649 [2024-11-17 09:36:53.459031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.649 [2024-11-17 09:36:53.459067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.649 qpair failed and we were unable to recover it. 00:36:48.649 [2024-11-17 09:36:53.459212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.649 [2024-11-17 09:36:53.459249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.649 qpair failed and we were unable to recover it. 00:36:48.649 [2024-11-17 09:36:53.459432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.649 [2024-11-17 09:36:53.459466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.649 qpair failed and we were unable to recover it. 00:36:48.649 [2024-11-17 09:36:53.459627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.649 [2024-11-17 09:36:53.459661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.649 qpair failed and we were unable to recover it. 00:36:48.649 [2024-11-17 09:36:53.459801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.649 [2024-11-17 09:36:53.459833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.649 qpair failed and we were unable to recover it. 00:36:48.649 [2024-11-17 09:36:53.459950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.649 [2024-11-17 09:36:53.459987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.649 qpair failed and we were unable to recover it. 
00:36:48.649 [2024-11-17 09:36:53.460131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.649 [2024-11-17 09:36:53.460169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.649 qpair failed and we were unable to recover it. 00:36:48.649 [2024-11-17 09:36:53.460291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.649 [2024-11-17 09:36:53.460340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.649 qpair failed and we were unable to recover it. 00:36:48.649 [2024-11-17 09:36:53.460488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.649 [2024-11-17 09:36:53.460521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.649 qpair failed and we were unable to recover it. 00:36:48.649 [2024-11-17 09:36:53.460660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.649 [2024-11-17 09:36:53.460698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.649 qpair failed and we were unable to recover it. 00:36:48.649 [2024-11-17 09:36:53.460814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.649 [2024-11-17 09:36:53.460866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.649 qpair failed and we were unable to recover it. 00:36:48.649 [2024-11-17 09:36:53.461020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.649 [2024-11-17 09:36:53.461057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.649 qpair failed and we were unable to recover it. 00:36:48.649 [2024-11-17 09:36:53.461213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.649 [2024-11-17 09:36:53.461249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.649 qpair failed and we were unable to recover it. 00:36:48.649 [2024-11-17 09:36:53.461395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.649 [2024-11-17 09:36:53.461446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.649 qpair failed and we were unable to recover it. 00:36:48.649 [2024-11-17 09:36:53.461549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.649 [2024-11-17 09:36:53.461582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.649 qpair failed and we were unable to recover it. 00:36:48.649 [2024-11-17 09:36:53.461715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.649 [2024-11-17 09:36:53.461748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.649 qpair failed and we were unable to recover it. 
00:36:48.649 [2024-11-17 09:36:53.461858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.649 [2024-11-17 09:36:53.461891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.649 qpair failed and we were unable to recover it. 00:36:48.649 [2024-11-17 09:36:53.462038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.649 [2024-11-17 09:36:53.462075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.649 qpair failed and we were unable to recover it. 00:36:48.649 [2024-11-17 09:36:53.462231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.649 [2024-11-17 09:36:53.462268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.649 qpair failed and we were unable to recover it. 00:36:48.649 [2024-11-17 09:36:53.462413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.649 [2024-11-17 09:36:53.462447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.649 qpair failed and we were unable to recover it. 00:36:48.649 [2024-11-17 09:36:53.462571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.649 [2024-11-17 09:36:53.462609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.649 qpair failed and we were unable to recover it. 00:36:48.649 [2024-11-17 09:36:53.462716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.649 [2024-11-17 09:36:53.462754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.649 qpair failed and we were unable to recover it. 00:36:48.649 [2024-11-17 09:36:53.462882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.649 [2024-11-17 09:36:53.462919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.649 qpair failed and we were unable to recover it. 00:36:48.649 [2024-11-17 09:36:53.463066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.649 [2024-11-17 09:36:53.463104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.649 qpair failed and we were unable to recover it. 00:36:48.649 [2024-11-17 09:36:53.463240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.649 [2024-11-17 09:36:53.463289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.649 qpair failed and we were unable to recover it. 00:36:48.649 [2024-11-17 09:36:53.463424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.649 [2024-11-17 09:36:53.463457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.649 qpair failed and we were unable to recover it. 
00:36:48.649 [2024-11-17 09:36:53.463581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.649 [2024-11-17 09:36:53.463614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.649 qpair failed and we were unable to recover it. 00:36:48.649 [2024-11-17 09:36:53.463718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.649 [2024-11-17 09:36:53.463751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.649 qpair failed and we were unable to recover it. 00:36:48.649 [2024-11-17 09:36:53.463895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.649 [2024-11-17 09:36:53.463929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.649 qpair failed and we were unable to recover it. 00:36:48.649 [2024-11-17 09:36:53.464062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.649 [2024-11-17 09:36:53.464099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.649 qpair failed and we were unable to recover it. 00:36:48.649 [2024-11-17 09:36:53.464212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.649 [2024-11-17 09:36:53.464248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.649 qpair failed and we were unable to recover it. 00:36:48.649 [2024-11-17 09:36:53.464411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.649 [2024-11-17 09:36:53.464445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.649 qpair failed and we were unable to recover it. 00:36:48.649 [2024-11-17 09:36:53.464567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.649 [2024-11-17 09:36:53.464604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.649 qpair failed and we were unable to recover it. 00:36:48.649 [2024-11-17 09:36:53.464712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.649 [2024-11-17 09:36:53.464748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.649 qpair failed and we were unable to recover it. 00:36:48.649 [2024-11-17 09:36:53.464888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.649 [2024-11-17 09:36:53.464921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.649 qpair failed and we were unable to recover it. 00:36:48.649 [2024-11-17 09:36:53.465046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.649 [2024-11-17 09:36:53.465083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.649 qpair failed and we were unable to recover it. 
00:36:48.649 [2024-11-17 09:36:53.465207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.649 [2024-11-17 09:36:53.465243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.649 qpair failed and we were unable to recover it. 00:36:48.649 [2024-11-17 09:36:53.465345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.649 [2024-11-17 09:36:53.465390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.649 qpair failed and we were unable to recover it. 00:36:48.649 [2024-11-17 09:36:53.465548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.649 [2024-11-17 09:36:53.465581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.649 qpair failed and we were unable to recover it. 00:36:48.649 [2024-11-17 09:36:53.465716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.649 [2024-11-17 09:36:53.465749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.649 qpair failed and we were unable to recover it. 00:36:48.649 [2024-11-17 09:36:53.465847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.649 [2024-11-17 09:36:53.465880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.649 qpair failed and we were unable to recover it. 00:36:48.649 [2024-11-17 09:36:53.466037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.649 [2024-11-17 09:36:53.466074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.649 qpair failed and we were unable to recover it. 00:36:48.649 [2024-11-17 09:36:53.466205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.649 [2024-11-17 09:36:53.466241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.649 qpair failed and we were unable to recover it. 00:36:48.649 [2024-11-17 09:36:53.466411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.649 [2024-11-17 09:36:53.466445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.649 qpair failed and we were unable to recover it. 00:36:48.649 [2024-11-17 09:36:53.466557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.649 [2024-11-17 09:36:53.466590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.649 qpair failed and we were unable to recover it. 00:36:48.649 [2024-11-17 09:36:53.466723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.649 [2024-11-17 09:36:53.466756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.649 qpair failed and we were unable to recover it. 
00:36:48.649 [2024-11-17 09:36:53.466895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.649 [2024-11-17 09:36:53.466928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.649 qpair failed and we were unable to recover it. 00:36:48.649 [2024-11-17 09:36:53.467053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.649 [2024-11-17 09:36:53.467090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.649 qpair failed and we were unable to recover it. 00:36:48.649 [2024-11-17 09:36:53.467240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.649 [2024-11-17 09:36:53.467290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.649 qpair failed and we were unable to recover it. 00:36:48.649 [2024-11-17 09:36:53.467411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.649 [2024-11-17 09:36:53.467450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.649 qpair failed and we were unable to recover it. 00:36:48.649 [2024-11-17 09:36:53.467561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.649 [2024-11-17 09:36:53.467595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.649 qpair failed and we were unable to recover it. 00:36:48.649 [2024-11-17 09:36:53.467721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.649 [2024-11-17 09:36:53.467755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.649 qpair failed and we were unable to recover it. 00:36:48.649 [2024-11-17 09:36:53.467866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.649 [2024-11-17 09:36:53.467899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.649 qpair failed and we were unable to recover it. 00:36:48.649 [2024-11-17 09:36:53.468055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.649 [2024-11-17 09:36:53.468092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.649 qpair failed and we were unable to recover it. 00:36:48.649 [2024-11-17 09:36:53.468238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.649 [2024-11-17 09:36:53.468290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.649 qpair failed and we were unable to recover it. 00:36:48.649 [2024-11-17 09:36:53.468426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.649 [2024-11-17 09:36:53.468460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.649 qpair failed and we were unable to recover it. 
00:36:48.649 [2024-11-17 09:36:53.468596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.649 [2024-11-17 09:36:53.468629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.650 qpair failed and we were unable to recover it. 00:36:48.650 [2024-11-17 09:36:53.468749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.650 [2024-11-17 09:36:53.468800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.650 qpair failed and we were unable to recover it. 00:36:48.650 [2024-11-17 09:36:53.468951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.650 [2024-11-17 09:36:53.468989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.650 qpair failed and we were unable to recover it. 00:36:48.650 [2024-11-17 09:36:53.469141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.650 [2024-11-17 09:36:53.469177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.650 qpair failed and we were unable to recover it. 00:36:48.650 [2024-11-17 09:36:53.469319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.650 [2024-11-17 09:36:53.469374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.650 qpair failed and we were unable to recover it. 00:36:48.650 [2024-11-17 09:36:53.469488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.650 [2024-11-17 09:36:53.469521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.650 qpair failed and we were unable to recover it. 00:36:48.650 [2024-11-17 09:36:53.469626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.650 [2024-11-17 09:36:53.469659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.650 qpair failed and we were unable to recover it. 00:36:48.650 [2024-11-17 09:36:53.469802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.650 [2024-11-17 09:36:53.469835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.650 qpair failed and we were unable to recover it. 00:36:48.650 [2024-11-17 09:36:53.469937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.650 [2024-11-17 09:36:53.469970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.650 qpair failed and we were unable to recover it. 00:36:48.650 [2024-11-17 09:36:53.470079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.650 [2024-11-17 09:36:53.470115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.650 qpair failed and we were unable to recover it. 
00:36:48.650 [2024-11-17 09:36:53.470246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.650 [2024-11-17 09:36:53.470284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.650 qpair failed and we were unable to recover it. 00:36:48.650 [2024-11-17 09:36:53.470445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.650 [2024-11-17 09:36:53.470478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.650 qpair failed and we were unable to recover it. 00:36:48.650 [2024-11-17 09:36:53.470587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.650 [2024-11-17 09:36:53.470638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.650 qpair failed and we were unable to recover it. 00:36:48.650 [2024-11-17 09:36:53.470804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.650 [2024-11-17 09:36:53.470840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.650 qpair failed and we were unable to recover it. 00:36:48.650 [2024-11-17 09:36:53.471037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.650 [2024-11-17 09:36:53.471073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.650 qpair failed and we were unable to recover it. 00:36:48.650 [2024-11-17 09:36:53.471276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.650 [2024-11-17 09:36:53.471314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.650 qpair failed and we were unable to recover it. 00:36:48.650 [2024-11-17 09:36:53.471475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.650 [2024-11-17 09:36:53.471508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.650 qpair failed and we were unable to recover it. 00:36:48.650 [2024-11-17 09:36:53.471612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.650 [2024-11-17 09:36:53.471644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.650 qpair failed and we were unable to recover it. 00:36:48.650 [2024-11-17 09:36:53.471812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.650 [2024-11-17 09:36:53.471847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.650 qpair failed and we were unable to recover it. 00:36:48.650 [2024-11-17 09:36:53.471985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.650 [2024-11-17 09:36:53.472020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.650 qpair failed and we were unable to recover it. 
00:36:48.650 [2024-11-17 09:36:53.472151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.650 [2024-11-17 09:36:53.472186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.650 qpair failed and we were unable to recover it. 00:36:48.650 [2024-11-17 09:36:53.472365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.650 [2024-11-17 09:36:53.472407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.650 qpair failed and we were unable to recover it. 00:36:48.650 [2024-11-17 09:36:53.472525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.650 [2024-11-17 09:36:53.472558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.650 qpair failed and we were unable to recover it. 00:36:48.650 [2024-11-17 09:36:53.472686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.650 [2024-11-17 09:36:53.472718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.650 qpair failed and we were unable to recover it. 00:36:48.650 [2024-11-17 09:36:53.472835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.650 [2024-11-17 09:36:53.472872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.650 qpair failed and we were unable to recover it. 00:36:48.650 [2024-11-17 09:36:53.472996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.650 [2024-11-17 09:36:53.473032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.650 qpair failed and we were unable to recover it. 00:36:48.650 [2024-11-17 09:36:53.473179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.650 [2024-11-17 09:36:53.473215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.650 qpair failed and we were unable to recover it. 00:36:48.650 [2024-11-17 09:36:53.473396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.650 [2024-11-17 09:36:53.473431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.650 qpair failed and we were unable to recover it. 00:36:48.650 [2024-11-17 09:36:53.473555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.650 [2024-11-17 09:36:53.473587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.650 qpair failed and we were unable to recover it. 00:36:48.650 [2024-11-17 09:36:53.473736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.650 [2024-11-17 09:36:53.473772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.650 qpair failed and we were unable to recover it. 
00:36:48.650 [2024-11-17 09:36:53.473897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.650 [2024-11-17 09:36:53.473934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.650 qpair failed and we were unable to recover it. 00:36:48.650 [2024-11-17 09:36:53.474082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.650 [2024-11-17 09:36:53.474118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.650 qpair failed and we were unable to recover it. 00:36:48.650 [2024-11-17 09:36:53.474239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.650 [2024-11-17 09:36:53.474275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.650 qpair failed and we were unable to recover it. 00:36:48.650 [2024-11-17 09:36:53.474394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.650 [2024-11-17 09:36:53.474432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.650 qpair failed and we were unable to recover it. 00:36:48.650 [2024-11-17 09:36:53.474564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.650 [2024-11-17 09:36:53.474597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.650 qpair failed and we were unable to recover it. 00:36:48.650 [2024-11-17 09:36:53.474708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.650 [2024-11-17 09:36:53.474759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.650 qpair failed and we were unable to recover it. 00:36:48.650 [2024-11-17 09:36:53.474872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.650 [2024-11-17 09:36:53.474908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.650 qpair failed and we were unable to recover it. 00:36:48.650 [2024-11-17 09:36:53.475102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.650 [2024-11-17 09:36:53.475137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.650 qpair failed and we were unable to recover it. 00:36:48.650 [2024-11-17 09:36:53.475259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.650 [2024-11-17 09:36:53.475293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.650 qpair failed and we were unable to recover it. 00:36:48.650 [2024-11-17 09:36:53.475500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.650 [2024-11-17 09:36:53.475534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.650 qpair failed and we were unable to recover it. 
00:36:48.650 [2024-11-17 09:36:53.475672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:48.650 [2024-11-17 09:36:53.475705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:36:48.650 qpair failed and we were unable to recover it.
00:36:48.652 [the same connect() / sock-connection-error / qpair-failed triplet repeats for every reconnect attempt logged between 09:36:53.475 and 09:36:53.511, differing only in the microsecond timestamp; besides tqpair=0x6150001f2f00 the identical failure is also reported for tqpair=0x6150001ffe80 and tqpair=0x615000210000, always against addr=10.0.0.2, port=4420]
00:36:48.652 [2024-11-17 09:36:53.511670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.652 [2024-11-17 09:36:53.511703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.652 qpair failed and we were unable to recover it. 00:36:48.652 [2024-11-17 09:36:53.511827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.652 [2024-11-17 09:36:53.511862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.652 qpair failed and we were unable to recover it. 00:36:48.652 [2024-11-17 09:36:53.512008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.652 [2024-11-17 09:36:53.512044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.652 qpair failed and we were unable to recover it. 00:36:48.652 [2024-11-17 09:36:53.512193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.652 [2024-11-17 09:36:53.512232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.652 qpair failed and we were unable to recover it. 00:36:48.652 [2024-11-17 09:36:53.512365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.652 [2024-11-17 09:36:53.512432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.652 qpair failed and we were unable to recover it. 00:36:48.652 [2024-11-17 09:36:53.512548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.652 [2024-11-17 09:36:53.512583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.652 qpair failed and we were unable to recover it. 00:36:48.652 [2024-11-17 09:36:53.512742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.652 [2024-11-17 09:36:53.512775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.652 qpair failed and we were unable to recover it. 00:36:48.652 [2024-11-17 09:36:53.512884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.652 [2024-11-17 09:36:53.512917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.652 qpair failed and we were unable to recover it. 00:36:48.652 [2024-11-17 09:36:53.513105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.652 [2024-11-17 09:36:53.513143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.652 qpair failed and we were unable to recover it. 00:36:48.652 [2024-11-17 09:36:53.513274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.652 [2024-11-17 09:36:53.513312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.652 qpair failed and we were unable to recover it. 
00:36:48.652 [2024-11-17 09:36:53.513457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.652 [2024-11-17 09:36:53.513490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.652 qpair failed and we were unable to recover it. 00:36:48.653 [2024-11-17 09:36:53.513624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.653 [2024-11-17 09:36:53.513675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.653 qpair failed and we were unable to recover it. 00:36:48.653 [2024-11-17 09:36:53.513864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.653 [2024-11-17 09:36:53.513901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.653 qpair failed and we were unable to recover it. 00:36:48.653 [2024-11-17 09:36:53.514039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.653 [2024-11-17 09:36:53.514076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.653 qpair failed and we were unable to recover it. 00:36:48.653 [2024-11-17 09:36:53.514340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.653 [2024-11-17 09:36:53.514382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.653 qpair failed and we were unable to recover it. 00:36:48.653 [2024-11-17 09:36:53.514498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.653 [2024-11-17 09:36:53.514531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.653 qpair failed and we were unable to recover it. 00:36:48.653 [2024-11-17 09:36:53.514641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.653 [2024-11-17 09:36:53.514674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.653 qpair failed and we were unable to recover it. 00:36:48.653 [2024-11-17 09:36:53.514831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.653 [2024-11-17 09:36:53.514867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.653 qpair failed and we were unable to recover it. 00:36:48.653 [2024-11-17 09:36:53.515031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.653 [2024-11-17 09:36:53.515066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.653 qpair failed and we were unable to recover it. 00:36:48.653 [2024-11-17 09:36:53.515203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.653 [2024-11-17 09:36:53.515239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.653 qpair failed and we were unable to recover it. 
00:36:48.653 [2024-11-17 09:36:53.515375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.653 [2024-11-17 09:36:53.515409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.653 qpair failed and we were unable to recover it. 00:36:48.653 [2024-11-17 09:36:53.515552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.653 [2024-11-17 09:36:53.515585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.653 qpair failed and we were unable to recover it. 00:36:48.653 [2024-11-17 09:36:53.515693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.653 [2024-11-17 09:36:53.515726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.653 qpair failed and we were unable to recover it. 00:36:48.653 [2024-11-17 09:36:53.515897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.653 [2024-11-17 09:36:53.515939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.653 qpair failed and we were unable to recover it. 00:36:48.653 [2024-11-17 09:36:53.516062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.653 [2024-11-17 09:36:53.516100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.653 qpair failed and we were unable to recover it. 00:36:48.653 [2024-11-17 09:36:53.516243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.653 [2024-11-17 09:36:53.516279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.653 qpair failed and we were unable to recover it. 00:36:48.653 [2024-11-17 09:36:53.516405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.653 [2024-11-17 09:36:53.516439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.653 qpair failed and we were unable to recover it. 00:36:48.653 [2024-11-17 09:36:53.516550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.653 [2024-11-17 09:36:53.516583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.653 qpair failed and we were unable to recover it. 00:36:48.653 [2024-11-17 09:36:53.516719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.653 [2024-11-17 09:36:53.516751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.653 qpair failed and we were unable to recover it. 00:36:48.653 [2024-11-17 09:36:53.516902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.653 [2024-11-17 09:36:53.516938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.653 qpair failed and we were unable to recover it. 
00:36:48.653 [2024-11-17 09:36:53.517073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.653 [2024-11-17 09:36:53.517122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.653 qpair failed and we were unable to recover it. 00:36:48.653 [2024-11-17 09:36:53.517284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.653 [2024-11-17 09:36:53.517320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.653 qpair failed and we were unable to recover it. 00:36:48.653 [2024-11-17 09:36:53.517455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.653 [2024-11-17 09:36:53.517488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.653 qpair failed and we were unable to recover it. 00:36:48.653 [2024-11-17 09:36:53.517625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.653 [2024-11-17 09:36:53.517659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.653 qpair failed and we were unable to recover it. 00:36:48.653 [2024-11-17 09:36:53.517783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.653 [2024-11-17 09:36:53.517821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.653 qpair failed and we were unable to recover it. 00:36:48.653 [2024-11-17 09:36:53.517967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.653 [2024-11-17 09:36:53.518004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.653 qpair failed and we were unable to recover it. 00:36:48.653 [2024-11-17 09:36:53.518128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.653 [2024-11-17 09:36:53.518165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.653 qpair failed and we were unable to recover it. 00:36:48.653 [2024-11-17 09:36:53.518297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.653 [2024-11-17 09:36:53.518333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.653 qpair failed and we were unable to recover it. 00:36:48.653 [2024-11-17 09:36:53.518497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.653 [2024-11-17 09:36:53.518543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:48.653 qpair failed and we were unable to recover it. 00:36:48.653 [2024-11-17 09:36:53.518759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.653 [2024-11-17 09:36:53.518817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:48.653 qpair failed and we were unable to recover it. 
00:36:48.653 [2024-11-17 09:36:53.519001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.653 [2024-11-17 09:36:53.519060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:48.653 qpair failed and we were unable to recover it. 00:36:48.653 [2024-11-17 09:36:53.519197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.653 [2024-11-17 09:36:53.519234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.653 qpair failed and we were unable to recover it. 00:36:48.653 [2024-11-17 09:36:53.519380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.653 [2024-11-17 09:36:53.519430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.653 qpair failed and we were unable to recover it. 00:36:48.653 [2024-11-17 09:36:53.519549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.653 [2024-11-17 09:36:53.519582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.653 qpair failed and we were unable to recover it. 00:36:48.653 [2024-11-17 09:36:53.519710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.653 [2024-11-17 09:36:53.519746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.653 qpair failed and we were unable to recover it. 00:36:48.653 [2024-11-17 09:36:53.519867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.653 [2024-11-17 09:36:53.519920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.653 qpair failed and we were unable to recover it. 00:36:48.653 [2024-11-17 09:36:53.520077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.653 [2024-11-17 09:36:53.520113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.653 qpair failed and we were unable to recover it. 00:36:48.653 [2024-11-17 09:36:53.520238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.653 [2024-11-17 09:36:53.520270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.653 qpair failed and we were unable to recover it. 00:36:48.653 [2024-11-17 09:36:53.520430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.653 [2024-11-17 09:36:53.520479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.653 qpair failed and we were unable to recover it. 00:36:48.653 [2024-11-17 09:36:53.520594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.653 [2024-11-17 09:36:53.520632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.653 qpair failed and we were unable to recover it. 
00:36:48.653 [2024-11-17 09:36:53.520810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.653 [2024-11-17 09:36:53.520870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:48.653 qpair failed and we were unable to recover it. 00:36:48.653 [2024-11-17 09:36:53.521010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.653 [2024-11-17 09:36:53.521048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.653 qpair failed and we were unable to recover it. 00:36:48.653 [2024-11-17 09:36:53.521187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.653 [2024-11-17 09:36:53.521224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.653 qpair failed and we were unable to recover it. 00:36:48.653 [2024-11-17 09:36:53.521339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.653 [2024-11-17 09:36:53.521384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.653 qpair failed and we were unable to recover it. 00:36:48.653 [2024-11-17 09:36:53.521541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.653 [2024-11-17 09:36:53.521574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.653 qpair failed and we were unable to recover it. 00:36:48.653 [2024-11-17 09:36:53.521682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.653 [2024-11-17 09:36:53.521715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.653 qpair failed and we were unable to recover it. 00:36:48.653 [2024-11-17 09:36:53.521819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.653 [2024-11-17 09:36:53.521869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.653 qpair failed and we were unable to recover it. 00:36:48.653 [2024-11-17 09:36:53.522021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.653 [2024-11-17 09:36:53.522055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.653 qpair failed and we were unable to recover it. 00:36:48.653 [2024-11-17 09:36:53.522193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.653 [2024-11-17 09:36:53.522229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.653 qpair failed and we were unable to recover it. 00:36:48.653 [2024-11-17 09:36:53.522423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.653 [2024-11-17 09:36:53.522457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.653 qpair failed and we were unable to recover it. 
00:36:48.653 [2024-11-17 09:36:53.522579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.653 [2024-11-17 09:36:53.522616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.653 qpair failed and we were unable to recover it. 00:36:48.653 [2024-11-17 09:36:53.522765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.653 [2024-11-17 09:36:53.522802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.653 qpair failed and we were unable to recover it. 00:36:48.653 [2024-11-17 09:36:53.522945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.653 [2024-11-17 09:36:53.522983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.653 qpair failed and we were unable to recover it. 00:36:48.653 [2024-11-17 09:36:53.523130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.653 [2024-11-17 09:36:53.523168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.653 qpair failed and we were unable to recover it. 00:36:48.653 [2024-11-17 09:36:53.523342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.653 [2024-11-17 09:36:53.523393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.653 qpair failed and we were unable to recover it. 00:36:48.653 [2024-11-17 09:36:53.523497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.653 [2024-11-17 09:36:53.523530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.653 qpair failed and we were unable to recover it. 00:36:48.653 [2024-11-17 09:36:53.523643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.653 [2024-11-17 09:36:53.523676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.653 qpair failed and we were unable to recover it. 00:36:48.653 [2024-11-17 09:36:53.523781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.653 [2024-11-17 09:36:53.523815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.653 qpair failed and we were unable to recover it. 00:36:48.653 [2024-11-17 09:36:53.523948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.653 [2024-11-17 09:36:53.523981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.653 qpair failed and we were unable to recover it. 00:36:48.653 [2024-11-17 09:36:53.524122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.653 [2024-11-17 09:36:53.524160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.653 qpair failed and we were unable to recover it. 
00:36:48.653 [2024-11-17 09:36:53.524268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.653 [2024-11-17 09:36:53.524305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.653 qpair failed and we were unable to recover it. 00:36:48.653 [2024-11-17 09:36:53.524448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.653 [2024-11-17 09:36:53.524483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.653 qpair failed and we were unable to recover it. 00:36:48.653 [2024-11-17 09:36:53.524645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.653 [2024-11-17 09:36:53.524680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.653 qpair failed and we were unable to recover it. 00:36:48.653 [2024-11-17 09:36:53.524794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.653 [2024-11-17 09:36:53.524828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.653 qpair failed and we were unable to recover it. 00:36:48.653 [2024-11-17 09:36:53.524925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.653 [2024-11-17 09:36:53.524959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.653 qpair failed and we were unable to recover it. 00:36:48.653 [2024-11-17 09:36:53.525135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.653 [2024-11-17 09:36:53.525187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.653 qpair failed and we were unable to recover it. 00:36:48.653 [2024-11-17 09:36:53.525337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.653 [2024-11-17 09:36:53.525384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.653 qpair failed and we were unable to recover it. 00:36:48.653 [2024-11-17 09:36:53.525543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.653 [2024-11-17 09:36:53.525577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.653 qpair failed and we were unable to recover it. 00:36:48.653 [2024-11-17 09:36:53.525720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.653 [2024-11-17 09:36:53.525754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.653 qpair failed and we were unable to recover it. 00:36:48.653 [2024-11-17 09:36:53.525855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.653 [2024-11-17 09:36:53.525888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.653 qpair failed and we were unable to recover it. 
00:36:48.653 [2024-11-17 09:36:53.526006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.653 [2024-11-17 09:36:53.526039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.653 qpair failed and we were unable to recover it. 00:36:48.653 [2024-11-17 09:36:53.526192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.653 [2024-11-17 09:36:53.526229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.653 qpair failed and we were unable to recover it. 00:36:48.653 [2024-11-17 09:36:53.526395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.653 [2024-11-17 09:36:53.526430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.653 qpair failed and we were unable to recover it. 00:36:48.653 [2024-11-17 09:36:53.526540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.653 [2024-11-17 09:36:53.526573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.653 qpair failed and we were unable to recover it. 00:36:48.653 [2024-11-17 09:36:53.526678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.653 [2024-11-17 09:36:53.526711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.653 qpair failed and we were unable to recover it. 00:36:48.653 [2024-11-17 09:36:53.526815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.653 [2024-11-17 09:36:53.526848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.653 qpair failed and we were unable to recover it. 00:36:48.653 [2024-11-17 09:36:53.527059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.653 [2024-11-17 09:36:53.527096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.653 qpair failed and we were unable to recover it. 00:36:48.653 [2024-11-17 09:36:53.527322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.653 [2024-11-17 09:36:53.527359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.653 qpair failed and we were unable to recover it. 00:36:48.653 [2024-11-17 09:36:53.527545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.653 [2024-11-17 09:36:53.527579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.653 qpair failed and we were unable to recover it. 00:36:48.653 [2024-11-17 09:36:53.527685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.653 [2024-11-17 09:36:53.527718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.653 qpair failed and we were unable to recover it. 
00:36:48.653 [2024-11-17 09:36:53.527893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.653 [2024-11-17 09:36:53.527935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.653 qpair failed and we were unable to recover it. 00:36:48.653 [2024-11-17 09:36:53.528071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.653 [2024-11-17 09:36:53.528107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.653 qpair failed and we were unable to recover it. 00:36:48.653 [2024-11-17 09:36:53.528220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.653 [2024-11-17 09:36:53.528257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.653 qpair failed and we were unable to recover it. 00:36:48.653 [2024-11-17 09:36:53.528426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.653 [2024-11-17 09:36:53.528460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.653 qpair failed and we were unable to recover it. 00:36:48.653 [2024-11-17 09:36:53.528578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.653 [2024-11-17 09:36:53.528611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.653 qpair failed and we were unable to recover it. 00:36:48.653 [2024-11-17 09:36:53.528743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.653 [2024-11-17 09:36:53.528795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.653 qpair failed and we were unable to recover it. 00:36:48.654 [2024-11-17 09:36:53.528908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.654 [2024-11-17 09:36:53.528944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.654 qpair failed and we were unable to recover it. 00:36:48.654 [2024-11-17 09:36:53.529052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.654 [2024-11-17 09:36:53.529089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.654 qpair failed and we were unable to recover it. 00:36:48.654 [2024-11-17 09:36:53.529196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.654 [2024-11-17 09:36:53.529233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.654 qpair failed and we were unable to recover it. 00:36:48.654 [2024-11-17 09:36:53.529425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.654 [2024-11-17 09:36:53.529458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.654 qpair failed and we were unable to recover it. 
00:36:48.654 [2024-11-17 09:36:53.529557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.654 [2024-11-17 09:36:53.529590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.654 qpair failed and we were unable to recover it. 00:36:48.654 [2024-11-17 09:36:53.529744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.654 [2024-11-17 09:36:53.529781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.654 qpair failed and we were unable to recover it. 00:36:48.654 [2024-11-17 09:36:53.529928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.654 [2024-11-17 09:36:53.529965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.654 qpair failed and we were unable to recover it. 00:36:48.654 [2024-11-17 09:36:53.530108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.654 [2024-11-17 09:36:53.530146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.654 qpair failed and we were unable to recover it. 00:36:48.654 [2024-11-17 09:36:53.530286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.654 [2024-11-17 09:36:53.530320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.654 qpair failed and we were unable to recover it. 00:36:48.654 [2024-11-17 09:36:53.530441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.654 [2024-11-17 09:36:53.530475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.654 qpair failed and we were unable to recover it. 00:36:48.654 [2024-11-17 09:36:53.530605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.654 [2024-11-17 09:36:53.530638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.654 qpair failed and we were unable to recover it. 00:36:48.654 [2024-11-17 09:36:53.530775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.654 [2024-11-17 09:36:53.530828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.654 qpair failed and we were unable to recover it. 00:36:48.654 [2024-11-17 09:36:53.531009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.654 [2024-11-17 09:36:53.531045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.654 qpair failed and we were unable to recover it. 00:36:48.654 [2024-11-17 09:36:53.531190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.654 [2024-11-17 09:36:53.531228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.654 qpair failed and we were unable to recover it. 
00:36:48.654 [2024-11-17 09:36:53.531431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.654 [2024-11-17 09:36:53.531465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.654 qpair failed and we were unable to recover it. 00:36:48.654 [2024-11-17 09:36:53.531575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.654 [2024-11-17 09:36:53.531608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.654 qpair failed and we were unable to recover it. 00:36:48.654 [2024-11-17 09:36:53.531718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.654 [2024-11-17 09:36:53.531770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.654 qpair failed and we were unable to recover it. 00:36:48.654 [2024-11-17 09:36:53.531896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.654 [2024-11-17 09:36:53.531928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.654 qpair failed and we were unable to recover it. 00:36:48.654 [2024-11-17 09:36:53.532096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.654 [2024-11-17 09:36:53.532133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.654 qpair failed and we were unable to recover it. 00:36:48.654 [2024-11-17 09:36:53.532253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.654 [2024-11-17 09:36:53.532290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.654 qpair failed and we were unable to recover it. 00:36:48.654 [2024-11-17 09:36:53.532427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.654 [2024-11-17 09:36:53.532461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.654 qpair failed and we were unable to recover it. 00:36:48.654 [2024-11-17 09:36:53.532596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.654 [2024-11-17 09:36:53.532630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.654 qpair failed and we were unable to recover it. 00:36:48.654 [2024-11-17 09:36:53.532775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.654 [2024-11-17 09:36:53.532809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.654 qpair failed and we were unable to recover it. 00:36:48.654 [2024-11-17 09:36:53.532928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.654 [2024-11-17 09:36:53.532964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.654 qpair failed and we were unable to recover it. 
00:36:48.654 [2024-11-17 09:36:53.533155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.654 [2024-11-17 09:36:53.533193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.654 qpair failed and we were unable to recover it. 00:36:48.654 [2024-11-17 09:36:53.533310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.654 [2024-11-17 09:36:53.533347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.654 qpair failed and we were unable to recover it. 00:36:48.654 [2024-11-17 09:36:53.533481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.654 [2024-11-17 09:36:53.533514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.654 qpair failed and we were unable to recover it. 00:36:48.654 [2024-11-17 09:36:53.533666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.654 [2024-11-17 09:36:53.533703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.654 qpair failed and we were unable to recover it. 00:36:48.654 [2024-11-17 09:36:53.533849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.654 [2024-11-17 09:36:53.533885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.654 qpair failed and we were unable to recover it. 00:36:48.654 [2024-11-17 09:36:53.534039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.654 [2024-11-17 09:36:53.534076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.654 qpair failed and we were unable to recover it. 00:36:48.654 [2024-11-17 09:36:53.534189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.654 [2024-11-17 09:36:53.534226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.654 qpair failed and we were unable to recover it. 00:36:48.654 [2024-11-17 09:36:53.534346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.654 [2024-11-17 09:36:53.534402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.654 qpair failed and we were unable to recover it. 00:36:48.654 [2024-11-17 09:36:53.534517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.654 [2024-11-17 09:36:53.534550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.654 qpair failed and we were unable to recover it. 00:36:48.654 [2024-11-17 09:36:53.534659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.654 [2024-11-17 09:36:53.534692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.654 qpair failed and we were unable to recover it. 
00:36:48.654 [2024-11-17 09:36:53.534800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:48.654 [2024-11-17 09:36:53.534838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:36:48.654 qpair failed and we were unable to recover it.
[... the same three-line connect failure (posix.c:1054 connect() errno = 111, then nvme_tcp.c:2288 sock connection error, then "qpair failed and we were unable to recover it.") repeats for every intervening connection attempt logged in this window, against tqpair=0x6150001f2f00 and, intermittently, tqpair=0x615000210000 and tqpair=0x6150001ffe80, all with addr=10.0.0.2, port=4420 ...]
00:36:48.935 [2024-11-17 09:36:53.571707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:48.935 [2024-11-17 09:36:53.571740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:36:48.935 qpair failed and we were unable to recover it.
00:36:48.935 [2024-11-17 09:36:53.571858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.935 [2024-11-17 09:36:53.571890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.935 qpair failed and we were unable to recover it. 00:36:48.935 [2024-11-17 09:36:53.572003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.935 [2024-11-17 09:36:53.572036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.935 qpair failed and we were unable to recover it. 00:36:48.935 [2024-11-17 09:36:53.572195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.935 [2024-11-17 09:36:53.572228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.935 qpair failed and we were unable to recover it. 00:36:48.935 [2024-11-17 09:36:53.572376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.935 [2024-11-17 09:36:53.572419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.935 qpair failed and we were unable to recover it. 00:36:48.935 [2024-11-17 09:36:53.572553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.935 [2024-11-17 09:36:53.572585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.935 qpair failed and we were unable to recover it. 00:36:48.935 [2024-11-17 09:36:53.572717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.935 [2024-11-17 09:36:53.572750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.935 qpair failed and we were unable to recover it. 00:36:48.935 [2024-11-17 09:36:53.572866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.935 [2024-11-17 09:36:53.572899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.935 qpair failed and we were unable to recover it. 00:36:48.935 [2024-11-17 09:36:53.573033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.935 [2024-11-17 09:36:53.573066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.935 qpair failed and we were unable to recover it. 00:36:48.935 [2024-11-17 09:36:53.573204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.935 [2024-11-17 09:36:53.573237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.935 qpair failed and we were unable to recover it. 00:36:48.935 [2024-11-17 09:36:53.573335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.935 [2024-11-17 09:36:53.573375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.935 qpair failed and we were unable to recover it. 
00:36:48.935 [2024-11-17 09:36:53.573486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.935 [2024-11-17 09:36:53.573520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.935 qpair failed and we were unable to recover it. 00:36:48.935 [2024-11-17 09:36:53.573680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.935 [2024-11-17 09:36:53.573728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.935 qpair failed and we were unable to recover it. 00:36:48.935 [2024-11-17 09:36:53.573896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.935 [2024-11-17 09:36:53.573932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.935 qpair failed and we were unable to recover it. 00:36:48.936 [2024-11-17 09:36:53.574040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.936 [2024-11-17 09:36:53.574074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.936 qpair failed and we were unable to recover it. 00:36:48.936 [2024-11-17 09:36:53.574206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.936 [2024-11-17 09:36:53.574240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.936 qpair failed and we were unable to recover it. 00:36:48.936 [2024-11-17 09:36:53.574384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.936 [2024-11-17 09:36:53.574430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.936 qpair failed and we were unable to recover it. 00:36:48.936 [2024-11-17 09:36:53.574550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.936 [2024-11-17 09:36:53.574585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.936 qpair failed and we were unable to recover it. 00:36:48.936 [2024-11-17 09:36:53.574690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.936 [2024-11-17 09:36:53.574725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.936 qpair failed and we were unable to recover it. 00:36:48.936 [2024-11-17 09:36:53.574920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.936 [2024-11-17 09:36:53.574954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.936 qpair failed and we were unable to recover it. 00:36:48.936 [2024-11-17 09:36:53.575083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.936 [2024-11-17 09:36:53.575116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.936 qpair failed and we were unable to recover it. 
00:36:48.936 [2024-11-17 09:36:53.575220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.936 [2024-11-17 09:36:53.575254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.936 qpair failed and we were unable to recover it. 00:36:48.936 [2024-11-17 09:36:53.575380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.936 [2024-11-17 09:36:53.575420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.936 qpair failed and we were unable to recover it. 00:36:48.936 [2024-11-17 09:36:53.575556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.936 [2024-11-17 09:36:53.575589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.936 qpair failed and we were unable to recover it. 00:36:48.936 [2024-11-17 09:36:53.575725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.936 [2024-11-17 09:36:53.575758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.936 qpair failed and we were unable to recover it. 00:36:48.936 [2024-11-17 09:36:53.575859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.936 [2024-11-17 09:36:53.575892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.936 qpair failed and we were unable to recover it. 00:36:48.936 [2024-11-17 09:36:53.576055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.936 [2024-11-17 09:36:53.576088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.936 qpair failed and we were unable to recover it. 00:36:48.936 [2024-11-17 09:36:53.576205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.936 [2024-11-17 09:36:53.576241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.936 qpair failed and we were unable to recover it. 00:36:48.936 [2024-11-17 09:36:53.576381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.936 [2024-11-17 09:36:53.576425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.936 qpair failed and we were unable to recover it. 00:36:48.936 [2024-11-17 09:36:53.576561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.936 [2024-11-17 09:36:53.576595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.936 qpair failed and we were unable to recover it. 00:36:48.936 [2024-11-17 09:36:53.576728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.936 [2024-11-17 09:36:53.576761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.936 qpair failed and we were unable to recover it. 
00:36:48.936 [2024-11-17 09:36:53.576919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.936 [2024-11-17 09:36:53.576957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.936 qpair failed and we were unable to recover it. 00:36:48.936 [2024-11-17 09:36:53.577061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.936 [2024-11-17 09:36:53.577094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.936 qpair failed and we were unable to recover it. 00:36:48.936 [2024-11-17 09:36:53.577209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.936 [2024-11-17 09:36:53.577244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.936 qpair failed and we were unable to recover it. 00:36:48.936 [2024-11-17 09:36:53.577376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.936 [2024-11-17 09:36:53.577419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.936 qpair failed and we were unable to recover it. 00:36:48.936 [2024-11-17 09:36:53.577585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.936 [2024-11-17 09:36:53.577619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.936 qpair failed and we were unable to recover it. 00:36:48.936 [2024-11-17 09:36:53.577782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.936 [2024-11-17 09:36:53.577815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.936 qpair failed and we were unable to recover it. 00:36:48.936 [2024-11-17 09:36:53.577951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.936 [2024-11-17 09:36:53.577985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.936 qpair failed and we were unable to recover it. 00:36:48.936 [2024-11-17 09:36:53.578118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.936 [2024-11-17 09:36:53.578151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.936 qpair failed and we were unable to recover it. 00:36:48.936 [2024-11-17 09:36:53.578284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.936 [2024-11-17 09:36:53.578319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.936 qpair failed and we were unable to recover it. 00:36:48.936 [2024-11-17 09:36:53.578440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.936 [2024-11-17 09:36:53.578483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.936 qpair failed and we were unable to recover it. 
00:36:48.936 [2024-11-17 09:36:53.578587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.936 [2024-11-17 09:36:53.578620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.936 qpair failed and we were unable to recover it. 00:36:48.936 [2024-11-17 09:36:53.578779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.936 [2024-11-17 09:36:53.578812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.936 qpair failed and we were unable to recover it. 00:36:48.936 [2024-11-17 09:36:53.578924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.936 [2024-11-17 09:36:53.578957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.936 qpair failed and we were unable to recover it. 00:36:48.936 [2024-11-17 09:36:53.579087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.936 [2024-11-17 09:36:53.579120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.936 qpair failed and we were unable to recover it. 00:36:48.936 [2024-11-17 09:36:53.579252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.936 [2024-11-17 09:36:53.579293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.936 qpair failed and we were unable to recover it. 00:36:48.936 [2024-11-17 09:36:53.579433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.936 [2024-11-17 09:36:53.579466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.936 qpair failed and we were unable to recover it. 00:36:48.936 [2024-11-17 09:36:53.579566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.936 [2024-11-17 09:36:53.579599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.936 qpair failed and we were unable to recover it. 00:36:48.936 [2024-11-17 09:36:53.579730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.936 [2024-11-17 09:36:53.579764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.936 qpair failed and we were unable to recover it. 00:36:48.936 [2024-11-17 09:36:53.579899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.936 [2024-11-17 09:36:53.579932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.936 qpair failed and we were unable to recover it. 00:36:48.936 [2024-11-17 09:36:53.580040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.936 [2024-11-17 09:36:53.580073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.936 qpair failed and we were unable to recover it. 
00:36:48.936 [2024-11-17 09:36:53.580189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.936 [2024-11-17 09:36:53.580223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.936 qpair failed and we were unable to recover it. 00:36:48.936 [2024-11-17 09:36:53.580364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.937 [2024-11-17 09:36:53.580404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.937 qpair failed and we were unable to recover it. 00:36:48.937 [2024-11-17 09:36:53.580510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.937 [2024-11-17 09:36:53.580544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.937 qpair failed and we were unable to recover it. 00:36:48.937 [2024-11-17 09:36:53.580653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.937 [2024-11-17 09:36:53.580686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.937 qpair failed and we were unable to recover it. 00:36:48.937 [2024-11-17 09:36:53.580787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.937 [2024-11-17 09:36:53.580820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.937 qpair failed and we were unable to recover it. 00:36:48.937 [2024-11-17 09:36:53.580920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.937 [2024-11-17 09:36:53.580953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.937 qpair failed and we were unable to recover it. 00:36:48.937 [2024-11-17 09:36:53.581078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.937 [2024-11-17 09:36:53.581111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.937 qpair failed and we were unable to recover it. 00:36:48.937 [2024-11-17 09:36:53.581253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.937 [2024-11-17 09:36:53.581287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.937 qpair failed and we were unable to recover it. 00:36:48.937 [2024-11-17 09:36:53.581413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.937 [2024-11-17 09:36:53.581447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.937 qpair failed and we were unable to recover it. 00:36:48.937 [2024-11-17 09:36:53.581564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.937 [2024-11-17 09:36:53.581597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.937 qpair failed and we were unable to recover it. 
00:36:48.937 [2024-11-17 09:36:53.581759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.937 [2024-11-17 09:36:53.581792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.937 qpair failed and we were unable to recover it. 00:36:48.937 [2024-11-17 09:36:53.581928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.937 [2024-11-17 09:36:53.581961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.937 qpair failed and we were unable to recover it. 00:36:48.937 [2024-11-17 09:36:53.582098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.937 [2024-11-17 09:36:53.582130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.937 qpair failed and we were unable to recover it. 00:36:48.937 [2024-11-17 09:36:53.582263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.937 [2024-11-17 09:36:53.582296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.937 qpair failed and we were unable to recover it. 00:36:48.937 [2024-11-17 09:36:53.582420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.937 [2024-11-17 09:36:53.582454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.937 qpair failed and we were unable to recover it. 00:36:48.937 [2024-11-17 09:36:53.582610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.937 [2024-11-17 09:36:53.582658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.937 qpair failed and we were unable to recover it. 00:36:48.937 [2024-11-17 09:36:53.582785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.937 [2024-11-17 09:36:53.582820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.937 qpair failed and we were unable to recover it. 00:36:48.937 [2024-11-17 09:36:53.582932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.937 [2024-11-17 09:36:53.582968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.937 qpair failed and we were unable to recover it. 00:36:48.937 [2024-11-17 09:36:53.583132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.937 [2024-11-17 09:36:53.583165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.937 qpair failed and we were unable to recover it. 00:36:48.937 [2024-11-17 09:36:53.583323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.937 [2024-11-17 09:36:53.583358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.937 qpair failed and we were unable to recover it. 
00:36:48.937 [2024-11-17 09:36:53.583511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.937 [2024-11-17 09:36:53.583550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.937 qpair failed and we were unable to recover it. 00:36:48.937 [2024-11-17 09:36:53.583686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.937 [2024-11-17 09:36:53.583720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.937 qpair failed and we were unable to recover it. 00:36:48.937 [2024-11-17 09:36:53.583855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.937 [2024-11-17 09:36:53.583889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.937 qpair failed and we were unable to recover it. 00:36:48.937 [2024-11-17 09:36:53.584021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.937 [2024-11-17 09:36:53.584055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.937 qpair failed and we were unable to recover it. 00:36:48.937 [2024-11-17 09:36:53.584213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.937 [2024-11-17 09:36:53.584246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.937 qpair failed and we were unable to recover it. 00:36:48.937 [2024-11-17 09:36:53.584381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.937 [2024-11-17 09:36:53.584433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.937 qpair failed and we were unable to recover it. 00:36:48.937 [2024-11-17 09:36:53.584569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.937 [2024-11-17 09:36:53.584602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.937 qpair failed and we were unable to recover it. 00:36:48.937 [2024-11-17 09:36:53.584729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.937 [2024-11-17 09:36:53.584762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.937 qpair failed and we were unable to recover it. 00:36:48.937 [2024-11-17 09:36:53.584866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.937 [2024-11-17 09:36:53.584899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.937 qpair failed and we were unable to recover it. 00:36:48.937 [2024-11-17 09:36:53.585038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.937 [2024-11-17 09:36:53.585071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.937 qpair failed and we were unable to recover it. 
00:36:48.937 [2024-11-17 09:36:53.585206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.937 [2024-11-17 09:36:53.585239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.937 qpair failed and we were unable to recover it. 00:36:48.937 [2024-11-17 09:36:53.585375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.937 [2024-11-17 09:36:53.585408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.937 qpair failed and we were unable to recover it. 00:36:48.937 [2024-11-17 09:36:53.585518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.937 [2024-11-17 09:36:53.585552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.937 qpair failed and we were unable to recover it. 00:36:48.937 [2024-11-17 09:36:53.585734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.937 [2024-11-17 09:36:53.585782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.937 qpair failed and we were unable to recover it. 00:36:48.937 [2024-11-17 09:36:53.585906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.937 [2024-11-17 09:36:53.585942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.937 qpair failed and we were unable to recover it. 00:36:48.937 [2024-11-17 09:36:53.586080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.937 [2024-11-17 09:36:53.586114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.937 qpair failed and we were unable to recover it. 00:36:48.937 [2024-11-17 09:36:53.586250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.937 [2024-11-17 09:36:53.586285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.937 qpair failed and we were unable to recover it. 00:36:48.937 [2024-11-17 09:36:53.586457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.937 [2024-11-17 09:36:53.586492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.937 qpair failed and we were unable to recover it. 00:36:48.937 [2024-11-17 09:36:53.586624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.937 [2024-11-17 09:36:53.586657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.937 qpair failed and we were unable to recover it. 00:36:48.937 [2024-11-17 09:36:53.586791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.937 [2024-11-17 09:36:53.586825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.937 qpair failed and we were unable to recover it. 
00:36:48.937 [2024-11-17 09:36:53.586987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.937 [2024-11-17 09:36:53.587020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.937 qpair failed and we were unable to recover it. 00:36:48.937 [2024-11-17 09:36:53.587183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.937 [2024-11-17 09:36:53.587216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.937 qpair failed and we were unable to recover it. 00:36:48.937 [2024-11-17 09:36:53.587316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.937 [2024-11-17 09:36:53.587349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.937 qpair failed and we were unable to recover it. 00:36:48.937 [2024-11-17 09:36:53.587483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.937 [2024-11-17 09:36:53.587518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.937 qpair failed and we were unable to recover it. 00:36:48.937 [2024-11-17 09:36:53.587658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.937 [2024-11-17 09:36:53.587691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.937 qpair failed and we were unable to recover it. 00:36:48.938 [2024-11-17 09:36:53.587826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.938 [2024-11-17 09:36:53.587859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.938 qpair failed and we were unable to recover it. 00:36:48.938 [2024-11-17 09:36:53.587996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.938 [2024-11-17 09:36:53.588030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.938 qpair failed and we were unable to recover it. 00:36:48.938 [2024-11-17 09:36:53.588194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.938 [2024-11-17 09:36:53.588228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.938 qpair failed and we were unable to recover it. 00:36:48.938 [2024-11-17 09:36:53.588327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.938 [2024-11-17 09:36:53.588360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.938 qpair failed and we were unable to recover it. 00:36:48.938 [2024-11-17 09:36:53.588506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.938 [2024-11-17 09:36:53.588539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.938 qpair failed and we were unable to recover it. 
00:36:48.938 [2024-11-17 09:36:53.588698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.938 [2024-11-17 09:36:53.588731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.938 qpair failed and we were unable to recover it. 00:36:48.938 [2024-11-17 09:36:53.588844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.938 [2024-11-17 09:36:53.588877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.938 qpair failed and we were unable to recover it. 00:36:48.938 [2024-11-17 09:36:53.589014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.938 [2024-11-17 09:36:53.589047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.938 qpair failed and we were unable to recover it. 00:36:48.938 [2024-11-17 09:36:53.589225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.938 [2024-11-17 09:36:53.589276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.938 qpair failed and we were unable to recover it. 00:36:48.938 [2024-11-17 09:36:53.589435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.938 [2024-11-17 09:36:53.589469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.938 qpair failed and we were unable to recover it. 00:36:48.938 [2024-11-17 09:36:53.589631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.938 [2024-11-17 09:36:53.589664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.938 qpair failed and we were unable to recover it. 00:36:48.938 [2024-11-17 09:36:53.589769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.938 [2024-11-17 09:36:53.589803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.938 qpair failed and we were unable to recover it. 00:36:48.938 [2024-11-17 09:36:53.589960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.938 [2024-11-17 09:36:53.589994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.938 qpair failed and we were unable to recover it. 00:36:48.938 [2024-11-17 09:36:53.590099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.938 [2024-11-17 09:36:53.590133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.938 qpair failed and we were unable to recover it. 00:36:48.938 [2024-11-17 09:36:53.590267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.938 [2024-11-17 09:36:53.590300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.938 qpair failed and we were unable to recover it. 
00:36:48.938 [2024-11-17 09:36:53.590460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.938 [2024-11-17 09:36:53.590515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.938 qpair failed and we were unable to recover it. 00:36:48.938 [2024-11-17 09:36:53.590661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.938 [2024-11-17 09:36:53.590697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.938 qpair failed and we were unable to recover it. 00:36:48.938 [2024-11-17 09:36:53.590831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.938 [2024-11-17 09:36:53.590866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.938 qpair failed and we were unable to recover it. 00:36:48.938 [2024-11-17 09:36:53.591027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.938 [2024-11-17 09:36:53.591061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.938 qpair failed and we were unable to recover it. 00:36:48.938 [2024-11-17 09:36:53.591163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.938 [2024-11-17 09:36:53.591198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.938 qpair failed and we were unable to recover it. 00:36:48.938 [2024-11-17 09:36:53.591301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.938 [2024-11-17 09:36:53.591336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.938 qpair failed and we were unable to recover it. 00:36:48.938 [2024-11-17 09:36:53.591454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.938 [2024-11-17 09:36:53.591488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.938 qpair failed and we were unable to recover it. 00:36:48.938 [2024-11-17 09:36:53.591623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.938 [2024-11-17 09:36:53.591657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.938 qpair failed and we were unable to recover it. 00:36:48.938 [2024-11-17 09:36:53.591770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.938 [2024-11-17 09:36:53.591803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.938 qpair failed and we were unable to recover it. 00:36:48.938 [2024-11-17 09:36:53.591968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.938 [2024-11-17 09:36:53.592001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.938 qpair failed and we were unable to recover it. 
00:36:48.938 [2024-11-17 09:36:53.592103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.938 [2024-11-17 09:36:53.592136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.938 qpair failed and we were unable to recover it. 00:36:48.938 [2024-11-17 09:36:53.592269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.938 [2024-11-17 09:36:53.592303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.938 qpair failed and we were unable to recover it. 00:36:48.938 [2024-11-17 09:36:53.592455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.938 [2024-11-17 09:36:53.592491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.938 qpair failed and we were unable to recover it. 00:36:48.938 [2024-11-17 09:36:53.592625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.938 [2024-11-17 09:36:53.592660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.938 qpair failed and we were unable to recover it. 00:36:48.938 [2024-11-17 09:36:53.592799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.938 [2024-11-17 09:36:53.592834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.938 qpair failed and we were unable to recover it. 00:36:48.938 [2024-11-17 09:36:53.592969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.938 [2024-11-17 09:36:53.593003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.938 qpair failed and we were unable to recover it. 00:36:48.938 [2024-11-17 09:36:53.593145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.938 [2024-11-17 09:36:53.593178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.938 qpair failed and we were unable to recover it. 00:36:48.938 [2024-11-17 09:36:53.593316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.938 [2024-11-17 09:36:53.593349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.938 qpair failed and we were unable to recover it. 00:36:48.938 [2024-11-17 09:36:53.593466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.938 [2024-11-17 09:36:53.593500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.938 qpair failed and we were unable to recover it. 00:36:48.938 [2024-11-17 09:36:53.593637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.938 [2024-11-17 09:36:53.593671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.938 qpair failed and we were unable to recover it. 
00:36:48.938 [2024-11-17 09:36:53.593807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.938 [2024-11-17 09:36:53.593840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.938 qpair failed and we were unable to recover it. 00:36:48.938 [2024-11-17 09:36:53.593977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.938 [2024-11-17 09:36:53.594010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.938 qpair failed and we were unable to recover it. 00:36:48.938 [2024-11-17 09:36:53.594145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.938 [2024-11-17 09:36:53.594178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.938 qpair failed and we were unable to recover it. 00:36:48.938 [2024-11-17 09:36:53.594289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.938 [2024-11-17 09:36:53.594326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.938 qpair failed and we were unable to recover it. 00:36:48.938 [2024-11-17 09:36:53.594512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.938 [2024-11-17 09:36:53.594547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.938 qpair failed and we were unable to recover it. 00:36:48.938 [2024-11-17 09:36:53.594680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.938 [2024-11-17 09:36:53.594713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.938 qpair failed and we were unable to recover it. 00:36:48.938 [2024-11-17 09:36:53.594850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.938 [2024-11-17 09:36:53.594884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.938 qpair failed and we were unable to recover it. 00:36:48.938 [2024-11-17 09:36:53.595014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.938 [2024-11-17 09:36:53.595048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.938 qpair failed and we were unable to recover it. 00:36:48.938 [2024-11-17 09:36:53.595181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.938 [2024-11-17 09:36:53.595214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.938 qpair failed and we were unable to recover it. 00:36:48.939 [2024-11-17 09:36:53.595348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.939 [2024-11-17 09:36:53.595388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.939 qpair failed and we were unable to recover it. 
00:36:48.939 [2024-11-17 09:36:53.595521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.939 [2024-11-17 09:36:53.595555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.939 qpair failed and we were unable to recover it. 00:36:48.939 [2024-11-17 09:36:53.595712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.939 [2024-11-17 09:36:53.595745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.939 qpair failed and we were unable to recover it. 00:36:48.939 [2024-11-17 09:36:53.595852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.939 [2024-11-17 09:36:53.595885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.939 qpair failed and we were unable to recover it. 00:36:48.939 [2024-11-17 09:36:53.596010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.939 [2024-11-17 09:36:53.596043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.939 qpair failed and we were unable to recover it. 00:36:48.939 [2024-11-17 09:36:53.596171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.939 [2024-11-17 09:36:53.596204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.939 qpair failed and we were unable to recover it. 00:36:48.939 [2024-11-17 09:36:53.596333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.939 [2024-11-17 09:36:53.596373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.939 qpair failed and we were unable to recover it. 00:36:48.939 [2024-11-17 09:36:53.596510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.939 [2024-11-17 09:36:53.596544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.939 qpair failed and we were unable to recover it. 00:36:48.939 [2024-11-17 09:36:53.596674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.939 [2024-11-17 09:36:53.596708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.939 qpair failed and we were unable to recover it. 00:36:48.939 [2024-11-17 09:36:53.596845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.939 [2024-11-17 09:36:53.596878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.939 qpair failed and we were unable to recover it. 00:36:48.939 [2024-11-17 09:36:53.597001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.939 [2024-11-17 09:36:53.597035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.939 qpair failed and we were unable to recover it. 
00:36:48.939 [2024-11-17 09:36:53.597136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.939 [2024-11-17 09:36:53.597174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.939 qpair failed and we were unable to recover it. 00:36:48.939 [2024-11-17 09:36:53.597300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.939 [2024-11-17 09:36:53.597333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.939 qpair failed and we were unable to recover it. 00:36:48.939 [2024-11-17 09:36:53.597504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.939 [2024-11-17 09:36:53.597537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.939 qpair failed and we were unable to recover it. 00:36:48.939 [2024-11-17 09:36:53.597697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.939 [2024-11-17 09:36:53.597730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.939 qpair failed and we were unable to recover it. 00:36:48.939 [2024-11-17 09:36:53.597869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.939 [2024-11-17 09:36:53.597902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.939 qpair failed and we were unable to recover it. 00:36:48.939 [2024-11-17 09:36:53.598029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.939 [2024-11-17 09:36:53.598062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.939 qpair failed and we were unable to recover it. 00:36:48.939 [2024-11-17 09:36:53.598221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.939 [2024-11-17 09:36:53.598254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.939 qpair failed and we were unable to recover it. 00:36:48.939 [2024-11-17 09:36:53.598387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.939 [2024-11-17 09:36:53.598421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.939 qpair failed and we were unable to recover it. 00:36:48.939 [2024-11-17 09:36:53.598523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.939 [2024-11-17 09:36:53.598557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.939 qpair failed and we were unable to recover it. 00:36:48.939 [2024-11-17 09:36:53.598734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.939 [2024-11-17 09:36:53.598768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.939 qpair failed and we were unable to recover it. 
00:36:48.939 [2024-11-17 09:36:53.598928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.939 [2024-11-17 09:36:53.598961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.939 qpair failed and we were unable to recover it. 00:36:48.939 [2024-11-17 09:36:53.599090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.939 [2024-11-17 09:36:53.599123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.939 qpair failed and we were unable to recover it. 00:36:48.939 [2024-11-17 09:36:53.599254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.939 [2024-11-17 09:36:53.599287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.939 qpair failed and we were unable to recover it. 00:36:48.939 [2024-11-17 09:36:53.599420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.939 [2024-11-17 09:36:53.599454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.939 qpair failed and we were unable to recover it. 00:36:48.939 [2024-11-17 09:36:53.599583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.939 [2024-11-17 09:36:53.599631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.939 qpair failed and we were unable to recover it. 00:36:48.939 [2024-11-17 09:36:53.599801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.939 [2024-11-17 09:36:53.599837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.939 qpair failed and we were unable to recover it. 00:36:48.939 [2024-11-17 09:36:53.599996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.939 [2024-11-17 09:36:53.600030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.939 qpair failed and we were unable to recover it. 00:36:48.939 [2024-11-17 09:36:53.600182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.939 [2024-11-17 09:36:53.600237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.939 qpair failed and we were unable to recover it. 00:36:48.939 [2024-11-17 09:36:53.600388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.939 [2024-11-17 09:36:53.600448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.939 qpair failed and we were unable to recover it. 00:36:48.939 [2024-11-17 09:36:53.600608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.939 [2024-11-17 09:36:53.600641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.939 qpair failed and we were unable to recover it. 
00:36:48.939 [2024-11-17 09:36:53.600775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.939 [2024-11-17 09:36:53.600809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.939 qpair failed and we were unable to recover it. 00:36:48.939 [2024-11-17 09:36:53.600953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.939 [2024-11-17 09:36:53.600987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.939 qpair failed and we were unable to recover it. 00:36:48.939 [2024-11-17 09:36:53.601118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.939 [2024-11-17 09:36:53.601151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.939 qpair failed and we were unable to recover it. 00:36:48.939 [2024-11-17 09:36:53.601285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.939 [2024-11-17 09:36:53.601318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.939 qpair failed and we were unable to recover it. 00:36:48.939 [2024-11-17 09:36:53.601485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.939 [2024-11-17 09:36:53.601519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.939 qpair failed and we were unable to recover it. 00:36:48.939 [2024-11-17 09:36:53.601656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.939 [2024-11-17 09:36:53.601689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.939 qpair failed and we were unable to recover it. 00:36:48.939 [2024-11-17 09:36:53.601830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.939 [2024-11-17 09:36:53.601864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.939 qpair failed and we were unable to recover it. 00:36:48.939 [2024-11-17 09:36:53.602010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.940 [2024-11-17 09:36:53.602043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.940 qpair failed and we were unable to recover it. 00:36:48.940 [2024-11-17 09:36:53.602176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.940 [2024-11-17 09:36:53.602210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.940 qpair failed and we were unable to recover it. 00:36:48.940 [2024-11-17 09:36:53.602340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.940 [2024-11-17 09:36:53.602380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.940 qpair failed and we were unable to recover it. 
00:36:48.940 [2024-11-17 09:36:53.602495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.940 [2024-11-17 09:36:53.602528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.940 qpair failed and we were unable to recover it. 00:36:48.940 [2024-11-17 09:36:53.602687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.940 [2024-11-17 09:36:53.602720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.940 qpair failed and we were unable to recover it. 00:36:48.940 [2024-11-17 09:36:53.602850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.940 [2024-11-17 09:36:53.602884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.940 qpair failed and we were unable to recover it. 00:36:48.940 [2024-11-17 09:36:53.603016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.940 [2024-11-17 09:36:53.603050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.940 qpair failed and we were unable to recover it. 00:36:48.940 [2024-11-17 09:36:53.603174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.940 [2024-11-17 09:36:53.603208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.940 qpair failed and we were unable to recover it. 00:36:48.940 [2024-11-17 09:36:53.603388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.940 [2024-11-17 09:36:53.603424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.940 qpair failed and we were unable to recover it. 00:36:48.940 [2024-11-17 09:36:53.603578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.940 [2024-11-17 09:36:53.603612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.940 qpair failed and we were unable to recover it. 00:36:48.940 [2024-11-17 09:36:53.603743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.940 [2024-11-17 09:36:53.603776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.940 qpair failed and we were unable to recover it. 00:36:48.940 [2024-11-17 09:36:53.603888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.940 [2024-11-17 09:36:53.603922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.940 qpair failed and we were unable to recover it. 00:36:48.940 [2024-11-17 09:36:53.604061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.940 [2024-11-17 09:36:53.604094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.940 qpair failed and we were unable to recover it. 
00:36:48.940 [2024-11-17 09:36:53.604227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.940 [2024-11-17 09:36:53.604264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.940 qpair failed and we were unable to recover it. 00:36:48.940 [2024-11-17 09:36:53.604404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.940 [2024-11-17 09:36:53.604440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.940 qpair failed and we were unable to recover it. 00:36:48.940 [2024-11-17 09:36:53.604609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.940 [2024-11-17 09:36:53.604642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.940 qpair failed and we were unable to recover it. 00:36:48.940 [2024-11-17 09:36:53.604747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.940 [2024-11-17 09:36:53.604780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.940 qpair failed and we were unable to recover it. 00:36:48.940 [2024-11-17 09:36:53.604918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.940 [2024-11-17 09:36:53.604951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.940 qpair failed and we were unable to recover it. 00:36:48.940 [2024-11-17 09:36:53.605055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.940 [2024-11-17 09:36:53.605089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.940 qpair failed and we were unable to recover it. 00:36:48.940 [2024-11-17 09:36:53.605253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.940 [2024-11-17 09:36:53.605290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.940 qpair failed and we were unable to recover it. 00:36:48.940 [2024-11-17 09:36:53.605453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.940 [2024-11-17 09:36:53.605487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.940 qpair failed and we were unable to recover it. 00:36:48.940 [2024-11-17 09:36:53.605646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.940 [2024-11-17 09:36:53.605679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.940 qpair failed and we were unable to recover it. 00:36:48.940 [2024-11-17 09:36:53.605815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.940 [2024-11-17 09:36:53.605849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.940 qpair failed and we were unable to recover it. 
00:36:48.940 [2024-11-17 09:36:53.605953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.940 [2024-11-17 09:36:53.605987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.940 qpair failed and we were unable to recover it. 00:36:48.940 [2024-11-17 09:36:53.606129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.940 [2024-11-17 09:36:53.606162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.940 qpair failed and we were unable to recover it. 00:36:48.940 [2024-11-17 09:36:53.606269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.940 [2024-11-17 09:36:53.606302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.940 qpair failed and we were unable to recover it. 00:36:48.940 [2024-11-17 09:36:53.606446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.940 [2024-11-17 09:36:53.606480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.940 qpair failed and we were unable to recover it. 00:36:48.940 [2024-11-17 09:36:53.606615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.940 [2024-11-17 09:36:53.606649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.940 qpair failed and we were unable to recover it. 00:36:48.940 [2024-11-17 09:36:53.606807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.940 [2024-11-17 09:36:53.606841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.940 qpair failed and we were unable to recover it. 00:36:48.940 [2024-11-17 09:36:53.606975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.940 [2024-11-17 09:36:53.607009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.940 qpair failed and we were unable to recover it. 00:36:48.940 [2024-11-17 09:36:53.607130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.940 [2024-11-17 09:36:53.607164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.940 qpair failed and we were unable to recover it. 00:36:48.940 [2024-11-17 09:36:53.607297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.940 [2024-11-17 09:36:53.607330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.940 qpair failed and we were unable to recover it. 00:36:48.940 [2024-11-17 09:36:53.607482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.940 [2024-11-17 09:36:53.607518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.940 qpair failed and we were unable to recover it. 
00:36:48.940 [2024-11-17 09:36:53.607629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.940 [2024-11-17 09:36:53.607662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.940 qpair failed and we were unable to recover it. 00:36:48.940 [2024-11-17 09:36:53.607807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.940 [2024-11-17 09:36:53.607840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.940 qpair failed and we were unable to recover it. 00:36:48.940 [2024-11-17 09:36:53.607953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.940 [2024-11-17 09:36:53.607987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.940 qpair failed and we were unable to recover it. 00:36:48.940 [2024-11-17 09:36:53.608121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.940 [2024-11-17 09:36:53.608154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.940 qpair failed and we were unable to recover it. 00:36:48.940 [2024-11-17 09:36:53.608260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.940 [2024-11-17 09:36:53.608292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.940 qpair failed and we were unable to recover it. 00:36:48.940 [2024-11-17 09:36:53.608454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.940 [2024-11-17 09:36:53.608489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.940 qpair failed and we were unable to recover it. 00:36:48.940 [2024-11-17 09:36:53.608624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.940 [2024-11-17 09:36:53.608658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.940 qpair failed and we were unable to recover it. 00:36:48.940 [2024-11-17 09:36:53.608820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.940 [2024-11-17 09:36:53.608867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:48.940 qpair failed and we were unable to recover it. 00:36:48.940 [2024-11-17 09:36:53.609039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.940 [2024-11-17 09:36:53.609077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.940 qpair failed and we were unable to recover it. 00:36:48.940 [2024-11-17 09:36:53.609212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.940 [2024-11-17 09:36:53.609245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.940 qpair failed and we were unable to recover it. 
00:36:48.940 [2024-11-17 09:36:53.609424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.940 [2024-11-17 09:36:53.609458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.940 qpair failed and we were unable to recover it. 00:36:48.940 [2024-11-17 09:36:53.609578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.941 [2024-11-17 09:36:53.609612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.941 qpair failed and we were unable to recover it. 00:36:48.941 [2024-11-17 09:36:53.609712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.941 [2024-11-17 09:36:53.609745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.941 qpair failed and we were unable to recover it. 00:36:48.941 [2024-11-17 09:36:53.609848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.941 [2024-11-17 09:36:53.609900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.941 qpair failed and we were unable to recover it. 00:36:48.941 [2024-11-17 09:36:53.610026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.941 [2024-11-17 09:36:53.610063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.941 qpair failed and we were unable to recover it. 00:36:48.941 [2024-11-17 09:36:53.610175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.941 [2024-11-17 09:36:53.610211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.941 qpair failed and we were unable to recover it. 00:36:48.941 [2024-11-17 09:36:53.610347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.941 [2024-11-17 09:36:53.610390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.941 qpair failed and we were unable to recover it. 00:36:48.941 [2024-11-17 09:36:53.610517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.941 [2024-11-17 09:36:53.610550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.941 qpair failed and we were unable to recover it. 00:36:48.941 [2024-11-17 09:36:53.610695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.941 [2024-11-17 09:36:53.610732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.941 qpair failed and we were unable to recover it. 00:36:48.941 [2024-11-17 09:36:53.610884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.941 [2024-11-17 09:36:53.610934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.941 qpair failed and we were unable to recover it. 
00:36:48.941 [2024-11-17 09:36:53.611077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.941 [2024-11-17 09:36:53.611114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.941 qpair failed and we were unable to recover it. 00:36:48.941 [2024-11-17 09:36:53.611286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.941 [2024-11-17 09:36:53.611319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.941 qpair failed and we were unable to recover it. 00:36:48.941 [2024-11-17 09:36:53.611479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.941 [2024-11-17 09:36:53.611527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.941 qpair failed and we were unable to recover it. 00:36:48.941 [2024-11-17 09:36:53.611667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.941 [2024-11-17 09:36:53.611703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.941 qpair failed and we were unable to recover it. 00:36:48.941 [2024-11-17 09:36:53.611839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.941 [2024-11-17 09:36:53.611873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.941 qpair failed and we were unable to recover it. 00:36:48.941 [2024-11-17 09:36:53.612004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.941 [2024-11-17 09:36:53.612039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.941 qpair failed and we were unable to recover it. 00:36:48.941 [2024-11-17 09:36:53.612189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.941 [2024-11-17 09:36:53.612226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.941 qpair failed and we were unable to recover it. 00:36:48.941 [2024-11-17 09:36:53.612423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.941 [2024-11-17 09:36:53.612458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.941 qpair failed and we were unable to recover it. 00:36:48.941 [2024-11-17 09:36:53.612590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.941 [2024-11-17 09:36:53.612624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.941 qpair failed and we were unable to recover it. 00:36:48.941 [2024-11-17 09:36:53.612759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.941 [2024-11-17 09:36:53.612792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.941 qpair failed and we were unable to recover it. 
00:36:48.941 [2024-11-17 09:36:53.612936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.941 [2024-11-17 09:36:53.612969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.941 qpair failed and we were unable to recover it. 00:36:48.941 [2024-11-17 09:36:53.613121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.941 [2024-11-17 09:36:53.613160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.941 qpair failed and we were unable to recover it. 00:36:48.941 [2024-11-17 09:36:53.613315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.941 [2024-11-17 09:36:53.613349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.941 qpair failed and we were unable to recover it. 00:36:48.941 [2024-11-17 09:36:53.613492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.941 [2024-11-17 09:36:53.613531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.941 qpair failed and we were unable to recover it. 00:36:48.941 [2024-11-17 09:36:53.613642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.941 [2024-11-17 09:36:53.613675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.941 qpair failed and we were unable to recover it. 00:36:48.941 [2024-11-17 09:36:53.613822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.941 [2024-11-17 09:36:53.613858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.941 qpair failed and we were unable to recover it. 00:36:48.941 [2024-11-17 09:36:53.614013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.941 [2024-11-17 09:36:53.614050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.941 qpair failed and we were unable to recover it. 00:36:48.941 [2024-11-17 09:36:53.614193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.941 [2024-11-17 09:36:53.614231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.941 qpair failed and we were unable to recover it. 00:36:48.941 [2024-11-17 09:36:53.614466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.941 [2024-11-17 09:36:53.614500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.941 qpair failed and we were unable to recover it. 00:36:48.941 [2024-11-17 09:36:53.614594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.941 [2024-11-17 09:36:53.614627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.941 qpair failed and we were unable to recover it. 
00:36:48.941 [2024-11-17 09:36:53.614768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.941 [2024-11-17 09:36:53.614802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.941 qpair failed and we were unable to recover it. 00:36:48.941 [2024-11-17 09:36:53.614926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.941 [2024-11-17 09:36:53.614964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.941 qpair failed and we were unable to recover it. 00:36:48.941 [2024-11-17 09:36:53.615136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.941 [2024-11-17 09:36:53.615173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.941 qpair failed and we were unable to recover it. 00:36:48.941 [2024-11-17 09:36:53.615329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.941 [2024-11-17 09:36:53.615373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.941 qpair failed and we were unable to recover it. 00:36:48.941 [2024-11-17 09:36:53.615497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.941 [2024-11-17 09:36:53.615530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.941 qpair failed and we were unable to recover it. 00:36:48.941 [2024-11-17 09:36:53.615688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.941 [2024-11-17 09:36:53.615721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.941 qpair failed and we were unable to recover it. 00:36:48.941 [2024-11-17 09:36:53.615827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.941 [2024-11-17 09:36:53.615860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.941 qpair failed and we were unable to recover it. 00:36:48.941 [2024-11-17 09:36:53.615999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.941 [2024-11-17 09:36:53.616041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.941 qpair failed and we were unable to recover it. 00:36:48.941 [2024-11-17 09:36:53.616188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.941 [2024-11-17 09:36:53.616224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.941 qpair failed and we were unable to recover it. 00:36:48.941 [2024-11-17 09:36:53.616381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.941 [2024-11-17 09:36:53.616433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.941 qpair failed and we were unable to recover it. 
00:36:48.941 [2024-11-17 09:36:53.616593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.941 [2024-11-17 09:36:53.616626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.941 qpair failed and we were unable to recover it. 00:36:48.941 [2024-11-17 09:36:53.616760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.941 [2024-11-17 09:36:53.616794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.941 qpair failed and we were unable to recover it. 00:36:48.941 [2024-11-17 09:36:53.616944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.941 [2024-11-17 09:36:53.616980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.941 qpair failed and we were unable to recover it. 00:36:48.941 [2024-11-17 09:36:53.617197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.941 [2024-11-17 09:36:53.617234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.941 qpair failed and we were unable to recover it. 00:36:48.941 [2024-11-17 09:36:53.617350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.941 [2024-11-17 09:36:53.617412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.941 qpair failed and we were unable to recover it. 00:36:48.942 [2024-11-17 09:36:53.617542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.942 [2024-11-17 09:36:53.617590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.942 qpair failed and we were unable to recover it. 00:36:48.942 [2024-11-17 09:36:53.617791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.942 [2024-11-17 09:36:53.617831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.942 qpair failed and we were unable to recover it. 00:36:48.942 [2024-11-17 09:36:53.617979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.942 [2024-11-17 09:36:53.618016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.942 qpair failed and we were unable to recover it. 00:36:48.942 [2024-11-17 09:36:53.618166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.942 [2024-11-17 09:36:53.618203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.942 qpair failed and we were unable to recover it. 00:36:48.942 [2024-11-17 09:36:53.618385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.942 [2024-11-17 09:36:53.618425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.942 qpair failed and we were unable to recover it. 
00:36:48.942 [2024-11-17 09:36:53.618573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.942 [2024-11-17 09:36:53.618608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.942 qpair failed and we were unable to recover it. 00:36:48.942 [2024-11-17 09:36:53.618797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.942 [2024-11-17 09:36:53.618862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.942 qpair failed and we were unable to recover it. 00:36:48.942 [2024-11-17 09:36:53.619123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.942 [2024-11-17 09:36:53.619178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.942 qpair failed and we were unable to recover it. 00:36:48.942 [2024-11-17 09:36:53.619355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.942 [2024-11-17 09:36:53.619419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.942 qpair failed and we were unable to recover it. 00:36:48.942 [2024-11-17 09:36:53.619555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.942 [2024-11-17 09:36:53.619589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.942 qpair failed and we were unable to recover it. 00:36:48.942 [2024-11-17 09:36:53.619692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.942 [2024-11-17 09:36:53.619724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.942 qpair failed and we were unable to recover it. 00:36:48.942 [2024-11-17 09:36:53.619864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.942 [2024-11-17 09:36:53.619896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.942 qpair failed and we were unable to recover it. 00:36:48.942 [2024-11-17 09:36:53.620065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.942 [2024-11-17 09:36:53.620100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.942 qpair failed and we were unable to recover it. 00:36:48.942 [2024-11-17 09:36:53.620262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.942 [2024-11-17 09:36:53.620295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.942 qpair failed and we were unable to recover it. 00:36:48.942 [2024-11-17 09:36:53.620422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.942 [2024-11-17 09:36:53.620456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.942 qpair failed and we were unable to recover it. 
00:36:48.942 [2024-11-17 09:36:53.620588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.942 [2024-11-17 09:36:53.620622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.942 qpair failed and we were unable to recover it. 00:36:48.942 [2024-11-17 09:36:53.620777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.942 [2024-11-17 09:36:53.620810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.942 qpair failed and we were unable to recover it. 00:36:48.942 [2024-11-17 09:36:53.620939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.942 [2024-11-17 09:36:53.620972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.942 qpair failed and we were unable to recover it. 00:36:48.942 [2024-11-17 09:36:53.621107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.942 [2024-11-17 09:36:53.621141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.942 qpair failed and we were unable to recover it. 00:36:48.942 [2024-11-17 09:36:53.621303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.942 [2024-11-17 09:36:53.621340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.942 qpair failed and we were unable to recover it. 00:36:48.942 [2024-11-17 09:36:53.621503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.942 [2024-11-17 09:36:53.621537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.942 qpair failed and we were unable to recover it. 00:36:48.942 [2024-11-17 09:36:53.621709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.942 [2024-11-17 09:36:53.621742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.942 qpair failed and we were unable to recover it. 00:36:48.942 [2024-11-17 09:36:53.621873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.942 [2024-11-17 09:36:53.621905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.942 qpair failed and we were unable to recover it. 00:36:48.942 [2024-11-17 09:36:53.622038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.942 [2024-11-17 09:36:53.622071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.942 qpair failed and we were unable to recover it. 00:36:48.942 [2024-11-17 09:36:53.622185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.942 [2024-11-17 09:36:53.622220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.942 qpair failed and we were unable to recover it. 
00:36:48.942 [2024-11-17 09:36:53.622390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.942 [2024-11-17 09:36:53.622425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.942 qpair failed and we were unable to recover it. 00:36:48.942 [2024-11-17 09:36:53.622560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.942 [2024-11-17 09:36:53.622593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.942 qpair failed and we were unable to recover it. 00:36:48.942 [2024-11-17 09:36:53.622730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.942 [2024-11-17 09:36:53.622764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.942 qpair failed and we were unable to recover it. 00:36:48.942 [2024-11-17 09:36:53.622894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.942 [2024-11-17 09:36:53.622927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.942 qpair failed and we were unable to recover it. 00:36:48.942 [2024-11-17 09:36:53.623107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.942 [2024-11-17 09:36:53.623141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.942 qpair failed and we were unable to recover it. 00:36:48.942 [2024-11-17 09:36:53.623271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.942 [2024-11-17 09:36:53.623337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.942 qpair failed and we were unable to recover it. 00:36:48.942 [2024-11-17 09:36:53.623487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.942 [2024-11-17 09:36:53.623533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:48.942 qpair failed and we were unable to recover it. 00:36:48.942 [2024-11-17 09:36:53.623739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.942 [2024-11-17 09:36:53.623789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:48.942 qpair failed and we were unable to recover it. 00:36:48.942 [2024-11-17 09:36:53.624013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.942 [2024-11-17 09:36:53.624071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:48.942 qpair failed and we were unable to recover it. 00:36:48.942 [2024-11-17 09:36:53.624291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.942 [2024-11-17 09:36:53.624326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.942 qpair failed and we were unable to recover it. 
00:36:48.942 [2024-11-17 09:36:53.624497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:48.942 [2024-11-17 09:36:53.624539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:36:48.942 qpair failed and we were unable to recover it.
00:36:48.942 [2024-11-17 09:36:53.625264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:48.942 [2024-11-17 09:36:53.625318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:36:48.942 qpair failed and we were unable to recover it.
00:36:48.947 [... the same three-line failure sequence ("connect() failed, errno = 111" -> "sock connection error of tqpair=0x6150001ffe80" or "tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420" -> "qpair failed and we were unable to recover it.") repeats without variation from 09:36:53.624 through 09:36:53.661; the intervening identical repetitions are omitted here ...]
00:36:48.947 [2024-11-17 09:36:53.661741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:48.947 [2024-11-17 09:36:53.661774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:36:48.947 qpair failed and we were unable to recover it.
00:36:48.947 [2024-11-17 09:36:53.661886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.947 [2024-11-17 09:36:53.661920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.947 qpair failed and we were unable to recover it. 00:36:48.947 [2024-11-17 09:36:53.662088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.947 [2024-11-17 09:36:53.662123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.947 qpair failed and we were unable to recover it. 00:36:48.947 [2024-11-17 09:36:53.662261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.947 [2024-11-17 09:36:53.662294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.947 qpair failed and we were unable to recover it. 00:36:48.947 [2024-11-17 09:36:53.662426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.947 [2024-11-17 09:36:53.662460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.947 qpair failed and we were unable to recover it. 00:36:48.947 [2024-11-17 09:36:53.662605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.947 [2024-11-17 09:36:53.662639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.947 qpair failed and we were unable to recover it. 00:36:48.947 [2024-11-17 09:36:53.662775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.947 [2024-11-17 09:36:53.662808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.947 qpair failed and we were unable to recover it. 00:36:48.947 [2024-11-17 09:36:53.662936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.947 [2024-11-17 09:36:53.662969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.947 qpair failed and we were unable to recover it. 00:36:48.947 [2024-11-17 09:36:53.663084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.947 [2024-11-17 09:36:53.663119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.947 qpair failed and we were unable to recover it. 00:36:48.947 [2024-11-17 09:36:53.663257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.947 [2024-11-17 09:36:53.663291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.947 qpair failed and we were unable to recover it. 00:36:48.947 [2024-11-17 09:36:53.663408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.947 [2024-11-17 09:36:53.663441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.947 qpair failed and we were unable to recover it. 
00:36:48.947 [2024-11-17 09:36:53.663606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.948 [2024-11-17 09:36:53.663639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.948 qpair failed and we were unable to recover it. 00:36:48.948 [2024-11-17 09:36:53.663769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.948 [2024-11-17 09:36:53.663802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.948 qpair failed and we were unable to recover it. 00:36:48.948 [2024-11-17 09:36:53.663935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.948 [2024-11-17 09:36:53.663968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.948 qpair failed and we were unable to recover it. 00:36:48.948 [2024-11-17 09:36:53.664103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.948 [2024-11-17 09:36:53.664137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.948 qpair failed and we were unable to recover it. 00:36:48.948 [2024-11-17 09:36:53.664300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.948 [2024-11-17 09:36:53.664346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:48.948 qpair failed and we were unable to recover it. 00:36:48.948 [2024-11-17 09:36:53.664566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.948 [2024-11-17 09:36:53.664608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:48.948 qpair failed and we were unable to recover it. 00:36:48.948 [2024-11-17 09:36:53.664817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.948 [2024-11-17 09:36:53.664878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.948 qpair failed and we were unable to recover it. 00:36:48.948 [2024-11-17 09:36:53.665136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.948 [2024-11-17 09:36:53.665191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.948 qpair failed and we were unable to recover it. 00:36:48.948 [2024-11-17 09:36:53.665326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.948 [2024-11-17 09:36:53.665387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.948 qpair failed and we were unable to recover it. 00:36:48.948 [2024-11-17 09:36:53.665579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.948 [2024-11-17 09:36:53.665612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.948 qpair failed and we were unable to recover it. 
00:36:48.948 [2024-11-17 09:36:53.665769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.948 [2024-11-17 09:36:53.665802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.948 qpair failed and we were unable to recover it. 00:36:48.948 [2024-11-17 09:36:53.665911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.948 [2024-11-17 09:36:53.665962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.948 qpair failed and we were unable to recover it. 00:36:48.948 [2024-11-17 09:36:53.666104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.948 [2024-11-17 09:36:53.666142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.948 qpair failed and we were unable to recover it. 00:36:48.948 [2024-11-17 09:36:53.666283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.948 [2024-11-17 09:36:53.666336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.948 qpair failed and we were unable to recover it. 00:36:48.948 [2024-11-17 09:36:53.666540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.948 [2024-11-17 09:36:53.666576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.948 qpair failed and we were unable to recover it. 00:36:48.948 [2024-11-17 09:36:53.666778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.948 [2024-11-17 09:36:53.666845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.948 qpair failed and we were unable to recover it. 00:36:48.948 [2024-11-17 09:36:53.667060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.948 [2024-11-17 09:36:53.667118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.948 qpair failed and we were unable to recover it. 00:36:48.948 [2024-11-17 09:36:53.667270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.948 [2024-11-17 09:36:53.667313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.948 qpair failed and we were unable to recover it. 00:36:48.948 [2024-11-17 09:36:53.667480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.948 [2024-11-17 09:36:53.667513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.948 qpair failed and we were unable to recover it. 00:36:48.948 [2024-11-17 09:36:53.667652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.948 [2024-11-17 09:36:53.667685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.948 qpair failed and we were unable to recover it. 
00:36:48.948 [2024-11-17 09:36:53.667810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.948 [2024-11-17 09:36:53.667843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.948 qpair failed and we were unable to recover it. 00:36:48.948 [2024-11-17 09:36:53.668000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.948 [2024-11-17 09:36:53.668038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.948 qpair failed and we were unable to recover it. 00:36:48.948 [2024-11-17 09:36:53.668185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.948 [2024-11-17 09:36:53.668222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.948 qpair failed and we were unable to recover it. 00:36:48.948 [2024-11-17 09:36:53.668395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.948 [2024-11-17 09:36:53.668443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.948 qpair failed and we were unable to recover it. 00:36:48.948 [2024-11-17 09:36:53.668554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.948 [2024-11-17 09:36:53.668587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.948 qpair failed and we were unable to recover it. 00:36:48.948 [2024-11-17 09:36:53.668739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.948 [2024-11-17 09:36:53.668787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.948 qpair failed and we were unable to recover it. 00:36:48.948 [2024-11-17 09:36:53.668936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.948 [2024-11-17 09:36:53.668972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.948 qpair failed and we were unable to recover it. 00:36:48.948 [2024-11-17 09:36:53.669167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.948 [2024-11-17 09:36:53.669205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.948 qpair failed and we were unable to recover it. 00:36:48.948 [2024-11-17 09:36:53.669375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.948 [2024-11-17 09:36:53.669410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.948 qpair failed and we were unable to recover it. 00:36:48.948 [2024-11-17 09:36:53.669550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.948 [2024-11-17 09:36:53.669584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.948 qpair failed and we were unable to recover it. 
00:36:48.948 [2024-11-17 09:36:53.669699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.948 [2024-11-17 09:36:53.669732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.948 qpair failed and we were unable to recover it. 00:36:48.948 [2024-11-17 09:36:53.669849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.948 [2024-11-17 09:36:53.669901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.948 qpair failed and we were unable to recover it. 00:36:48.948 [2024-11-17 09:36:53.670070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.948 [2024-11-17 09:36:53.670106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.948 qpair failed and we were unable to recover it. 00:36:48.948 [2024-11-17 09:36:53.670302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.948 [2024-11-17 09:36:53.670339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.948 qpair failed and we were unable to recover it. 00:36:48.948 [2024-11-17 09:36:53.670473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.948 [2024-11-17 09:36:53.670507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.948 qpair failed and we were unable to recover it. 00:36:48.948 [2024-11-17 09:36:53.670639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.948 [2024-11-17 09:36:53.670673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.948 qpair failed and we were unable to recover it. 00:36:48.948 [2024-11-17 09:36:53.670811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.948 [2024-11-17 09:36:53.670848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.948 qpair failed and we were unable to recover it. 00:36:48.948 [2024-11-17 09:36:53.671002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.948 [2024-11-17 09:36:53.671042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.948 qpair failed and we were unable to recover it. 00:36:48.948 [2024-11-17 09:36:53.671175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.948 [2024-11-17 09:36:53.671237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.948 qpair failed and we were unable to recover it. 00:36:48.948 [2024-11-17 09:36:53.671399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.948 [2024-11-17 09:36:53.671433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.948 qpair failed and we were unable to recover it. 
00:36:48.948 [2024-11-17 09:36:53.671532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.948 [2024-11-17 09:36:53.671566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.948 qpair failed and we were unable to recover it. 00:36:48.948 [2024-11-17 09:36:53.671737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.948 [2024-11-17 09:36:53.671771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.948 qpair failed and we were unable to recover it. 00:36:48.948 [2024-11-17 09:36:53.671925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.948 [2024-11-17 09:36:53.671962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.948 qpair failed and we were unable to recover it. 00:36:48.948 [2024-11-17 09:36:53.672102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.948 [2024-11-17 09:36:53.672139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.948 qpair failed and we were unable to recover it. 00:36:48.948 [2024-11-17 09:36:53.672318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.949 [2024-11-17 09:36:53.672355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.949 qpair failed and we were unable to recover it. 00:36:48.949 [2024-11-17 09:36:53.672485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.949 [2024-11-17 09:36:53.672519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.949 qpair failed and we were unable to recover it. 00:36:48.949 [2024-11-17 09:36:53.672652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.949 [2024-11-17 09:36:53.672686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.949 qpair failed and we were unable to recover it. 00:36:48.949 [2024-11-17 09:36:53.672782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.949 [2024-11-17 09:36:53.672815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.949 qpair failed and we were unable to recover it. 00:36:48.949 [2024-11-17 09:36:53.672972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.949 [2024-11-17 09:36:53.673009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.949 qpair failed and we were unable to recover it. 00:36:48.949 [2024-11-17 09:36:53.673184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.949 [2024-11-17 09:36:53.673221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.949 qpair failed and we were unable to recover it. 
00:36:48.949 [2024-11-17 09:36:53.673380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.949 [2024-11-17 09:36:53.673432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.949 qpair failed and we were unable to recover it. 00:36:48.949 [2024-11-17 09:36:53.673585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.949 [2024-11-17 09:36:53.673624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.949 qpair failed and we were unable to recover it. 00:36:48.949 [2024-11-17 09:36:53.673777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.949 [2024-11-17 09:36:53.673815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.949 qpair failed and we were unable to recover it. 00:36:48.949 [2024-11-17 09:36:53.673962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.949 [2024-11-17 09:36:53.673999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.949 qpair failed and we were unable to recover it. 00:36:48.949 [2024-11-17 09:36:53.674242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.949 [2024-11-17 09:36:53.674276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.949 qpair failed and we were unable to recover it. 00:36:48.949 [2024-11-17 09:36:53.674423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.949 [2024-11-17 09:36:53.674458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.949 qpair failed and we were unable to recover it. 00:36:48.949 [2024-11-17 09:36:53.674591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.949 [2024-11-17 09:36:53.674624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.949 qpair failed and we were unable to recover it. 00:36:48.949 [2024-11-17 09:36:53.674799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.949 [2024-11-17 09:36:53.674842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.949 qpair failed and we were unable to recover it. 00:36:48.949 [2024-11-17 09:36:53.674983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.949 [2024-11-17 09:36:53.675042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.949 qpair failed and we were unable to recover it. 00:36:48.949 [2024-11-17 09:36:53.675164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.949 [2024-11-17 09:36:53.675202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.949 qpair failed and we were unable to recover it. 
00:36:48.949 [2024-11-17 09:36:53.675335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.949 [2024-11-17 09:36:53.675376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.949 qpair failed and we were unable to recover it. 00:36:48.949 [2024-11-17 09:36:53.675524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.949 [2024-11-17 09:36:53.675560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.949 qpair failed and we were unable to recover it. 00:36:48.949 [2024-11-17 09:36:53.675703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.949 [2024-11-17 09:36:53.675756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.949 qpair failed and we were unable to recover it. 00:36:48.949 [2024-11-17 09:36:53.675933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.949 [2024-11-17 09:36:53.675970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.949 qpair failed and we were unable to recover it. 00:36:48.949 [2024-11-17 09:36:53.676109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.949 [2024-11-17 09:36:53.676146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.949 qpair failed and we were unable to recover it. 00:36:48.949 [2024-11-17 09:36:53.676291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.949 [2024-11-17 09:36:53.676340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.949 qpair failed and we were unable to recover it. 00:36:48.949 [2024-11-17 09:36:53.676482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.949 [2024-11-17 09:36:53.676528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:48.949 qpair failed and we were unable to recover it. 00:36:48.949 [2024-11-17 09:36:53.676708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.949 [2024-11-17 09:36:53.676746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.949 qpair failed and we were unable to recover it. 00:36:48.949 [2024-11-17 09:36:53.676860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.949 [2024-11-17 09:36:53.676897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.949 qpair failed and we were unable to recover it. 00:36:48.949 [2024-11-17 09:36:53.677067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.949 [2024-11-17 09:36:53.677104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.949 qpair failed and we were unable to recover it. 
00:36:48.949 [2024-11-17 09:36:53.677284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.949 [2024-11-17 09:36:53.677322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.949 qpair failed and we were unable to recover it. 00:36:48.949 [2024-11-17 09:36:53.677473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.949 [2024-11-17 09:36:53.677506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.949 qpair failed and we were unable to recover it. 00:36:48.949 [2024-11-17 09:36:53.677639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.949 [2024-11-17 09:36:53.677673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.949 qpair failed and we were unable to recover it. 00:36:48.949 [2024-11-17 09:36:53.677826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.949 [2024-11-17 09:36:53.677863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.949 qpair failed and we were unable to recover it. 00:36:48.949 [2024-11-17 09:36:53.678038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.949 [2024-11-17 09:36:53.678074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.949 qpair failed and we were unable to recover it. 00:36:48.949 [2024-11-17 09:36:53.678219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.949 [2024-11-17 09:36:53.678258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.949 qpair failed and we were unable to recover it. 00:36:48.949 [2024-11-17 09:36:53.678420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.949 [2024-11-17 09:36:53.678455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.949 qpair failed and we were unable to recover it. 00:36:48.949 [2024-11-17 09:36:53.678584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.949 [2024-11-17 09:36:53.678636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.949 qpair failed and we were unable to recover it. 00:36:48.949 [2024-11-17 09:36:53.678783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.949 [2024-11-17 09:36:53.678821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.949 qpair failed and we were unable to recover it. 00:36:48.949 [2024-11-17 09:36:53.679024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.949 [2024-11-17 09:36:53.679063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.949 qpair failed and we were unable to recover it. 
00:36:48.949 [2024-11-17 09:36:53.679197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.949 [2024-11-17 09:36:53.679234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.949 qpair failed and we were unable to recover it. 00:36:48.949 [2024-11-17 09:36:53.679391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.949 [2024-11-17 09:36:53.679443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.949 qpair failed and we were unable to recover it. 00:36:48.949 [2024-11-17 09:36:53.679546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.949 [2024-11-17 09:36:53.679579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.949 qpair failed and we were unable to recover it. 00:36:48.949 [2024-11-17 09:36:53.679706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.949 [2024-11-17 09:36:53.679739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.949 qpair failed and we were unable to recover it. 00:36:48.949 [2024-11-17 09:36:53.679846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.949 [2024-11-17 09:36:53.679900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.949 qpair failed and we were unable to recover it. 00:36:48.949 [2024-11-17 09:36:53.680137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.949 [2024-11-17 09:36:53.680175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.949 qpair failed and we were unable to recover it. 00:36:48.949 [2024-11-17 09:36:53.680332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.949 [2024-11-17 09:36:53.680375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.949 qpair failed and we were unable to recover it. 00:36:48.949 [2024-11-17 09:36:53.680531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.949 [2024-11-17 09:36:53.680564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.949 qpair failed and we were unable to recover it. 00:36:48.949 [2024-11-17 09:36:53.680668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.950 [2024-11-17 09:36:53.680720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.950 qpair failed and we were unable to recover it. 00:36:48.950 [2024-11-17 09:36:53.680894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.950 [2024-11-17 09:36:53.680931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.950 qpair failed and we were unable to recover it. 
00:36:48.950 [2024-11-17 09:36:53.681084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.950 [2024-11-17 09:36:53.681123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.950 qpair failed and we were unable to recover it. 00:36:48.950 [2024-11-17 09:36:53.681291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.950 [2024-11-17 09:36:53.681326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.950 qpair failed and we were unable to recover it. 00:36:48.950 [2024-11-17 09:36:53.681475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.950 [2024-11-17 09:36:53.681509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.950 qpair failed and we were unable to recover it. 00:36:48.950 [2024-11-17 09:36:53.681621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.950 [2024-11-17 09:36:53.681654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.950 qpair failed and we were unable to recover it. 00:36:48.950 [2024-11-17 09:36:53.681810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.950 [2024-11-17 09:36:53.681846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.950 qpair failed and we were unable to recover it. 00:36:48.950 [2024-11-17 09:36:53.682019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.950 [2024-11-17 09:36:53.682055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.950 qpair failed and we were unable to recover it. 00:36:48.950 [2024-11-17 09:36:53.682195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.950 [2024-11-17 09:36:53.682232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.950 qpair failed and we were unable to recover it. 00:36:48.950 [2024-11-17 09:36:53.682473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.950 [2024-11-17 09:36:53.682511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.950 qpair failed and we were unable to recover it. 00:36:48.950 [2024-11-17 09:36:53.682611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.950 [2024-11-17 09:36:53.682644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.950 qpair failed and we were unable to recover it. 00:36:48.950 [2024-11-17 09:36:53.682779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.950 [2024-11-17 09:36:53.682812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.950 qpair failed and we were unable to recover it. 
00:36:48.950 [2024-11-17 09:36:53.682956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.950 [2024-11-17 09:36:53.682992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.950 qpair failed and we were unable to recover it. 00:36:48.950 [2024-11-17 09:36:53.683162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.950 [2024-11-17 09:36:53.683199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.950 qpair failed and we were unable to recover it. 00:36:48.950 [2024-11-17 09:36:53.683344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.950 [2024-11-17 09:36:53.683388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.950 qpair failed and we were unable to recover it. 00:36:48.950 [2024-11-17 09:36:53.683513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.950 [2024-11-17 09:36:53.683546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.950 qpair failed and we were unable to recover it. 00:36:48.950 [2024-11-17 09:36:53.683642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.950 [2024-11-17 09:36:53.683675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.950 qpair failed and we were unable to recover it. 00:36:48.950 [2024-11-17 09:36:53.683833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.950 [2024-11-17 09:36:53.683866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.950 qpair failed and we were unable to recover it. 00:36:48.950 [2024-11-17 09:36:53.684026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.950 [2024-11-17 09:36:53.684062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.950 qpair failed and we were unable to recover it. 00:36:48.950 [2024-11-17 09:36:53.684237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.950 [2024-11-17 09:36:53.684273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.950 qpair failed and we were unable to recover it. 00:36:48.950 [2024-11-17 09:36:53.684452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.950 [2024-11-17 09:36:53.684485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.950 qpair failed and we were unable to recover it. 00:36:48.950 [2024-11-17 09:36:53.684585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.950 [2024-11-17 09:36:53.684617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.950 qpair failed and we were unable to recover it. 
00:36:48.950 [2024-11-17 09:36:53.684746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.950 [2024-11-17 09:36:53.684780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.950 qpair failed and we were unable to recover it. 00:36:48.950 [2024-11-17 09:36:53.684958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.950 [2024-11-17 09:36:53.684995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.950 qpair failed and we were unable to recover it. 00:36:48.950 [2024-11-17 09:36:53.685192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.950 [2024-11-17 09:36:53.685229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.950 qpair failed and we were unable to recover it. 00:36:48.950 [2024-11-17 09:36:53.685392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.950 [2024-11-17 09:36:53.685425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.950 qpair failed and we were unable to recover it. 00:36:48.950 [2024-11-17 09:36:53.685559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.950 [2024-11-17 09:36:53.685593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.950 qpair failed and we were unable to recover it. 00:36:48.950 [2024-11-17 09:36:53.685768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.950 [2024-11-17 09:36:53.685804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.950 qpair failed and we were unable to recover it. 00:36:48.950 [2024-11-17 09:36:53.685976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.950 [2024-11-17 09:36:53.686012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.950 qpair failed and we were unable to recover it. 00:36:48.950 [2024-11-17 09:36:53.686182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.950 [2024-11-17 09:36:53.686218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.950 qpair failed and we were unable to recover it. 00:36:48.950 [2024-11-17 09:36:53.686365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.950 [2024-11-17 09:36:53.686407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.950 qpair failed and we were unable to recover it. 00:36:48.950 [2024-11-17 09:36:53.686537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.950 [2024-11-17 09:36:53.686570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.950 qpair failed and we were unable to recover it. 
00:36:48.950 [2024-11-17 09:36:53.686726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.950 [2024-11-17 09:36:53.686758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.950 qpair failed and we were unable to recover it. 00:36:48.950 [2024-11-17 09:36:53.686881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.950 [2024-11-17 09:36:53.686929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.950 qpair failed and we were unable to recover it. 00:36:48.950 [2024-11-17 09:36:53.687083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.950 [2024-11-17 09:36:53.687118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.950 qpair failed and we were unable to recover it. 00:36:48.950 [2024-11-17 09:36:53.687251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.950 [2024-11-17 09:36:53.687283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.950 qpair failed and we were unable to recover it. 00:36:48.950 [2024-11-17 09:36:53.687416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.950 [2024-11-17 09:36:53.687449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.950 qpair failed and we were unable to recover it. 00:36:48.950 [2024-11-17 09:36:53.687607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.950 [2024-11-17 09:36:53.687640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.950 qpair failed and we were unable to recover it. 00:36:48.950 [2024-11-17 09:36:53.687793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.950 [2024-11-17 09:36:53.687829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.950 qpair failed and we were unable to recover it. 00:36:48.950 [2024-11-17 09:36:53.688031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.950 [2024-11-17 09:36:53.688067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.950 qpair failed and we were unable to recover it. 00:36:48.950 [2024-11-17 09:36:53.688213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.950 [2024-11-17 09:36:53.688251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.950 qpair failed and we were unable to recover it. 00:36:48.950 [2024-11-17 09:36:53.688474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.950 [2024-11-17 09:36:53.688507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.950 qpair failed and we were unable to recover it. 
00:36:48.950 [2024-11-17 09:36:53.688618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.950 [2024-11-17 09:36:53.688651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.950 qpair failed and we were unable to recover it. 00:36:48.950 [2024-11-17 09:36:53.688748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.950 [2024-11-17 09:36:53.688797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.950 qpair failed and we were unable to recover it. 00:36:48.950 [2024-11-17 09:36:53.688945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.951 [2024-11-17 09:36:53.688982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.951 qpair failed and we were unable to recover it. 00:36:48.951 [2024-11-17 09:36:53.689208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.951 [2024-11-17 09:36:53.689245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.951 qpair failed and we were unable to recover it. 00:36:48.951 [2024-11-17 09:36:53.689390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.951 [2024-11-17 09:36:53.689424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.951 qpair failed and we were unable to recover it. 00:36:48.951 [2024-11-17 09:36:53.689581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.951 [2024-11-17 09:36:53.689629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.951 qpair failed and we were unable to recover it. 00:36:48.951 [2024-11-17 09:36:53.689777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.951 [2024-11-17 09:36:53.689813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.951 qpair failed and we were unable to recover it. 00:36:48.951 [2024-11-17 09:36:53.689955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.951 [2024-11-17 09:36:53.689995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.951 qpair failed and we were unable to recover it. 00:36:48.951 [2024-11-17 09:36:53.690156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.951 [2024-11-17 09:36:53.690190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.951 qpair failed and we were unable to recover it. 00:36:48.951 [2024-11-17 09:36:53.690296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.951 [2024-11-17 09:36:53.690330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.951 qpair failed and we were unable to recover it. 
00:36:48.951 [2024-11-17 09:36:53.690460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.951 [2024-11-17 09:36:53.690494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.951 qpair failed and we were unable to recover it. 00:36:48.951 [2024-11-17 09:36:53.690625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.951 [2024-11-17 09:36:53.690659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.951 qpair failed and we were unable to recover it. 00:36:48.951 [2024-11-17 09:36:53.690802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.951 [2024-11-17 09:36:53.690836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.951 qpair failed and we were unable to recover it. 00:36:48.951 [2024-11-17 09:36:53.690952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.951 [2024-11-17 09:36:53.690986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.951 qpair failed and we were unable to recover it. 00:36:48.951 [2024-11-17 09:36:53.691167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.951 [2024-11-17 09:36:53.691206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.951 qpair failed and we were unable to recover it. 00:36:48.951 [2024-11-17 09:36:53.691423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.951 [2024-11-17 09:36:53.691457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.951 qpair failed and we were unable to recover it. 00:36:48.951 [2024-11-17 09:36:53.691563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.951 [2024-11-17 09:36:53.691595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.951 qpair failed and we were unable to recover it. 00:36:48.951 [2024-11-17 09:36:53.691717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.951 [2024-11-17 09:36:53.691749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.951 qpair failed and we were unable to recover it. 00:36:48.951 [2024-11-17 09:36:53.691911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.951 [2024-11-17 09:36:53.691944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.951 qpair failed and we were unable to recover it. 00:36:48.951 [2024-11-17 09:36:53.692070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.951 [2024-11-17 09:36:53.692102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.951 qpair failed and we were unable to recover it. 
00:36:48.951 [2024-11-17 09:36:53.692243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.951 [2024-11-17 09:36:53.692279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.951 qpair failed and we were unable to recover it. 00:36:48.951 [2024-11-17 09:36:53.692447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.951 [2024-11-17 09:36:53.692481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.951 qpair failed and we were unable to recover it. 00:36:48.951 [2024-11-17 09:36:53.692641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.951 [2024-11-17 09:36:53.692674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.951 qpair failed and we were unable to recover it. 00:36:48.951 [2024-11-17 09:36:53.692815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.951 [2024-11-17 09:36:53.692849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.951 qpair failed and we were unable to recover it. 00:36:48.951 [2024-11-17 09:36:53.692958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.951 [2024-11-17 09:36:53.692991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.951 qpair failed and we were unable to recover it. 00:36:48.951 [2024-11-17 09:36:53.693148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.951 [2024-11-17 09:36:53.693181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.951 qpair failed and we were unable to recover it. 00:36:48.951 [2024-11-17 09:36:53.693282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.951 [2024-11-17 09:36:53.693316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.951 qpair failed and we were unable to recover it. 00:36:48.951 [2024-11-17 09:36:53.693467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.951 [2024-11-17 09:36:53.693500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.951 qpair failed and we were unable to recover it. 00:36:48.951 [2024-11-17 09:36:53.693628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.951 [2024-11-17 09:36:53.693661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.951 qpair failed and we were unable to recover it. 00:36:48.951 [2024-11-17 09:36:53.693822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.951 [2024-11-17 09:36:53.693854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.951 qpair failed and we were unable to recover it. 
00:36:48.951 [2024-11-17 09:36:53.693962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.951 [2024-11-17 09:36:53.693994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.951 qpair failed and we were unable to recover it. 00:36:48.951 [2024-11-17 09:36:53.694128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.951 [2024-11-17 09:36:53.694160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.951 qpair failed and we were unable to recover it. 00:36:48.951 [2024-11-17 09:36:53.694321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.951 [2024-11-17 09:36:53.694356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.951 qpair failed and we were unable to recover it. 00:36:48.951 [2024-11-17 09:36:53.694513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.951 [2024-11-17 09:36:53.694547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.951 qpair failed and we were unable to recover it. 00:36:48.951 [2024-11-17 09:36:53.694684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.951 [2024-11-17 09:36:53.694718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.951 qpair failed and we were unable to recover it. 00:36:48.951 [2024-11-17 09:36:53.694881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.951 [2024-11-17 09:36:53.694915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.951 qpair failed and we were unable to recover it. 00:36:48.951 [2024-11-17 09:36:53.695046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.951 [2024-11-17 09:36:53.695080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.951 qpair failed and we were unable to recover it. 00:36:48.951 [2024-11-17 09:36:53.695190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.951 [2024-11-17 09:36:53.695223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.951 qpair failed and we were unable to recover it. 00:36:48.951 [2024-11-17 09:36:53.695358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.951 [2024-11-17 09:36:53.695400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.951 qpair failed and we were unable to recover it. 00:36:48.951 [2024-11-17 09:36:53.695561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.951 [2024-11-17 09:36:53.695594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.951 qpair failed and we were unable to recover it. 
00:36:48.951 [2024-11-17 09:36:53.695723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.951 [2024-11-17 09:36:53.695755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.951 qpair failed and we were unable to recover it. 00:36:48.951 [2024-11-17 09:36:53.695857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.951 [2024-11-17 09:36:53.695890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.951 qpair failed and we were unable to recover it. 00:36:48.951 [2024-11-17 09:36:53.696022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.952 [2024-11-17 09:36:53.696054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.952 qpair failed and we were unable to recover it. 00:36:48.952 [2024-11-17 09:36:53.696153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.952 [2024-11-17 09:36:53.696204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.952 qpair failed and we were unable to recover it. 00:36:48.952 [2024-11-17 09:36:53.696321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.952 [2024-11-17 09:36:53.696356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.952 qpair failed and we were unable to recover it. 00:36:48.952 [2024-11-17 09:36:53.696524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.952 [2024-11-17 09:36:53.696556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.952 qpair failed and we were unable to recover it. 00:36:48.952 [2024-11-17 09:36:53.696694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.952 [2024-11-17 09:36:53.696728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.952 qpair failed and we were unable to recover it. 00:36:48.952 [2024-11-17 09:36:53.696855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.952 [2024-11-17 09:36:53.696892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.952 qpair failed and we were unable to recover it. 00:36:48.952 [2024-11-17 09:36:53.697051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.952 [2024-11-17 09:36:53.697083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.952 qpair failed and we were unable to recover it. 00:36:48.952 [2024-11-17 09:36:53.697187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.952 [2024-11-17 09:36:53.697219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.952 qpair failed and we were unable to recover it. 
00:36:48.952 [2024-11-17 09:36:53.697365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.952 [2024-11-17 09:36:53.697423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.952 qpair failed and we were unable to recover it. 00:36:48.952 [2024-11-17 09:36:53.697573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.952 [2024-11-17 09:36:53.697609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.952 qpair failed and we were unable to recover it. 00:36:48.952 [2024-11-17 09:36:53.697740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.952 [2024-11-17 09:36:53.697774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.952 qpair failed and we were unable to recover it. 00:36:48.952 [2024-11-17 09:36:53.697920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.952 [2024-11-17 09:36:53.697955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.952 qpair failed and we were unable to recover it. 00:36:48.952 [2024-11-17 09:36:53.698060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.952 [2024-11-17 09:36:53.698094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.952 qpair failed and we were unable to recover it. 00:36:48.952 [2024-11-17 09:36:53.698253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.952 [2024-11-17 09:36:53.698287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.952 qpair failed and we were unable to recover it. 00:36:48.952 [2024-11-17 09:36:53.698405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.952 [2024-11-17 09:36:53.698440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.952 qpair failed and we were unable to recover it. 00:36:48.952 [2024-11-17 09:36:53.698576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.952 [2024-11-17 09:36:53.698608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.952 qpair failed and we were unable to recover it. 00:36:48.952 [2024-11-17 09:36:53.698744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.952 [2024-11-17 09:36:53.698776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.952 qpair failed and we were unable to recover it. 00:36:48.952 [2024-11-17 09:36:53.698941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.952 [2024-11-17 09:36:53.698976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.952 qpair failed and we were unable to recover it. 
00:36:48.952 [2024-11-17 09:36:53.699149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.952 [2024-11-17 09:36:53.699184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.952 qpair failed and we were unable to recover it. 00:36:48.952 [2024-11-17 09:36:53.699333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.952 [2024-11-17 09:36:53.699375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.952 qpair failed and we were unable to recover it. 00:36:48.952 [2024-11-17 09:36:53.699514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.952 [2024-11-17 09:36:53.699549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.952 qpair failed and we were unable to recover it. 00:36:48.952 [2024-11-17 09:36:53.699707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.952 [2024-11-17 09:36:53.699741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.952 qpair failed and we were unable to recover it. 00:36:48.952 [2024-11-17 09:36:53.699908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.952 [2024-11-17 09:36:53.699943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.952 qpair failed and we were unable to recover it. 00:36:48.952 [2024-11-17 09:36:53.700091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.952 [2024-11-17 09:36:53.700126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.952 qpair failed and we were unable to recover it. 00:36:48.952 [2024-11-17 09:36:53.700289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.952 [2024-11-17 09:36:53.700323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.952 qpair failed and we were unable to recover it. 00:36:48.952 [2024-11-17 09:36:53.700463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.952 [2024-11-17 09:36:53.700496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.952 qpair failed and we were unable to recover it. 00:36:48.952 [2024-11-17 09:36:53.700654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.952 [2024-11-17 09:36:53.700687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.952 qpair failed and we were unable to recover it. 00:36:48.952 [2024-11-17 09:36:53.700795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.952 [2024-11-17 09:36:53.700829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.952 qpair failed and we were unable to recover it. 
00:36:48.952 [2024-11-17 09:36:53.700962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.952 [2024-11-17 09:36:53.700994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.952 qpair failed and we were unable to recover it. 00:36:48.952 [2024-11-17 09:36:53.701165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.952 [2024-11-17 09:36:53.701210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.952 qpair failed and we were unable to recover it. 00:36:48.952 [2024-11-17 09:36:53.701376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.952 [2024-11-17 09:36:53.701428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.952 qpair failed and we were unable to recover it. 00:36:48.952 [2024-11-17 09:36:53.701533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.952 [2024-11-17 09:36:53.701566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.952 qpair failed and we were unable to recover it. 00:36:48.952 [2024-11-17 09:36:53.701740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.952 [2024-11-17 09:36:53.701775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.952 qpair failed and we were unable to recover it. 00:36:48.952 [2024-11-17 09:36:53.701933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.952 [2024-11-17 09:36:53.701966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.952 qpair failed and we were unable to recover it. 00:36:48.952 [2024-11-17 09:36:53.702091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.952 [2024-11-17 09:36:53.702124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.952 qpair failed and we were unable to recover it. 00:36:48.952 [2024-11-17 09:36:53.702259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.952 [2024-11-17 09:36:53.702292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.952 qpair failed and we were unable to recover it. 00:36:48.952 [2024-11-17 09:36:53.702425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.952 [2024-11-17 09:36:53.702460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.952 qpair failed and we were unable to recover it. 00:36:48.952 [2024-11-17 09:36:53.702621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.952 [2024-11-17 09:36:53.702653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.952 qpair failed and we were unable to recover it. 
00:36:48.952 [2024-11-17 09:36:53.702785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.952 [2024-11-17 09:36:53.702818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.952 qpair failed and we were unable to recover it. 00:36:48.952 [2024-11-17 09:36:53.702942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.952 [2024-11-17 09:36:53.702975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.952 qpair failed and we were unable to recover it. 00:36:48.952 [2024-11-17 09:36:53.703079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.952 [2024-11-17 09:36:53.703112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.952 qpair failed and we were unable to recover it. 00:36:48.952 [2024-11-17 09:36:53.703297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.952 [2024-11-17 09:36:53.703346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.952 qpair failed and we were unable to recover it. 00:36:48.952 [2024-11-17 09:36:53.703529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.952 [2024-11-17 09:36:53.703565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.952 qpair failed and we were unable to recover it. 00:36:48.952 [2024-11-17 09:36:53.703699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.952 [2024-11-17 09:36:53.703733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.952 qpair failed and we were unable to recover it. 00:36:48.952 [2024-11-17 09:36:53.703868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.953 [2024-11-17 09:36:53.703902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.953 qpair failed and we were unable to recover it. 00:36:48.953 [2024-11-17 09:36:53.704067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.953 [2024-11-17 09:36:53.704106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.953 qpair failed and we were unable to recover it. 00:36:48.953 [2024-11-17 09:36:53.704213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.953 [2024-11-17 09:36:53.704246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.953 qpair failed and we were unable to recover it. 00:36:48.953 [2024-11-17 09:36:53.704408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.953 [2024-11-17 09:36:53.704443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.953 qpair failed and we were unable to recover it. 
00:36:48.953 [2024-11-17 09:36:53.704557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.953 [2024-11-17 09:36:53.704590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.953 qpair failed and we were unable to recover it. 00:36:48.953 [2024-11-17 09:36:53.704722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.953 [2024-11-17 09:36:53.704754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.953 qpair failed and we were unable to recover it. 00:36:48.953 [2024-11-17 09:36:53.704889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.953 [2024-11-17 09:36:53.704922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.953 qpair failed and we were unable to recover it. 00:36:48.953 [2024-11-17 09:36:53.705052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.953 [2024-11-17 09:36:53.705084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.953 qpair failed and we were unable to recover it. 00:36:48.953 [2024-11-17 09:36:53.705244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.953 [2024-11-17 09:36:53.705276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.953 qpair failed and we were unable to recover it. 00:36:48.953 [2024-11-17 09:36:53.705392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.953 [2024-11-17 09:36:53.705428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.953 qpair failed and we were unable to recover it. 00:36:48.953 [2024-11-17 09:36:53.705586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.953 [2024-11-17 09:36:53.705619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.953 qpair failed and we were unable to recover it. 00:36:48.953 [2024-11-17 09:36:53.705721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.953 [2024-11-17 09:36:53.705754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.953 qpair failed and we were unable to recover it. 00:36:48.953 [2024-11-17 09:36:53.705915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.953 [2024-11-17 09:36:53.705949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.953 qpair failed and we were unable to recover it. 00:36:48.953 [2024-11-17 09:36:53.706082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.953 [2024-11-17 09:36:53.706116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.953 qpair failed and we were unable to recover it. 
00:36:48.953 [2024-11-17 09:36:53.706272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.953 [2024-11-17 09:36:53.706305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.953 qpair failed and we were unable to recover it. 00:36:48.953 [2024-11-17 09:36:53.706421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.953 [2024-11-17 09:36:53.706455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.953 qpair failed and we were unable to recover it. 00:36:48.953 [2024-11-17 09:36:53.706588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.953 [2024-11-17 09:36:53.706620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.953 qpair failed and we were unable to recover it. 00:36:48.953 [2024-11-17 09:36:53.706754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.953 [2024-11-17 09:36:53.706786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.953 qpair failed and we were unable to recover it. 00:36:48.953 [2024-11-17 09:36:53.706919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.953 [2024-11-17 09:36:53.706951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.953 qpair failed and we were unable to recover it. 00:36:48.953 [2024-11-17 09:36:53.707084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.953 [2024-11-17 09:36:53.707116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.953 qpair failed and we were unable to recover it. 00:36:48.953 [2024-11-17 09:36:53.707240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.953 [2024-11-17 09:36:53.707272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.953 qpair failed and we were unable to recover it. 00:36:48.953 [2024-11-17 09:36:53.707376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.953 [2024-11-17 09:36:53.707411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.953 qpair failed and we were unable to recover it. 00:36:48.953 [2024-11-17 09:36:53.707519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.953 [2024-11-17 09:36:53.707553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.953 qpair failed and we were unable to recover it. 00:36:48.953 [2024-11-17 09:36:53.707712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.953 [2024-11-17 09:36:53.707745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.953 qpair failed and we were unable to recover it. 
00:36:48.953 [2024-11-17 09:36:53.707878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.953 [2024-11-17 09:36:53.707911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.953 qpair failed and we were unable to recover it. 00:36:48.953 [2024-11-17 09:36:53.708046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.953 [2024-11-17 09:36:53.708079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.953 qpair failed and we were unable to recover it. 00:36:48.953 [2024-11-17 09:36:53.708213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.953 [2024-11-17 09:36:53.708246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.953 qpair failed and we were unable to recover it. 00:36:48.953 [2024-11-17 09:36:53.708387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.953 [2024-11-17 09:36:53.708421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.953 qpair failed and we were unable to recover it. 00:36:48.953 [2024-11-17 09:36:53.708527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.953 [2024-11-17 09:36:53.708567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.953 qpair failed and we were unable to recover it. 00:36:48.953 [2024-11-17 09:36:53.708700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.953 [2024-11-17 09:36:53.708732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.953 qpair failed and we were unable to recover it. 00:36:48.953 [2024-11-17 09:36:53.708868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.953 [2024-11-17 09:36:53.708901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.953 qpair failed and we were unable to recover it. 00:36:48.953 [2024-11-17 09:36:53.709078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.953 [2024-11-17 09:36:53.709110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.953 qpair failed and we were unable to recover it. 00:36:48.953 [2024-11-17 09:36:53.709236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.953 [2024-11-17 09:36:53.709269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.953 qpair failed and we were unable to recover it. 00:36:48.953 [2024-11-17 09:36:53.709408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.953 [2024-11-17 09:36:53.709442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.953 qpair failed and we were unable to recover it. 
00:36:48.953 [2024-11-17 09:36:53.709578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.953 [2024-11-17 09:36:53.709612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.953 qpair failed and we were unable to recover it. 00:36:48.953 [2024-11-17 09:36:53.709747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.953 [2024-11-17 09:36:53.709781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.953 qpair failed and we were unable to recover it. 00:36:48.953 [2024-11-17 09:36:53.709910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.953 [2024-11-17 09:36:53.709943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.953 qpair failed and we were unable to recover it. 00:36:48.953 [2024-11-17 09:36:53.710075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.953 [2024-11-17 09:36:53.710109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.953 qpair failed and we were unable to recover it. 00:36:48.953 [2024-11-17 09:36:53.710244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.953 [2024-11-17 09:36:53.710278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.953 qpair failed and we were unable to recover it. 00:36:48.953 [2024-11-17 09:36:53.710442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.953 [2024-11-17 09:36:53.710478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.953 qpair failed and we were unable to recover it. 00:36:48.953 [2024-11-17 09:36:53.710637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.953 [2024-11-17 09:36:53.710670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.953 qpair failed and we were unable to recover it. 00:36:48.953 [2024-11-17 09:36:53.710824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.953 [2024-11-17 09:36:53.710856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.953 qpair failed and we were unable to recover it. 00:36:48.953 [2024-11-17 09:36:53.710972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.953 [2024-11-17 09:36:53.711005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.953 qpair failed and we were unable to recover it. 00:36:48.953 [2024-11-17 09:36:53.711137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.953 [2024-11-17 09:36:53.711171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.953 qpair failed and we were unable to recover it. 
00:36:48.954 [2024-11-17 09:36:53.711288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.954 [2024-11-17 09:36:53.711320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.954 qpair failed and we were unable to recover it. 00:36:48.954 [2024-11-17 09:36:53.711499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.954 [2024-11-17 09:36:53.711534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.954 qpair failed and we were unable to recover it. 00:36:48.954 [2024-11-17 09:36:53.711663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.954 [2024-11-17 09:36:53.711697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.954 qpair failed and we were unable to recover it. 00:36:48.954 [2024-11-17 09:36:53.711830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.954 [2024-11-17 09:36:53.711868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.954 qpair failed and we were unable to recover it. 00:36:48.954 [2024-11-17 09:36:53.711977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.954 [2024-11-17 09:36:53.712011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.954 qpair failed and we were unable to recover it. 00:36:48.954 [2024-11-17 09:36:53.712125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.954 [2024-11-17 09:36:53.712159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.954 qpair failed and we were unable to recover it. 00:36:48.954 [2024-11-17 09:36:53.712263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.954 [2024-11-17 09:36:53.712321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.954 qpair failed and we were unable to recover it. 00:36:48.954 [2024-11-17 09:36:53.712490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.954 [2024-11-17 09:36:53.712523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.954 qpair failed and we were unable to recover it. 00:36:48.954 [2024-11-17 09:36:53.712637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.954 [2024-11-17 09:36:53.712671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.954 qpair failed and we were unable to recover it. 00:36:48.954 [2024-11-17 09:36:53.712810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.954 [2024-11-17 09:36:53.712843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.954 qpair failed and we were unable to recover it. 
00:36:48.954 [2024-11-17 09:36:53.712960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.954 [2024-11-17 09:36:53.712994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.954 qpair failed and we were unable to recover it. 00:36:48.954 [2024-11-17 09:36:53.713110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.954 [2024-11-17 09:36:53.713143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.954 qpair failed and we were unable to recover it. 00:36:48.954 [2024-11-17 09:36:53.713279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.954 [2024-11-17 09:36:53.713312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.954 qpair failed and we were unable to recover it. 00:36:48.954 [2024-11-17 09:36:53.713424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.954 [2024-11-17 09:36:53.713457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.954 qpair failed and we were unable to recover it. 00:36:48.954 [2024-11-17 09:36:53.713592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.954 [2024-11-17 09:36:53.713624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.954 qpair failed and we were unable to recover it. 00:36:48.954 [2024-11-17 09:36:53.713741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.954 [2024-11-17 09:36:53.713774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.954 qpair failed and we were unable to recover it. 00:36:48.954 [2024-11-17 09:36:53.713906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.954 [2024-11-17 09:36:53.713939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.954 qpair failed and we were unable to recover it. 00:36:48.954 [2024-11-17 09:36:53.714045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.954 [2024-11-17 09:36:53.714077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.954 qpair failed and we were unable to recover it. 00:36:48.954 [2024-11-17 09:36:53.714187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.954 [2024-11-17 09:36:53.714220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.954 qpair failed and we were unable to recover it. 00:36:48.954 [2024-11-17 09:36:53.714330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.954 [2024-11-17 09:36:53.714364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.954 qpair failed and we were unable to recover it. 
00:36:48.954 [2024-11-17 09:36:53.714508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.954 [2024-11-17 09:36:53.714542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.954 qpair failed and we were unable to recover it. 00:36:48.954 [2024-11-17 09:36:53.714674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.954 [2024-11-17 09:36:53.714707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.954 qpair failed and we were unable to recover it. 00:36:48.954 [2024-11-17 09:36:53.714819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.954 [2024-11-17 09:36:53.714852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.954 qpair failed and we were unable to recover it. 00:36:48.954 [2024-11-17 09:36:53.714978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.954 [2024-11-17 09:36:53.715011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.954 qpair failed and we were unable to recover it. 00:36:48.954 [2024-11-17 09:36:53.715122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.954 [2024-11-17 09:36:53.715159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.954 qpair failed and we were unable to recover it. 00:36:48.954 [2024-11-17 09:36:53.715294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.954 [2024-11-17 09:36:53.715327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.954 qpair failed and we were unable to recover it. 00:36:48.954 [2024-11-17 09:36:53.715469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.954 [2024-11-17 09:36:53.715502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.954 qpair failed and we were unable to recover it. 00:36:48.954 [2024-11-17 09:36:53.715606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.954 [2024-11-17 09:36:53.715638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.954 qpair failed and we were unable to recover it. 00:36:48.954 [2024-11-17 09:36:53.715759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.954 [2024-11-17 09:36:53.715792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.954 qpair failed and we were unable to recover it. 00:36:48.954 [2024-11-17 09:36:53.715893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.954 [2024-11-17 09:36:53.715925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.954 qpair failed and we were unable to recover it. 
00:36:48.954 [2024-11-17 09:36:53.716027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.954 [2024-11-17 09:36:53.716060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.954 qpair failed and we were unable to recover it. 00:36:48.954 [2024-11-17 09:36:53.716163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.954 [2024-11-17 09:36:53.716195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.954 qpair failed and we were unable to recover it. 00:36:48.954 [2024-11-17 09:36:53.716330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.954 [2024-11-17 09:36:53.716363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.954 qpair failed and we were unable to recover it. 00:36:48.954 [2024-11-17 09:36:53.716518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.954 [2024-11-17 09:36:53.716550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.954 qpair failed and we were unable to recover it. 00:36:48.954 [2024-11-17 09:36:53.716727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.954 [2024-11-17 09:36:53.716780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.954 qpair failed and we were unable to recover it. 00:36:48.954 [2024-11-17 09:36:53.716899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.954 [2024-11-17 09:36:53.716936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.954 qpair failed and we were unable to recover it. 00:36:48.954 [2024-11-17 09:36:53.717099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.954 [2024-11-17 09:36:53.717133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.954 qpair failed and we were unable to recover it. 00:36:48.954 [2024-11-17 09:36:53.717279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.954 [2024-11-17 09:36:53.717317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.954 qpair failed and we were unable to recover it. 00:36:48.954 [2024-11-17 09:36:53.717502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.954 [2024-11-17 09:36:53.717538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.954 qpair failed and we were unable to recover it. 00:36:48.954 [2024-11-17 09:36:53.717652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.954 [2024-11-17 09:36:53.717687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.954 qpair failed and we were unable to recover it. 
00:36:48.954 [2024-11-17 09:36:53.717797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.954 [2024-11-17 09:36:53.717832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.954 qpair failed and we were unable to recover it. 00:36:48.954 [2024-11-17 09:36:53.717962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.954 [2024-11-17 09:36:53.717994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.954 qpair failed and we were unable to recover it. 00:36:48.954 [2024-11-17 09:36:53.718102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.954 [2024-11-17 09:36:53.718134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.954 qpair failed and we were unable to recover it. 00:36:48.954 [2024-11-17 09:36:53.718244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.955 [2024-11-17 09:36:53.718276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.955 qpair failed and we were unable to recover it. 00:36:48.955 [2024-11-17 09:36:53.718380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.955 [2024-11-17 09:36:53.718414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.955 qpair failed and we were unable to recover it. 00:36:48.955 [2024-11-17 09:36:53.718549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.955 [2024-11-17 09:36:53.718582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.955 qpair failed and we were unable to recover it. 00:36:48.955 [2024-11-17 09:36:53.718691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.955 [2024-11-17 09:36:53.718733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.955 qpair failed and we were unable to recover it. 00:36:48.955 [2024-11-17 09:36:53.718872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.955 [2024-11-17 09:36:53.718906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.955 qpair failed and we were unable to recover it. 00:36:48.955 [2024-11-17 09:36:53.719048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.955 [2024-11-17 09:36:53.719081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.955 qpair failed and we were unable to recover it. 00:36:48.955 [2024-11-17 09:36:53.719197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.955 [2024-11-17 09:36:53.719231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.955 qpair failed and we were unable to recover it. 
00:36:48.955 [2024-11-17 09:36:53.719394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.955 [2024-11-17 09:36:53.719428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.955 qpair failed and we were unable to recover it. 00:36:48.955 [2024-11-17 09:36:53.719546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.955 [2024-11-17 09:36:53.719588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.955 qpair failed and we were unable to recover it. 00:36:48.955 [2024-11-17 09:36:53.719700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.955 [2024-11-17 09:36:53.719735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.955 qpair failed and we were unable to recover it. 00:36:48.955 [2024-11-17 09:36:53.719839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.955 [2024-11-17 09:36:53.719871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.955 qpair failed and we were unable to recover it. 00:36:48.955 [2024-11-17 09:36:53.720004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.955 [2024-11-17 09:36:53.720036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.955 qpair failed and we were unable to recover it. 00:36:48.955 [2024-11-17 09:36:53.720146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.955 [2024-11-17 09:36:53.720178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.955 qpair failed and we were unable to recover it. 00:36:48.955 [2024-11-17 09:36:53.720289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.955 [2024-11-17 09:36:53.720321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.955 qpair failed and we were unable to recover it. 00:36:48.955 [2024-11-17 09:36:53.720443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.955 [2024-11-17 09:36:53.720477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.955 qpair failed and we were unable to recover it. 00:36:48.955 [2024-11-17 09:36:53.720586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.955 [2024-11-17 09:36:53.720618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.955 qpair failed and we were unable to recover it. 00:36:48.955 [2024-11-17 09:36:53.720741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.955 [2024-11-17 09:36:53.720774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.955 qpair failed and we were unable to recover it. 
00:36:48.955 [2024-11-17 09:36:53.720906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.955 [2024-11-17 09:36:53.720938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.955 qpair failed and we were unable to recover it. 00:36:48.955 [2024-11-17 09:36:53.721054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.955 [2024-11-17 09:36:53.721089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.955 qpair failed and we were unable to recover it. 00:36:48.955 [2024-11-17 09:36:53.721247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.955 [2024-11-17 09:36:53.721286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.955 qpair failed and we were unable to recover it. 00:36:48.955 [2024-11-17 09:36:53.721449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.955 [2024-11-17 09:36:53.721483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.955 qpair failed and we were unable to recover it. 00:36:48.955 [2024-11-17 09:36:53.721596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.955 [2024-11-17 09:36:53.721635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.955 qpair failed and we were unable to recover it. 00:36:48.955 [2024-11-17 09:36:53.721768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.955 [2024-11-17 09:36:53.721801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.955 qpair failed and we were unable to recover it. 00:36:48.955 [2024-11-17 09:36:53.721944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.955 [2024-11-17 09:36:53.721977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.955 qpair failed and we were unable to recover it. 00:36:48.955 [2024-11-17 09:36:53.722150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.955 [2024-11-17 09:36:53.722184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.955 qpair failed and we were unable to recover it. 00:36:48.955 [2024-11-17 09:36:53.722320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.955 [2024-11-17 09:36:53.722352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.955 qpair failed and we were unable to recover it. 00:36:48.955 [2024-11-17 09:36:53.722466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.955 [2024-11-17 09:36:53.722498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.955 qpair failed and we were unable to recover it. 
00:36:48.955 [2024-11-17 09:36:53.722598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.955 [2024-11-17 09:36:53.722631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.955 qpair failed and we were unable to recover it. 00:36:48.955 [2024-11-17 09:36:53.722737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.955 [2024-11-17 09:36:53.722769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.955 qpair failed and we were unable to recover it. 00:36:48.955 [2024-11-17 09:36:53.722883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.955 [2024-11-17 09:36:53.722916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.955 qpair failed and we were unable to recover it. 00:36:48.955 [2024-11-17 09:36:53.723046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.955 [2024-11-17 09:36:53.723079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.955 qpair failed and we were unable to recover it. 00:36:48.955 [2024-11-17 09:36:53.723212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.955 [2024-11-17 09:36:53.723245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.955 qpair failed and we were unable to recover it. 00:36:48.955 [2024-11-17 09:36:53.723382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.955 [2024-11-17 09:36:53.723415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.955 qpair failed and we were unable to recover it. 00:36:48.955 [2024-11-17 09:36:53.723548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.955 [2024-11-17 09:36:53.723580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.955 qpair failed and we were unable to recover it. 00:36:48.955 [2024-11-17 09:36:53.723710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.955 [2024-11-17 09:36:53.723742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.955 qpair failed and we were unable to recover it. 00:36:48.955 [2024-11-17 09:36:53.723849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.955 [2024-11-17 09:36:53.723881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.955 qpair failed and we were unable to recover it. 00:36:48.955 [2024-11-17 09:36:53.724017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.955 [2024-11-17 09:36:53.724050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.955 qpair failed and we were unable to recover it. 
00:36:48.955 [2024-11-17 09:36:53.724177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.955 [2024-11-17 09:36:53.724209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.955 qpair failed and we were unable to recover it. 00:36:48.955 [2024-11-17 09:36:53.724343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.955 [2024-11-17 09:36:53.724382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.955 qpair failed and we were unable to recover it. 00:36:48.955 [2024-11-17 09:36:53.724496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.956 [2024-11-17 09:36:53.724528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.956 qpair failed and we were unable to recover it. 00:36:48.956 [2024-11-17 09:36:53.724657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.956 [2024-11-17 09:36:53.724691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.956 qpair failed and we were unable to recover it. 00:36:48.956 [2024-11-17 09:36:53.724812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.956 [2024-11-17 09:36:53.724845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.956 qpair failed and we were unable to recover it. 00:36:48.956 [2024-11-17 09:36:53.724954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.956 [2024-11-17 09:36:53.724986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.956 qpair failed and we were unable to recover it. 00:36:48.956 [2024-11-17 09:36:53.725094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.956 [2024-11-17 09:36:53.725127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.956 qpair failed and we were unable to recover it. 00:36:48.956 [2024-11-17 09:36:53.725260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.956 [2024-11-17 09:36:53.725291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.956 qpair failed and we were unable to recover it. 00:36:48.956 [2024-11-17 09:36:53.725441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.956 [2024-11-17 09:36:53.725475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.956 qpair failed and we were unable to recover it. 00:36:48.956 [2024-11-17 09:36:53.725582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.956 [2024-11-17 09:36:53.725615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.956 qpair failed and we were unable to recover it. 
00:36:48.956 [2024-11-17 09:36:53.725747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.956 [2024-11-17 09:36:53.725779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.956 qpair failed and we were unable to recover it. 00:36:48.956 [2024-11-17 09:36:53.725896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.956 [2024-11-17 09:36:53.725929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.956 qpair failed and we were unable to recover it. 00:36:48.956 [2024-11-17 09:36:53.726032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.956 [2024-11-17 09:36:53.726064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.956 qpair failed and we were unable to recover it. 00:36:48.956 [2024-11-17 09:36:53.726172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.956 [2024-11-17 09:36:53.726221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.956 qpair failed and we were unable to recover it. 00:36:48.956 [2024-11-17 09:36:53.726336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.956 [2024-11-17 09:36:53.726380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.956 qpair failed and we were unable to recover it. 00:36:48.956 [2024-11-17 09:36:53.726530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.956 [2024-11-17 09:36:53.726562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.956 qpair failed and we were unable to recover it. 00:36:48.956 [2024-11-17 09:36:53.726666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.956 [2024-11-17 09:36:53.726699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.956 qpair failed and we were unable to recover it. 00:36:48.956 [2024-11-17 09:36:53.726834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.956 [2024-11-17 09:36:53.726866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.956 qpair failed and we were unable to recover it. 00:36:48.956 [2024-11-17 09:36:53.726961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.956 [2024-11-17 09:36:53.726994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.956 qpair failed and we were unable to recover it. 00:36:48.956 [2024-11-17 09:36:53.727096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.956 [2024-11-17 09:36:53.727128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.956 qpair failed and we were unable to recover it. 
00:36:48.956 [2024-11-17 09:36:53.727245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.956 [2024-11-17 09:36:53.727293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.956 qpair failed and we were unable to recover it. 00:36:48.956 [2024-11-17 09:36:53.727424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.956 [2024-11-17 09:36:53.727460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.956 qpair failed and we were unable to recover it. 00:36:48.956 [2024-11-17 09:36:53.727575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.956 [2024-11-17 09:36:53.727610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.956 qpair failed and we were unable to recover it. 00:36:48.956 [2024-11-17 09:36:53.727737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.956 [2024-11-17 09:36:53.727778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.956 qpair failed and we were unable to recover it. 00:36:48.956 [2024-11-17 09:36:53.727884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.956 [2024-11-17 09:36:53.727923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.956 qpair failed and we were unable to recover it. 00:36:48.956 [2024-11-17 09:36:53.728047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.956 [2024-11-17 09:36:53.728082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.956 qpair failed and we were unable to recover it. 00:36:48.956 [2024-11-17 09:36:53.728200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.956 [2024-11-17 09:36:53.728234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.956 qpair failed and we were unable to recover it. 00:36:48.956 [2024-11-17 09:36:53.728340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.956 [2024-11-17 09:36:53.728379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.956 qpair failed and we were unable to recover it. 00:36:48.956 [2024-11-17 09:36:53.728485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.956 [2024-11-17 09:36:53.728517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.956 qpair failed and we were unable to recover it. 00:36:48.956 [2024-11-17 09:36:53.728623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.956 [2024-11-17 09:36:53.728655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.956 qpair failed and we were unable to recover it. 
00:36:48.956 [2024-11-17 09:36:53.728762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.956 [2024-11-17 09:36:53.728794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.956 qpair failed and we were unable to recover it. 00:36:48.956 [2024-11-17 09:36:53.728897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.956 [2024-11-17 09:36:53.728929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.956 qpair failed and we were unable to recover it. 00:36:48.956 [2024-11-17 09:36:53.729026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.956 [2024-11-17 09:36:53.729059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.956 qpair failed and we were unable to recover it. 00:36:48.956 [2024-11-17 09:36:53.729194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.956 [2024-11-17 09:36:53.729227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.956 qpair failed and we were unable to recover it. 00:36:48.956 [2024-11-17 09:36:53.729331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.956 [2024-11-17 09:36:53.729364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.956 qpair failed and we were unable to recover it. 00:36:48.956 [2024-11-17 09:36:53.729510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.956 [2024-11-17 09:36:53.729543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.956 qpair failed and we were unable to recover it. 00:36:48.956 [2024-11-17 09:36:53.729720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.956 [2024-11-17 09:36:53.729754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.956 qpair failed and we were unable to recover it. 00:36:48.956 [2024-11-17 09:36:53.729885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.956 [2024-11-17 09:36:53.729917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.956 qpair failed and we were unable to recover it. 00:36:48.956 [2024-11-17 09:36:53.730028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.956 [2024-11-17 09:36:53.730061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.956 qpair failed and we were unable to recover it. 00:36:48.956 [2024-11-17 09:36:53.730194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.956 [2024-11-17 09:36:53.730231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.956 qpair failed and we were unable to recover it. 
00:36:48.956 [2024-11-17 09:36:53.730385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.956 [2024-11-17 09:36:53.730437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.956 qpair failed and we were unable to recover it. 00:36:48.956 [2024-11-17 09:36:53.730580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.956 [2024-11-17 09:36:53.730616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.956 qpair failed and we were unable to recover it. 00:36:48.956 [2024-11-17 09:36:53.730728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.956 [2024-11-17 09:36:53.730761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.956 qpair failed and we were unable to recover it. 00:36:48.956 [2024-11-17 09:36:53.730910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.956 [2024-11-17 09:36:53.730944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.956 qpair failed and we were unable to recover it. 00:36:48.956 [2024-11-17 09:36:53.731075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.956 [2024-11-17 09:36:53.731109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.956 qpair failed and we were unable to recover it. 00:36:48.956 [2024-11-17 09:36:53.731220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.956 [2024-11-17 09:36:53.731264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.957 qpair failed and we were unable to recover it. 00:36:48.957 [2024-11-17 09:36:53.731365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.957 [2024-11-17 09:36:53.731414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.957 qpair failed and we were unable to recover it. 00:36:48.957 [2024-11-17 09:36:53.731553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.957 [2024-11-17 09:36:53.731588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.957 qpair failed and we were unable to recover it. 00:36:48.957 [2024-11-17 09:36:53.731695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.957 [2024-11-17 09:36:53.731738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.957 qpair failed and we were unable to recover it. 00:36:48.957 [2024-11-17 09:36:53.731856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.957 [2024-11-17 09:36:53.731896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.957 qpair failed and we were unable to recover it. 
00:36:48.957 [2024-11-17 09:36:53.732034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.957 [2024-11-17 09:36:53.732070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.957 qpair failed and we were unable to recover it. 00:36:48.957 [2024-11-17 09:36:53.732200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.957 [2024-11-17 09:36:53.732234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.957 qpair failed and we were unable to recover it. 00:36:48.957 [2024-11-17 09:36:53.732365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.957 [2024-11-17 09:36:53.732407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.957 qpair failed and we were unable to recover it. 00:36:48.957 [2024-11-17 09:36:53.732537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.957 [2024-11-17 09:36:53.732570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.957 qpair failed and we were unable to recover it. 00:36:48.957 [2024-11-17 09:36:53.732705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.957 [2024-11-17 09:36:53.732739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.957 qpair failed and we were unable to recover it. 00:36:48.957 [2024-11-17 09:36:53.732877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.957 [2024-11-17 09:36:53.732911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.957 qpair failed and we were unable to recover it. 00:36:48.957 [2024-11-17 09:36:53.733048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.957 [2024-11-17 09:36:53.733083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.957 qpair failed and we were unable to recover it. 00:36:48.957 [2024-11-17 09:36:53.733219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.957 [2024-11-17 09:36:53.733254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.957 qpair failed and we were unable to recover it. 00:36:48.957 [2024-11-17 09:36:53.733398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.957 [2024-11-17 09:36:53.733433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.957 qpair failed and we were unable to recover it. 00:36:48.957 [2024-11-17 09:36:53.733535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.957 [2024-11-17 09:36:53.733569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.957 qpair failed and we were unable to recover it. 
00:36:48.957 [2024-11-17 09:36:53.733741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.957 [2024-11-17 09:36:53.733775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.957 qpair failed and we were unable to recover it. 00:36:48.957 [2024-11-17 09:36:53.733895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.957 [2024-11-17 09:36:53.733928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.957 qpair failed and we were unable to recover it. 00:36:48.957 [2024-11-17 09:36:53.734032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.957 [2024-11-17 09:36:53.734067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.957 qpair failed and we were unable to recover it. 00:36:48.957 [2024-11-17 09:36:53.734190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.957 [2024-11-17 09:36:53.734227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.957 qpair failed and we were unable to recover it. 00:36:48.957 [2024-11-17 09:36:53.734352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.957 [2024-11-17 09:36:53.734419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.957 qpair failed and we were unable to recover it. 00:36:48.957 [2024-11-17 09:36:53.734527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.957 [2024-11-17 09:36:53.734561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.957 qpair failed and we were unable to recover it. 00:36:48.957 [2024-11-17 09:36:53.734673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.957 [2024-11-17 09:36:53.734707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.957 qpair failed and we were unable to recover it. 00:36:48.957 [2024-11-17 09:36:53.734822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.957 [2024-11-17 09:36:53.734856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.957 qpair failed and we were unable to recover it. 00:36:48.957 [2024-11-17 09:36:53.734975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.957 [2024-11-17 09:36:53.735012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.957 qpair failed and we were unable to recover it. 00:36:48.957 [2024-11-17 09:36:53.735150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.957 [2024-11-17 09:36:53.735184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.957 qpair failed and we were unable to recover it. 
00:36:48.957 [2024-11-17 09:36:53.735325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.957 [2024-11-17 09:36:53.735359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.957 qpair failed and we were unable to recover it. 00:36:48.957 [2024-11-17 09:36:53.735474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.957 [2024-11-17 09:36:53.735508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.957 qpair failed and we were unable to recover it. 00:36:48.957 [2024-11-17 09:36:53.735614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.957 [2024-11-17 09:36:53.735648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.957 qpair failed and we were unable to recover it. 00:36:48.957 [2024-11-17 09:36:53.735785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.957 [2024-11-17 09:36:53.735819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.957 qpair failed and we were unable to recover it. 00:36:48.957 [2024-11-17 09:36:53.735922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.957 [2024-11-17 09:36:53.735955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.957 qpair failed and we were unable to recover it. 00:36:48.957 [2024-11-17 09:36:53.736120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.957 [2024-11-17 09:36:53.736154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.957 qpair failed and we were unable to recover it. 00:36:48.957 [2024-11-17 09:36:53.736255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.957 [2024-11-17 09:36:53.736288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.957 qpair failed and we were unable to recover it. 00:36:48.957 [2024-11-17 09:36:53.736428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.957 [2024-11-17 09:36:53.736463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.957 qpair failed and we were unable to recover it. 00:36:48.957 [2024-11-17 09:36:53.736566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.957 [2024-11-17 09:36:53.736600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.957 qpair failed and we were unable to recover it. 00:36:48.957 [2024-11-17 09:36:53.736702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.957 [2024-11-17 09:36:53.736736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.957 qpair failed and we were unable to recover it. 
00:36:48.957 [2024-11-17 09:36:53.736835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.957 [2024-11-17 09:36:53.736869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.957 qpair failed and we were unable to recover it. 00:36:48.957 [2024-11-17 09:36:53.737002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.957 [2024-11-17 09:36:53.737036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.957 qpair failed and we were unable to recover it. 00:36:48.957 [2024-11-17 09:36:53.737165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.957 [2024-11-17 09:36:53.737200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.957 qpair failed and we were unable to recover it. 00:36:48.957 [2024-11-17 09:36:53.737353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.957 [2024-11-17 09:36:53.737408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.957 qpair failed and we were unable to recover it. 00:36:48.957 [2024-11-17 09:36:53.737560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.957 [2024-11-17 09:36:53.737597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.957 qpair failed and we were unable to recover it. 00:36:48.957 [2024-11-17 09:36:53.737703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.957 [2024-11-17 09:36:53.737744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.957 qpair failed and we were unable to recover it. 00:36:48.957 [2024-11-17 09:36:53.737884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.957 [2024-11-17 09:36:53.737918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.957 qpair failed and we were unable to recover it. 00:36:48.957 [2024-11-17 09:36:53.738019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.957 [2024-11-17 09:36:53.738054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.957 qpair failed and we were unable to recover it. 00:36:48.957 [2024-11-17 09:36:53.738179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.957 [2024-11-17 09:36:53.738212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.957 qpair failed and we were unable to recover it. 00:36:48.957 [2024-11-17 09:36:53.738380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.958 [2024-11-17 09:36:53.738415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.958 qpair failed and we were unable to recover it. 
00:36:48.958 [2024-11-17 09:36:53.738547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.958 [2024-11-17 09:36:53.738580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.958 qpair failed and we were unable to recover it. 00:36:48.958 [2024-11-17 09:36:53.738751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.958 [2024-11-17 09:36:53.738784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.958 qpair failed and we were unable to recover it. 00:36:48.958 [2024-11-17 09:36:53.738919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.958 [2024-11-17 09:36:53.738953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.958 qpair failed and we were unable to recover it. 00:36:48.958 [2024-11-17 09:36:53.739089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.958 [2024-11-17 09:36:53.739122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.958 qpair failed and we were unable to recover it. 00:36:48.958 [2024-11-17 09:36:53.739255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.958 [2024-11-17 09:36:53.739289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.958 qpair failed and we were unable to recover it. 00:36:48.958 [2024-11-17 09:36:53.739444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.958 [2024-11-17 09:36:53.739480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.958 qpair failed and we were unable to recover it. 00:36:48.958 [2024-11-17 09:36:53.739582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.958 [2024-11-17 09:36:53.739616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.958 qpair failed and we were unable to recover it. 00:36:48.958 [2024-11-17 09:36:53.739748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.958 [2024-11-17 09:36:53.739781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.958 qpair failed and we were unable to recover it. 00:36:48.958 [2024-11-17 09:36:53.739891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.958 [2024-11-17 09:36:53.739924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.958 qpair failed and we were unable to recover it. 00:36:48.958 [2024-11-17 09:36:53.740064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.958 [2024-11-17 09:36:53.740098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.958 qpair failed and we were unable to recover it. 
00:36:48.958 [2024-11-17 09:36:53.740233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.958 [2024-11-17 09:36:53.740268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.958 qpair failed and we were unable to recover it. 00:36:48.958 [2024-11-17 09:36:53.740383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.958 [2024-11-17 09:36:53.740418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.958 qpair failed and we were unable to recover it. 00:36:48.958 [2024-11-17 09:36:53.740525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.958 [2024-11-17 09:36:53.740558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.958 qpair failed and we were unable to recover it. 00:36:48.958 [2024-11-17 09:36:53.740667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.958 [2024-11-17 09:36:53.740700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.958 qpair failed and we were unable to recover it. 00:36:48.958 [2024-11-17 09:36:53.740844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.958 [2024-11-17 09:36:53.740882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.958 qpair failed and we were unable to recover it. 00:36:48.958 [2024-11-17 09:36:53.740994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.958 [2024-11-17 09:36:53.741027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.958 qpair failed and we were unable to recover it. 00:36:48.958 [2024-11-17 09:36:53.741132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.958 [2024-11-17 09:36:53.741165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.958 qpair failed and we were unable to recover it. 00:36:48.958 [2024-11-17 09:36:53.741301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.958 [2024-11-17 09:36:53.741333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.958 qpair failed and we were unable to recover it. 00:36:48.958 [2024-11-17 09:36:53.741456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.958 [2024-11-17 09:36:53.741490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.958 qpair failed and we were unable to recover it. 00:36:48.958 [2024-11-17 09:36:53.741614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.958 [2024-11-17 09:36:53.741647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.958 qpair failed and we were unable to recover it. 
00:36:48.958 [2024-11-17 09:36:53.741785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.958 [2024-11-17 09:36:53.741819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.958 qpair failed and we were unable to recover it. 00:36:48.958 [2024-11-17 09:36:53.741924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.958 [2024-11-17 09:36:53.741956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.958 qpair failed and we were unable to recover it. 00:36:48.958 [2024-11-17 09:36:53.742084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.958 [2024-11-17 09:36:53.742117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.958 qpair failed and we were unable to recover it. 00:36:48.958 [2024-11-17 09:36:53.742249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.958 [2024-11-17 09:36:53.742282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.958 qpair failed and we were unable to recover it. 00:36:48.958 [2024-11-17 09:36:53.742414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.958 [2024-11-17 09:36:53.742447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.958 qpair failed and we were unable to recover it. 00:36:48.958 [2024-11-17 09:36:53.742554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.958 [2024-11-17 09:36:53.742587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.958 qpair failed and we were unable to recover it. 00:36:48.958 [2024-11-17 09:36:53.742774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.958 [2024-11-17 09:36:53.742822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.958 qpair failed and we were unable to recover it. 00:36:48.958 [2024-11-17 09:36:53.742975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.958 [2024-11-17 09:36:53.743012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.958 qpair failed and we were unable to recover it. 00:36:48.958 [2024-11-17 09:36:53.743151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.958 [2024-11-17 09:36:53.743186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.958 qpair failed and we were unable to recover it. 00:36:48.958 [2024-11-17 09:36:53.743302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.958 [2024-11-17 09:36:53.743337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.958 qpair failed and we were unable to recover it. 
00:36:48.958 [2024-11-17 09:36:53.743485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.958 [2024-11-17 09:36:53.743519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.958 qpair failed and we were unable to recover it. 00:36:48.958 [2024-11-17 09:36:53.743661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.958 [2024-11-17 09:36:53.743695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.958 qpair failed and we were unable to recover it. 00:36:48.958 [2024-11-17 09:36:53.743804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.958 [2024-11-17 09:36:53.743839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.958 qpair failed and we were unable to recover it. 00:36:48.958 [2024-11-17 09:36:53.744004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.958 [2024-11-17 09:36:53.744037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.958 qpair failed and we were unable to recover it. 00:36:48.958 [2024-11-17 09:36:53.744148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.958 [2024-11-17 09:36:53.744182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.958 qpair failed and we were unable to recover it. 00:36:48.958 [2024-11-17 09:36:53.744314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.958 [2024-11-17 09:36:53.744348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.958 qpair failed and we were unable to recover it. 00:36:48.958 [2024-11-17 09:36:53.744458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.958 [2024-11-17 09:36:53.744491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.958 qpair failed and we were unable to recover it. 00:36:48.958 [2024-11-17 09:36:53.744627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.958 [2024-11-17 09:36:53.744660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.958 qpair failed and we were unable to recover it. 00:36:48.958 [2024-11-17 09:36:53.744791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.958 [2024-11-17 09:36:53.744825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.958 qpair failed and we were unable to recover it. 00:36:48.958 [2024-11-17 09:36:53.744956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.958 [2024-11-17 09:36:53.744989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.958 qpair failed and we were unable to recover it. 
00:36:48.958 [2024-11-17 09:36:53.745095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.958 [2024-11-17 09:36:53.745129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.958 qpair failed and we were unable to recover it. 00:36:48.958 [2024-11-17 09:36:53.745244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.958 [2024-11-17 09:36:53.745277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.958 qpair failed and we were unable to recover it. 00:36:48.958 [2024-11-17 09:36:53.745389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.958 [2024-11-17 09:36:53.745423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.958 qpair failed and we were unable to recover it. 00:36:48.958 [2024-11-17 09:36:53.745578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.959 [2024-11-17 09:36:53.745611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.959 qpair failed and we were unable to recover it. 00:36:48.959 [2024-11-17 09:36:53.745746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.959 [2024-11-17 09:36:53.745779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.959 qpair failed and we were unable to recover it. 00:36:48.959 [2024-11-17 09:36:53.745885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.959 [2024-11-17 09:36:53.745918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.959 qpair failed and we were unable to recover it. 00:36:48.959 [2024-11-17 09:36:53.746066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.959 [2024-11-17 09:36:53.746100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.959 qpair failed and we were unable to recover it. 00:36:48.959 [2024-11-17 09:36:53.746204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.959 [2024-11-17 09:36:53.746236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.959 qpair failed and we were unable to recover it. 00:36:48.959 [2024-11-17 09:36:53.746376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.959 [2024-11-17 09:36:53.746409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.959 qpair failed and we were unable to recover it. 00:36:48.959 [2024-11-17 09:36:53.746545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.959 [2024-11-17 09:36:53.746577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.959 qpair failed and we were unable to recover it. 
00:36:48.959 [2024-11-17 09:36:53.746704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.959 [2024-11-17 09:36:53.746738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.959 qpair failed and we were unable to recover it. 00:36:48.959 [2024-11-17 09:36:53.746867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.959 [2024-11-17 09:36:53.746899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.959 qpair failed and we were unable to recover it. 00:36:48.959 [2024-11-17 09:36:53.747042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.959 [2024-11-17 09:36:53.747074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.959 qpair failed and we were unable to recover it. 00:36:48.959 [2024-11-17 09:36:53.747205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.959 [2024-11-17 09:36:53.747238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.959 qpair failed and we were unable to recover it. 00:36:48.959 [2024-11-17 09:36:53.747348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.959 [2024-11-17 09:36:53.747403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.959 qpair failed and we were unable to recover it. 00:36:48.959 [2024-11-17 09:36:53.747520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.959 [2024-11-17 09:36:53.747553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.959 qpair failed and we were unable to recover it. 00:36:48.959 [2024-11-17 09:36:53.747685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.959 [2024-11-17 09:36:53.747718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.959 qpair failed and we were unable to recover it. 00:36:48.959 [2024-11-17 09:36:53.747852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.959 [2024-11-17 09:36:53.747884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.959 qpair failed and we were unable to recover it. 00:36:48.959 [2024-11-17 09:36:53.748017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.959 [2024-11-17 09:36:53.748049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.959 qpair failed and we were unable to recover it. 00:36:48.959 [2024-11-17 09:36:53.748186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.959 [2024-11-17 09:36:53.748218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.959 qpair failed and we were unable to recover it. 
00:36:48.959 [2024-11-17 09:36:53.748345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.959 [2024-11-17 09:36:53.748391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.959 qpair failed and we were unable to recover it. 00:36:48.959 [2024-11-17 09:36:53.748506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.959 [2024-11-17 09:36:53.748539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.959 qpair failed and we were unable to recover it. 00:36:48.959 [2024-11-17 09:36:53.748671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.959 [2024-11-17 09:36:53.748703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.959 qpair failed and we were unable to recover it. 00:36:48.959 [2024-11-17 09:36:53.748809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.959 [2024-11-17 09:36:53.748842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.959 qpair failed and we were unable to recover it. 00:36:48.959 [2024-11-17 09:36:53.749007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.959 [2024-11-17 09:36:53.749056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.959 qpair failed and we were unable to recover it. 00:36:48.959 [2024-11-17 09:36:53.749175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.959 [2024-11-17 09:36:53.749212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.959 qpair failed and we were unable to recover it. 00:36:48.959 [2024-11-17 09:36:53.749346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.959 [2024-11-17 09:36:53.749389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.959 qpair failed and we were unable to recover it. 00:36:48.959 [2024-11-17 09:36:53.749504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.959 [2024-11-17 09:36:53.749540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.959 qpair failed and we were unable to recover it. 00:36:48.959 [2024-11-17 09:36:53.749695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.959 [2024-11-17 09:36:53.749730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.959 qpair failed and we were unable to recover it. 00:36:48.959 [2024-11-17 09:36:53.749837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.959 [2024-11-17 09:36:53.749883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.959 qpair failed and we were unable to recover it. 
00:36:48.959 [2024-11-17 09:36:53.750024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.959 [2024-11-17 09:36:53.750059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.959 qpair failed and we were unable to recover it. 00:36:48.959 [2024-11-17 09:36:53.750162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.959 [2024-11-17 09:36:53.750195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.959 qpair failed and we were unable to recover it. 00:36:48.959 [2024-11-17 09:36:53.750297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.959 [2024-11-17 09:36:53.750329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.959 qpair failed and we were unable to recover it. 00:36:48.959 [2024-11-17 09:36:53.750449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.959 [2024-11-17 09:36:53.750482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.959 qpair failed and we were unable to recover it. 00:36:48.959 [2024-11-17 09:36:53.750591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.959 [2024-11-17 09:36:53.750624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.959 qpair failed and we were unable to recover it. 00:36:48.959 [2024-11-17 09:36:53.750732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.959 [2024-11-17 09:36:53.750765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.959 qpair failed and we were unable to recover it. 00:36:48.959 [2024-11-17 09:36:53.750878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.959 [2024-11-17 09:36:53.750910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.959 qpair failed and we were unable to recover it. 00:36:48.959 [2024-11-17 09:36:53.751038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.959 [2024-11-17 09:36:53.751070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.959 qpair failed and we were unable to recover it. 00:36:48.959 [2024-11-17 09:36:53.751203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.959 [2024-11-17 09:36:53.751236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.959 qpair failed and we were unable to recover it. 00:36:48.959 [2024-11-17 09:36:53.751363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.959 [2024-11-17 09:36:53.751403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.959 qpair failed and we were unable to recover it. 
00:36:48.959 [2024-11-17 09:36:53.751506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.959 [2024-11-17 09:36:53.751539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.959 qpair failed and we were unable to recover it. 00:36:48.959 [2024-11-17 09:36:53.751676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.959 [2024-11-17 09:36:53.751723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:48.959 qpair failed and we were unable to recover it. 00:36:48.959 [2024-11-17 09:36:53.751910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.959 [2024-11-17 09:36:53.751950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.959 qpair failed and we were unable to recover it. 00:36:48.959 [2024-11-17 09:36:53.752071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.960 [2024-11-17 09:36:53.752110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.960 qpair failed and we were unable to recover it. 00:36:48.960 [2024-11-17 09:36:53.752263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.960 [2024-11-17 09:36:53.752301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.960 qpair failed and we were unable to recover it. 00:36:48.960 [2024-11-17 09:36:53.752436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.960 [2024-11-17 09:36:53.752470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.960 qpair failed and we were unable to recover it. 00:36:48.960 [2024-11-17 09:36:53.752575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.960 [2024-11-17 09:36:53.752608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.960 qpair failed and we were unable to recover it. 00:36:48.960 [2024-11-17 09:36:53.752751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.960 [2024-11-17 09:36:53.752785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.960 qpair failed and we were unable to recover it. 00:36:48.960 [2024-11-17 09:36:53.752912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.960 [2024-11-17 09:36:53.752948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.960 qpair failed and we were unable to recover it. 00:36:48.960 [2024-11-17 09:36:53.753059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.960 [2024-11-17 09:36:53.753095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.960 qpair failed and we were unable to recover it. 
00:36:48.960 [2024-11-17 09:36:53.753218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.960 [2024-11-17 09:36:53.753258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.960 qpair failed and we were unable to recover it. 00:36:48.960 [2024-11-17 09:36:53.753423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.960 [2024-11-17 09:36:53.753458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.960 qpair failed and we were unable to recover it. 00:36:48.960 [2024-11-17 09:36:53.753567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.960 [2024-11-17 09:36:53.753601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.960 qpair failed and we were unable to recover it. 00:36:48.960 [2024-11-17 09:36:53.753737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.960 [2024-11-17 09:36:53.753771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.960 qpair failed and we were unable to recover it. 00:36:48.960 [2024-11-17 09:36:53.753911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.960 [2024-11-17 09:36:53.753967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.960 qpair failed and we were unable to recover it. 00:36:48.960 [2024-11-17 09:36:53.754137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.960 [2024-11-17 09:36:53.754176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.960 qpair failed and we were unable to recover it. 00:36:48.960 [2024-11-17 09:36:53.754299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.960 [2024-11-17 09:36:53.754337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.960 qpair failed and we were unable to recover it. 00:36:48.960 [2024-11-17 09:36:53.754520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.960 [2024-11-17 09:36:53.754565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:48.960 qpair failed and we were unable to recover it. 00:36:48.960 [2024-11-17 09:36:53.754728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.960 [2024-11-17 09:36:53.754782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.960 qpair failed and we were unable to recover it. 00:36:48.960 [2024-11-17 09:36:53.754940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.960 [2024-11-17 09:36:53.754990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.960 qpair failed and we were unable to recover it. 
00:36:48.960 [2024-11-17 09:36:53.755161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.960 [2024-11-17 09:36:53.755197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.960 qpair failed and we were unable to recover it. 00:36:48.960 [2024-11-17 09:36:53.755356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.960 [2024-11-17 09:36:53.755401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.960 qpair failed and we were unable to recover it. 00:36:48.960 [2024-11-17 09:36:53.755539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.960 [2024-11-17 09:36:53.755573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.960 qpair failed and we were unable to recover it. 00:36:48.960 [2024-11-17 09:36:53.755704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.960 [2024-11-17 09:36:53.755742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.960 qpair failed and we were unable to recover it. 00:36:48.960 [2024-11-17 09:36:53.755965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.960 [2024-11-17 09:36:53.756008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.960 qpair failed and we were unable to recover it. 00:36:48.960 [2024-11-17 09:36:53.756171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.960 [2024-11-17 09:36:53.756208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.960 qpair failed and we were unable to recover it. 00:36:48.960 [2024-11-17 09:36:53.756344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.960 [2024-11-17 09:36:53.756422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.960 qpair failed and we were unable to recover it. 00:36:48.960 [2024-11-17 09:36:53.756542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.960 [2024-11-17 09:36:53.756577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.960 qpair failed and we were unable to recover it. 00:36:48.960 [2024-11-17 09:36:53.756739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.960 [2024-11-17 09:36:53.756773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.960 qpair failed and we were unable to recover it. 00:36:48.960 [2024-11-17 09:36:53.756886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.960 [2024-11-17 09:36:53.756920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.960 qpair failed and we were unable to recover it. 
00:36:48.960 [2024-11-17 09:36:53.757070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.960 [2024-11-17 09:36:53.757119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.960 qpair failed and we were unable to recover it. 00:36:48.960 [2024-11-17 09:36:53.757301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.960 [2024-11-17 09:36:53.757356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.960 qpair failed and we were unable to recover it. 00:36:48.960 [2024-11-17 09:36:53.757533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.960 [2024-11-17 09:36:53.757570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.960 qpair failed and we were unable to recover it. 00:36:48.960 [2024-11-17 09:36:53.757730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.960 [2024-11-17 09:36:53.757769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.960 qpair failed and we were unable to recover it. 00:36:48.960 [2024-11-17 09:36:53.757884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.960 [2024-11-17 09:36:53.757922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.960 qpair failed and we were unable to recover it. 00:36:48.960 [2024-11-17 09:36:53.758069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.960 [2024-11-17 09:36:53.758107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.960 qpair failed and we were unable to recover it. 00:36:48.960 [2024-11-17 09:36:53.758257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.960 [2024-11-17 09:36:53.758297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.960 qpair failed and we were unable to recover it. 00:36:48.960 [2024-11-17 09:36:53.758457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.960 [2024-11-17 09:36:53.758492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.960 qpair failed and we were unable to recover it. 00:36:48.960 [2024-11-17 09:36:53.758607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.960 [2024-11-17 09:36:53.758641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.960 qpair failed and we were unable to recover it. 00:36:48.960 [2024-11-17 09:36:53.758744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.960 [2024-11-17 09:36:53.758776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.960 qpair failed and we were unable to recover it. 
00:36:48.960 [2024-11-17 09:36:53.758904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.960 [2024-11-17 09:36:53.758936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.960 qpair failed and we were unable to recover it. 00:36:48.960 [2024-11-17 09:36:53.759057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.960 [2024-11-17 09:36:53.759123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.960 qpair failed and we were unable to recover it. 00:36:48.960 [2024-11-17 09:36:53.759359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.960 [2024-11-17 09:36:53.759422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.960 qpair failed and we were unable to recover it. 00:36:48.960 [2024-11-17 09:36:53.759571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.960 [2024-11-17 09:36:53.759607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.960 qpair failed and we were unable to recover it. 00:36:48.960 [2024-11-17 09:36:53.759709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.960 [2024-11-17 09:36:53.759744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.960 qpair failed and we were unable to recover it. 00:36:48.960 [2024-11-17 09:36:53.759920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.960 [2024-11-17 09:36:53.759981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.960 qpair failed and we were unable to recover it. 00:36:48.960 [2024-11-17 09:36:53.760103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.960 [2024-11-17 09:36:53.760154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.960 qpair failed and we were unable to recover it. 00:36:48.960 [2024-11-17 09:36:53.760335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.961 [2024-11-17 09:36:53.760398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.961 qpair failed and we were unable to recover it. 00:36:48.961 [2024-11-17 09:36:53.760536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.961 [2024-11-17 09:36:53.760571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.961 qpair failed and we were unable to recover it. 00:36:48.961 [2024-11-17 09:36:53.760710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.961 [2024-11-17 09:36:53.760744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.961 qpair failed and we were unable to recover it. 
00:36:48.961 [2024-11-17 09:36:53.760846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.961 [2024-11-17 09:36:53.760879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.961 qpair failed and we were unable to recover it. 00:36:48.961 [2024-11-17 09:36:53.761065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.961 [2024-11-17 09:36:53.761122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.961 qpair failed and we were unable to recover it. 00:36:48.961 [2024-11-17 09:36:53.761262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.961 [2024-11-17 09:36:53.761296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.961 qpair failed and we were unable to recover it. 00:36:48.961 [2024-11-17 09:36:53.761418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.961 [2024-11-17 09:36:53.761454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.961 qpair failed and we were unable to recover it. 00:36:48.961 [2024-11-17 09:36:53.761576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.961 [2024-11-17 09:36:53.761617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.961 qpair failed and we were unable to recover it. 00:36:48.961 [2024-11-17 09:36:53.761756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.961 [2024-11-17 09:36:53.761790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.961 qpair failed and we were unable to recover it. 00:36:48.961 [2024-11-17 09:36:53.761898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.961 [2024-11-17 09:36:53.761933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.961 qpair failed and we were unable to recover it. 00:36:48.961 [2024-11-17 09:36:53.762064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.961 [2024-11-17 09:36:53.762102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.961 qpair failed and we were unable to recover it. 00:36:48.961 [2024-11-17 09:36:53.762248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.961 [2024-11-17 09:36:53.762286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.961 qpair failed and we were unable to recover it. 00:36:48.961 [2024-11-17 09:36:53.762422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.961 [2024-11-17 09:36:53.762458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.961 qpair failed and we were unable to recover it. 
00:36:48.961 [2024-11-17 09:36:53.762589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.961 [2024-11-17 09:36:53.762621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.961 qpair failed and we were unable to recover it. 00:36:48.961 [2024-11-17 09:36:53.762747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.961 [2024-11-17 09:36:53.762783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.961 qpair failed and we were unable to recover it. 00:36:48.961 [2024-11-17 09:36:53.762901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.961 [2024-11-17 09:36:53.762938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.961 qpair failed and we were unable to recover it. 00:36:48.961 [2024-11-17 09:36:53.763164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.961 [2024-11-17 09:36:53.763202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.961 qpair failed and we were unable to recover it. 00:36:48.961 [2024-11-17 09:36:53.763351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.961 [2024-11-17 09:36:53.763410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.961 qpair failed and we were unable to recover it. 00:36:48.961 [2024-11-17 09:36:53.763526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.961 [2024-11-17 09:36:53.763561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.961 qpair failed and we were unable to recover it. 00:36:48.961 [2024-11-17 09:36:53.763723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.961 [2024-11-17 09:36:53.763762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.961 qpair failed and we were unable to recover it. 00:36:48.961 [2024-11-17 09:36:53.763927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.961 [2024-11-17 09:36:53.763966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.961 qpair failed and we were unable to recover it. 00:36:48.961 [2024-11-17 09:36:53.764094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.961 [2024-11-17 09:36:53.764131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.961 qpair failed and we were unable to recover it. 00:36:48.961 [2024-11-17 09:36:53.764291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.961 [2024-11-17 09:36:53.764325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.961 qpair failed and we were unable to recover it. 
00:36:48.961 [2024-11-17 09:36:53.764441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.961 [2024-11-17 09:36:53.764475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.961 qpair failed and we were unable to recover it. 00:36:48.961 [2024-11-17 09:36:53.764640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.961 [2024-11-17 09:36:53.764674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.961 qpair failed and we were unable to recover it. 00:36:48.961 [2024-11-17 09:36:53.764809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.961 [2024-11-17 09:36:53.764847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.961 qpair failed and we were unable to recover it. 00:36:48.961 [2024-11-17 09:36:53.764987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.961 [2024-11-17 09:36:53.765024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.961 qpair failed and we were unable to recover it. 00:36:48.961 [2024-11-17 09:36:53.765148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.961 [2024-11-17 09:36:53.765198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.961 qpair failed and we were unable to recover it. 00:36:48.961 [2024-11-17 09:36:53.765343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.961 [2024-11-17 09:36:53.765393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.961 qpair failed and we were unable to recover it. 00:36:48.961 [2024-11-17 09:36:53.765504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.961 [2024-11-17 09:36:53.765537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.961 qpair failed and we were unable to recover it. 00:36:48.961 [2024-11-17 09:36:53.765696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.961 [2024-11-17 09:36:53.765734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.961 qpair failed and we were unable to recover it. 00:36:48.961 [2024-11-17 09:36:53.765852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.961 [2024-11-17 09:36:53.765889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.961 qpair failed and we were unable to recover it. 00:36:48.961 [2024-11-17 09:36:53.766003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.961 [2024-11-17 09:36:53.766040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.961 qpair failed and we were unable to recover it. 
00:36:48.961 [2024-11-17 09:36:53.766194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.961 [2024-11-17 09:36:53.766234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.961 qpair failed and we were unable to recover it. 00:36:48.961 [2024-11-17 09:36:53.766404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.961 [2024-11-17 09:36:53.766450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:48.961 qpair failed and we were unable to recover it. 00:36:48.961 [2024-11-17 09:36:53.766610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.961 [2024-11-17 09:36:53.766662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.961 qpair failed and we were unable to recover it. 00:36:48.961 [2024-11-17 09:36:53.766823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.961 [2024-11-17 09:36:53.766881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.961 qpair failed and we were unable to recover it. 00:36:48.961 [2024-11-17 09:36:53.767045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.961 [2024-11-17 09:36:53.767102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.961 qpair failed and we were unable to recover it. 00:36:48.961 [2024-11-17 09:36:53.767256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.961 [2024-11-17 09:36:53.767290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.961 qpair failed and we were unable to recover it. 00:36:48.961 [2024-11-17 09:36:53.767431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.961 [2024-11-17 09:36:53.767466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.961 qpair failed and we were unable to recover it. 00:36:48.961 [2024-11-17 09:36:53.767614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.961 [2024-11-17 09:36:53.767647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.961 qpair failed and we were unable to recover it. 00:36:48.961 [2024-11-17 09:36:53.767762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.961 [2024-11-17 09:36:53.767798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.961 qpair failed and we were unable to recover it. 00:36:48.961 [2024-11-17 09:36:53.767929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.961 [2024-11-17 09:36:53.767979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.961 qpair failed and we were unable to recover it. 
00:36:48.961 [2024-11-17 09:36:53.768137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.961 [2024-11-17 09:36:53.768191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.961 qpair failed and we were unable to recover it. 00:36:48.961 [2024-11-17 09:36:53.768319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.962 [2024-11-17 09:36:53.768351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.962 qpair failed and we were unable to recover it. 00:36:48.962 [2024-11-17 09:36:53.768519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.962 [2024-11-17 09:36:53.768551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.962 qpair failed and we were unable to recover it. 00:36:48.962 [2024-11-17 09:36:53.768690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.962 [2024-11-17 09:36:53.768730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.962 qpair failed and we were unable to recover it. 00:36:48.962 [2024-11-17 09:36:53.768879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.962 [2024-11-17 09:36:53.768923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.962 qpair failed and we were unable to recover it. 00:36:48.962 [2024-11-17 09:36:53.769098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.962 [2024-11-17 09:36:53.769154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.962 qpair failed and we were unable to recover it. 00:36:48.962 [2024-11-17 09:36:53.769279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.962 [2024-11-17 09:36:53.769317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.962 qpair failed and we were unable to recover it. 00:36:48.962 [2024-11-17 09:36:53.769459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.962 [2024-11-17 09:36:53.769493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.962 qpair failed and we were unable to recover it. 00:36:48.962 [2024-11-17 09:36:53.769598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.962 [2024-11-17 09:36:53.769630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.962 qpair failed and we were unable to recover it. 00:36:48.962 [2024-11-17 09:36:53.769725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.962 [2024-11-17 09:36:53.769758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.962 qpair failed and we were unable to recover it. 
00:36:48.962 [2024-11-17 09:36:53.769884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.962 [2024-11-17 09:36:53.769919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.962 qpair failed and we were unable to recover it. 00:36:48.962 [2024-11-17 09:36:53.770049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.962 [2024-11-17 09:36:53.770084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.962 qpair failed and we were unable to recover it. 00:36:48.962 [2024-11-17 09:36:53.770247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.962 [2024-11-17 09:36:53.770297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.962 qpair failed and we were unable to recover it. 00:36:48.962 [2024-11-17 09:36:53.770415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.962 [2024-11-17 09:36:53.770449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.962 qpair failed and we were unable to recover it. 00:36:48.962 [2024-11-17 09:36:53.770554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.962 [2024-11-17 09:36:53.770588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.962 qpair failed and we were unable to recover it. 00:36:48.962 [2024-11-17 09:36:53.770709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.962 [2024-11-17 09:36:53.770773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.962 qpair failed and we were unable to recover it. 00:36:48.962 [2024-11-17 09:36:53.770907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.962 [2024-11-17 09:36:53.770960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.962 qpair failed and we were unable to recover it. 00:36:48.962 [2024-11-17 09:36:53.771108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.962 [2024-11-17 09:36:53.771146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.962 qpair failed and we were unable to recover it. 00:36:48.962 [2024-11-17 09:36:53.771311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.962 [2024-11-17 09:36:53.771347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.962 qpair failed and we were unable to recover it. 00:36:48.962 [2024-11-17 09:36:53.771473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.962 [2024-11-17 09:36:53.771507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.962 qpair failed and we were unable to recover it. 
00:36:48.962 [2024-11-17 09:36:53.771615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.962 [2024-11-17 09:36:53.771649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.962 qpair failed and we were unable to recover it. 00:36:48.962 [2024-11-17 09:36:53.771775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.962 [2024-11-17 09:36:53.771810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.962 qpair failed and we were unable to recover it. 00:36:48.962 [2024-11-17 09:36:53.771941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.962 [2024-11-17 09:36:53.771977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.962 qpair failed and we were unable to recover it. 00:36:48.962 [2024-11-17 09:36:53.772087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.962 [2024-11-17 09:36:53.772125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.962 qpair failed and we were unable to recover it. 00:36:48.962 [2024-11-17 09:36:53.772315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.962 [2024-11-17 09:36:53.772377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.962 qpair failed and we were unable to recover it. 00:36:48.962 [2024-11-17 09:36:53.772541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.962 [2024-11-17 09:36:53.772578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.962 qpair failed and we were unable to recover it. 00:36:48.962 [2024-11-17 09:36:53.772693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.962 [2024-11-17 09:36:53.772729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.962 qpair failed and we were unable to recover it. 00:36:48.962 [2024-11-17 09:36:53.772866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.962 [2024-11-17 09:36:53.772900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.962 qpair failed and we were unable to recover it. 00:36:48.962 [2024-11-17 09:36:53.773032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.962 [2024-11-17 09:36:53.773069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.962 qpair failed and we were unable to recover it. 00:36:48.962 [2024-11-17 09:36:53.773176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.962 [2024-11-17 09:36:53.773215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.962 qpair failed and we were unable to recover it. 
00:36:48.962 [2024-11-17 09:36:53.773381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.962 [2024-11-17 09:36:53.773416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.962 qpair failed and we were unable to recover it. 00:36:48.962 [2024-11-17 09:36:53.773601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.962 [2024-11-17 09:36:53.773646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:48.962 qpair failed and we were unable to recover it. 00:36:48.962 [2024-11-17 09:36:53.773817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.962 [2024-11-17 09:36:53.773875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:48.962 qpair failed and we were unable to recover it. 00:36:48.962 [2024-11-17 09:36:53.774101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.962 [2024-11-17 09:36:53.774141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:48.962 qpair failed and we were unable to recover it. 00:36:48.962 [2024-11-17 09:36:53.774294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.962 [2024-11-17 09:36:53.774334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:48.962 qpair failed and we were unable to recover it. 00:36:48.962 [2024-11-17 09:36:53.774516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.962 [2024-11-17 09:36:53.774564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.962 qpair failed and we were unable to recover it. 00:36:48.962 [2024-11-17 09:36:53.774696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.962 [2024-11-17 09:36:53.774731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.962 qpair failed and we were unable to recover it. 00:36:48.962 [2024-11-17 09:36:53.774861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.962 [2024-11-17 09:36:53.774896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.962 qpair failed and we were unable to recover it. 00:36:48.962 [2024-11-17 09:36:53.775005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.962 [2024-11-17 09:36:53.775043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.962 qpair failed and we were unable to recover it. 00:36:48.962 [2024-11-17 09:36:53.775177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.962 [2024-11-17 09:36:53.775210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.962 qpair failed and we were unable to recover it. 
00:36:48.962 [2024-11-17 09:36:53.775361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.962 [2024-11-17 09:36:53.775417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.962 qpair failed and we were unable to recover it. 00:36:48.962 [2024-11-17 09:36:53.775544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.962 [2024-11-17 09:36:53.775580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.962 qpair failed and we were unable to recover it. 00:36:48.962 [2024-11-17 09:36:53.775724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.962 [2024-11-17 09:36:53.775761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.962 qpair failed and we were unable to recover it. 00:36:48.962 [2024-11-17 09:36:53.775872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.962 [2024-11-17 09:36:53.775924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.962 qpair failed and we were unable to recover it. 00:36:48.962 [2024-11-17 09:36:53.776040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.962 [2024-11-17 09:36:53.776096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.962 qpair failed and we were unable to recover it. 00:36:48.963 [2024-11-17 09:36:53.776261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.963 [2024-11-17 09:36:53.776298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.963 qpair failed and we were unable to recover it. 00:36:48.963 [2024-11-17 09:36:53.776427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.963 [2024-11-17 09:36:53.776479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.963 qpair failed and we were unable to recover it. 00:36:48.963 [2024-11-17 09:36:53.776661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.963 [2024-11-17 09:36:53.776698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.963 qpair failed and we were unable to recover it. 00:36:48.963 [2024-11-17 09:36:53.776820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.963 [2024-11-17 09:36:53.776856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.963 qpair failed and we were unable to recover it. 00:36:48.963 [2024-11-17 09:36:53.777000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.963 [2024-11-17 09:36:53.777037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.963 qpair failed and we were unable to recover it. 
00:36:48.963 [2024-11-17 09:36:53.777163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.963 [2024-11-17 09:36:53.777197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.963 qpair failed and we were unable to recover it. 00:36:48.963 [2024-11-17 09:36:53.777334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.963 [2024-11-17 09:36:53.777396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.963 qpair failed and we were unable to recover it. 00:36:48.963 [2024-11-17 09:36:53.777516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.963 [2024-11-17 09:36:53.777553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.963 qpair failed and we were unable to recover it. 00:36:48.963 [2024-11-17 09:36:53.777704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.963 [2024-11-17 09:36:53.777740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.963 qpair failed and we were unable to recover it. 00:36:48.963 [2024-11-17 09:36:53.777857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.963 [2024-11-17 09:36:53.777901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.963 qpair failed and we were unable to recover it. 00:36:48.963 [2024-11-17 09:36:53.778039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.963 [2024-11-17 09:36:53.778076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.963 qpair failed and we were unable to recover it. 00:36:48.963 [2024-11-17 09:36:53.778253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.963 [2024-11-17 09:36:53.778289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.963 qpair failed and we were unable to recover it. 00:36:48.963 [2024-11-17 09:36:53.778475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.963 [2024-11-17 09:36:53.778511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.963 qpair failed and we were unable to recover it. 00:36:48.963 [2024-11-17 09:36:53.778672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.963 [2024-11-17 09:36:53.778720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.963 qpair failed and we were unable to recover it. 00:36:48.963 [2024-11-17 09:36:53.778879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.963 [2024-11-17 09:36:53.778916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.963 qpair failed and we were unable to recover it. 
00:36:48.963 [2024-11-17 09:36:53.779051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.963 [2024-11-17 09:36:53.779086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.963 qpair failed and we were unable to recover it. 00:36:48.963 [2024-11-17 09:36:53.779185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.963 [2024-11-17 09:36:53.779220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.963 qpair failed and we were unable to recover it. 00:36:48.963 [2024-11-17 09:36:53.779375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.963 [2024-11-17 09:36:53.779409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.963 qpair failed and we were unable to recover it. 00:36:48.963 [2024-11-17 09:36:53.779522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.963 [2024-11-17 09:36:53.779556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.963 qpair failed and we were unable to recover it. 00:36:48.963 [2024-11-17 09:36:53.779659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.963 [2024-11-17 09:36:53.779693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.963 qpair failed and we were unable to recover it. 00:36:48.963 [2024-11-17 09:36:53.779821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.963 [2024-11-17 09:36:53.779856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.963 qpair failed and we were unable to recover it. 00:36:48.963 [2024-11-17 09:36:53.779999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.963 [2024-11-17 09:36:53.780036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.963 qpair failed and we were unable to recover it. 00:36:48.963 [2024-11-17 09:36:53.780195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.963 [2024-11-17 09:36:53.780246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.963 qpair failed and we were unable to recover it. 00:36:48.963 [2024-11-17 09:36:53.780382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.963 [2024-11-17 09:36:53.780417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.963 qpair failed and we were unable to recover it. 00:36:48.963 [2024-11-17 09:36:53.780555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.963 [2024-11-17 09:36:53.780590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.963 qpair failed and we were unable to recover it. 
00:36:48.963 [2024-11-17 09:36:53.780701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.963 [2024-11-17 09:36:53.780736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.963 qpair failed and we were unable to recover it. 00:36:48.963 [2024-11-17 09:36:53.780897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.963 [2024-11-17 09:36:53.780932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.963 qpair failed and we were unable to recover it. 00:36:48.963 [2024-11-17 09:36:53.781039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.963 [2024-11-17 09:36:53.781074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.963 qpair failed and we were unable to recover it. 00:36:48.963 [2024-11-17 09:36:53.781213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.963 [2024-11-17 09:36:53.781248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.963 qpair failed and we were unable to recover it. 00:36:48.963 [2024-11-17 09:36:53.781382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.963 [2024-11-17 09:36:53.781417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.963 qpair failed and we were unable to recover it. 00:36:48.963 [2024-11-17 09:36:53.781524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.963 [2024-11-17 09:36:53.781557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.963 qpair failed and we were unable to recover it. 00:36:48.963 [2024-11-17 09:36:53.781686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.963 [2024-11-17 09:36:53.781719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.963 qpair failed and we were unable to recover it. 00:36:48.963 [2024-11-17 09:36:53.781840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.963 [2024-11-17 09:36:53.781875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.963 qpair failed and we were unable to recover it. 00:36:48.963 [2024-11-17 09:36:53.781988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.963 [2024-11-17 09:36:53.782026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.963 qpair failed and we were unable to recover it. 00:36:48.963 [2024-11-17 09:36:53.782156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.963 [2024-11-17 09:36:53.782205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.963 qpair failed and we were unable to recover it. 
00:36:48.963 [2024-11-17 09:36:53.782350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.963 [2024-11-17 09:36:53.782394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.963 qpair failed and we were unable to recover it. 00:36:48.963 [2024-11-17 09:36:53.782504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.963 [2024-11-17 09:36:53.782538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.963 qpair failed and we were unable to recover it. 00:36:48.963 [2024-11-17 09:36:53.782693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.963 [2024-11-17 09:36:53.782730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.963 qpair failed and we were unable to recover it. 00:36:48.963 [2024-11-17 09:36:53.782883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.963 [2024-11-17 09:36:53.782934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.963 qpair failed and we were unable to recover it. 00:36:48.963 [2024-11-17 09:36:53.783048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.963 [2024-11-17 09:36:53.783090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.963 qpair failed and we were unable to recover it. 00:36:48.964 [2024-11-17 09:36:53.783198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.964 [2024-11-17 09:36:53.783232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.964 qpair failed and we were unable to recover it. 00:36:48.964 [2024-11-17 09:36:53.783405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.964 [2024-11-17 09:36:53.783439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.964 qpair failed and we were unable to recover it. 00:36:48.964 [2024-11-17 09:36:53.783574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.964 [2024-11-17 09:36:53.783607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.964 qpair failed and we were unable to recover it. 00:36:48.964 [2024-11-17 09:36:53.783756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.964 [2024-11-17 09:36:53.783790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.964 qpair failed and we were unable to recover it. 00:36:48.964 [2024-11-17 09:36:53.783927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.964 [2024-11-17 09:36:53.783960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.964 qpair failed and we were unable to recover it. 
00:36:48.964 [2024-11-17 09:36:53.784063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.964 [2024-11-17 09:36:53.784096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.964 qpair failed and we were unable to recover it. 00:36:48.964 [2024-11-17 09:36:53.784234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.964 [2024-11-17 09:36:53.784267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.964 qpair failed and we were unable to recover it. 00:36:48.964 [2024-11-17 09:36:53.784422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.964 [2024-11-17 09:36:53.784456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.964 qpair failed and we were unable to recover it. 00:36:48.964 [2024-11-17 09:36:53.784563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.964 [2024-11-17 09:36:53.784597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.964 qpair failed and we were unable to recover it. 00:36:48.964 [2024-11-17 09:36:53.784738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.964 [2024-11-17 09:36:53.784771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.964 qpair failed and we were unable to recover it. 00:36:48.964 [2024-11-17 09:36:53.784868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.964 [2024-11-17 09:36:53.784901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.964 qpair failed and we were unable to recover it. 00:36:48.964 [2024-11-17 09:36:53.785007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.964 [2024-11-17 09:36:53.785040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.964 qpair failed and we were unable to recover it. 00:36:48.964 [2024-11-17 09:36:53.785208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.964 [2024-11-17 09:36:53.785257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.964 qpair failed and we were unable to recover it. 00:36:48.964 [2024-11-17 09:36:53.785409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.964 [2024-11-17 09:36:53.785458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.964 qpair failed and we were unable to recover it. 00:36:48.964 [2024-11-17 09:36:53.785624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.964 [2024-11-17 09:36:53.785659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.964 qpair failed and we were unable to recover it. 
00:36:48.964 [2024-11-17 09:36:53.785820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.964 [2024-11-17 09:36:53.785854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.964 qpair failed and we were unable to recover it. 00:36:48.964 [2024-11-17 09:36:53.785964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.964 [2024-11-17 09:36:53.785998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.964 qpair failed and we were unable to recover it. 00:36:48.964 [2024-11-17 09:36:53.786105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.964 [2024-11-17 09:36:53.786139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.964 qpair failed and we were unable to recover it. 00:36:48.964 [2024-11-17 09:36:53.786241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.964 [2024-11-17 09:36:53.786276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.964 qpair failed and we were unable to recover it. 00:36:48.964 [2024-11-17 09:36:53.786401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.964 [2024-11-17 09:36:53.786435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.964 qpair failed and we were unable to recover it. 00:36:48.964 [2024-11-17 09:36:53.786541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.964 [2024-11-17 09:36:53.786575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.964 qpair failed and we were unable to recover it. 00:36:48.964 [2024-11-17 09:36:53.786695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.964 [2024-11-17 09:36:53.786728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.964 qpair failed and we were unable to recover it. 00:36:48.964 [2024-11-17 09:36:53.786836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.964 [2024-11-17 09:36:53.786870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.964 qpair failed and we were unable to recover it. 00:36:48.964 [2024-11-17 09:36:53.787005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.964 [2024-11-17 09:36:53.787038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.964 qpair failed and we were unable to recover it. 00:36:48.964 [2024-11-17 09:36:53.787169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.964 [2024-11-17 09:36:53.787205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.964 qpair failed and we were unable to recover it. 
00:36:48.964 [2024-11-17 09:36:53.787313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.964 [2024-11-17 09:36:53.787347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.964 qpair failed and we were unable to recover it. 00:36:48.964 [2024-11-17 09:36:53.787509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.964 [2024-11-17 09:36:53.787557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.964 qpair failed and we were unable to recover it. 00:36:48.964 [2024-11-17 09:36:53.787704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.964 [2024-11-17 09:36:53.787742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.964 qpair failed and we were unable to recover it. 00:36:48.964 [2024-11-17 09:36:53.787866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.964 [2024-11-17 09:36:53.787901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.964 qpair failed and we were unable to recover it. 00:36:48.964 [2024-11-17 09:36:53.788033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.964 [2024-11-17 09:36:53.788067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.964 qpair failed and we were unable to recover it. 00:36:48.964 [2024-11-17 09:36:53.788202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.964 [2024-11-17 09:36:53.788237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.964 qpair failed and we were unable to recover it. 00:36:48.964 [2024-11-17 09:36:53.788344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.964 [2024-11-17 09:36:53.788385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.964 qpair failed and we were unable to recover it. 00:36:48.964 [2024-11-17 09:36:53.788501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.964 [2024-11-17 09:36:53.788537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.964 qpair failed and we were unable to recover it. 00:36:48.964 [2024-11-17 09:36:53.788675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.964 [2024-11-17 09:36:53.788710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.964 qpair failed and we were unable to recover it. 00:36:48.964 [2024-11-17 09:36:53.788842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.964 [2024-11-17 09:36:53.788876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.964 qpair failed and we were unable to recover it. 
00:36:48.964 [2024-11-17 09:36:53.788979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.964 [2024-11-17 09:36:53.789015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.964 qpair failed and we were unable to recover it. 00:36:48.964 [2024-11-17 09:36:53.789155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.964 [2024-11-17 09:36:53.789189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.964 qpair failed and we were unable to recover it. 00:36:48.964 [2024-11-17 09:36:53.789295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.964 [2024-11-17 09:36:53.789329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.964 qpair failed and we were unable to recover it. 00:36:48.964 [2024-11-17 09:36:53.789478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.964 [2024-11-17 09:36:53.789512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.964 qpair failed and we were unable to recover it. 00:36:48.964 [2024-11-17 09:36:53.789618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.964 [2024-11-17 09:36:53.789656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.964 qpair failed and we were unable to recover it. 00:36:48.964 [2024-11-17 09:36:53.789788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.964 [2024-11-17 09:36:53.789822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.964 qpair failed and we were unable to recover it. 00:36:48.964 [2024-11-17 09:36:53.789958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.964 [2024-11-17 09:36:53.789991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.964 qpair failed and we were unable to recover it. 00:36:48.964 [2024-11-17 09:36:53.790103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.964 [2024-11-17 09:36:53.790136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.964 qpair failed and we were unable to recover it. 00:36:48.964 [2024-11-17 09:36:53.790240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.964 [2024-11-17 09:36:53.790273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.964 qpair failed and we were unable to recover it. 00:36:48.965 [2024-11-17 09:36:53.790384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.965 [2024-11-17 09:36:53.790419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.965 qpair failed and we were unable to recover it. 
00:36:48.965 [2024-11-17 09:36:53.790525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.965 [2024-11-17 09:36:53.790559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.965 qpair failed and we were unable to recover it. 00:36:48.965 [2024-11-17 09:36:53.790700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.965 [2024-11-17 09:36:53.790734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.965 qpair failed and we were unable to recover it. 00:36:48.965 [2024-11-17 09:36:53.790839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.965 [2024-11-17 09:36:53.790873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.965 qpair failed and we were unable to recover it. 00:36:48.965 [2024-11-17 09:36:53.791013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.965 [2024-11-17 09:36:53.791046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.965 qpair failed and we were unable to recover it. 00:36:48.965 [2024-11-17 09:36:53.791182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.965 [2024-11-17 09:36:53.791217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.965 qpair failed and we were unable to recover it. 00:36:48.965 [2024-11-17 09:36:53.791356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.965 [2024-11-17 09:36:53.791396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.965 qpair failed and we were unable to recover it. 00:36:48.965 [2024-11-17 09:36:53.791521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.965 [2024-11-17 09:36:53.791555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.965 qpair failed and we were unable to recover it. 00:36:48.965 [2024-11-17 09:36:53.791692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.965 [2024-11-17 09:36:53.791726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.965 qpair failed and we were unable to recover it. 00:36:48.965 [2024-11-17 09:36:53.791863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.965 [2024-11-17 09:36:53.791896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.965 qpair failed and we were unable to recover it. 00:36:48.965 [2024-11-17 09:36:53.792028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.965 [2024-11-17 09:36:53.792062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.965 qpair failed and we were unable to recover it. 
00:36:48.965 [2024-11-17 09:36:53.792171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.965 [2024-11-17 09:36:53.792204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.965 qpair failed and we were unable to recover it. 00:36:48.965 [2024-11-17 09:36:53.792311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.965 [2024-11-17 09:36:53.792346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.965 qpair failed and we were unable to recover it. 00:36:48.965 [2024-11-17 09:36:53.792474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.965 [2024-11-17 09:36:53.792522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.965 qpair failed and we were unable to recover it. 00:36:48.965 [2024-11-17 09:36:53.792647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.965 [2024-11-17 09:36:53.792684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.965 qpair failed and we were unable to recover it. 00:36:48.965 [2024-11-17 09:36:53.792826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.965 [2024-11-17 09:36:53.792860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.965 qpair failed and we were unable to recover it. 00:36:48.965 [2024-11-17 09:36:53.792970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.965 [2024-11-17 09:36:53.793003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.965 qpair failed and we were unable to recover it. 00:36:48.965 [2024-11-17 09:36:53.793109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.965 [2024-11-17 09:36:53.793142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.965 qpair failed and we were unable to recover it. 00:36:48.965 [2024-11-17 09:36:53.793299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.965 [2024-11-17 09:36:53.793332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.965 qpair failed and we were unable to recover it. 00:36:48.965 [2024-11-17 09:36:53.793482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.965 [2024-11-17 09:36:53.793518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.965 qpair failed and we were unable to recover it. 00:36:48.965 [2024-11-17 09:36:53.793654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.965 [2024-11-17 09:36:53.793688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.965 qpair failed and we were unable to recover it. 
00:36:48.965 [2024-11-17 09:36:53.793788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.965 [2024-11-17 09:36:53.793822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.965 qpair failed and we were unable to recover it. 00:36:48.965 [2024-11-17 09:36:53.793988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.965 [2024-11-17 09:36:53.794029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.965 qpair failed and we were unable to recover it. 00:36:48.965 [2024-11-17 09:36:53.794131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.965 [2024-11-17 09:36:53.794166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.965 qpair failed and we were unable to recover it. 00:36:48.965 [2024-11-17 09:36:53.794304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.965 [2024-11-17 09:36:53.794338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.965 qpair failed and we were unable to recover it. 00:36:48.965 [2024-11-17 09:36:53.794485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.965 [2024-11-17 09:36:53.794520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.965 qpair failed and we were unable to recover it. 00:36:48.965 [2024-11-17 09:36:53.794634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.965 [2024-11-17 09:36:53.794667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.965 qpair failed and we were unable to recover it. 00:36:48.965 [2024-11-17 09:36:53.794799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.965 [2024-11-17 09:36:53.794831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.965 qpair failed and we were unable to recover it. 00:36:48.965 [2024-11-17 09:36:53.794963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.965 [2024-11-17 09:36:53.794995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.965 qpair failed and we were unable to recover it. 00:36:48.965 [2024-11-17 09:36:53.795126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.965 [2024-11-17 09:36:53.795160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.965 qpair failed and we were unable to recover it. 00:36:48.965 [2024-11-17 09:36:53.795297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.965 [2024-11-17 09:36:53.795330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.965 qpair failed and we were unable to recover it. 
00:36:48.965 [2024-11-17 09:36:53.795447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.965 [2024-11-17 09:36:53.795484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.965 qpair failed and we were unable to recover it. 00:36:48.965 [2024-11-17 09:36:53.795603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.965 [2024-11-17 09:36:53.795637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.965 qpair failed and we were unable to recover it. 00:36:48.965 [2024-11-17 09:36:53.795740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.965 [2024-11-17 09:36:53.795774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.965 qpair failed and we were unable to recover it. 00:36:48.965 [2024-11-17 09:36:53.795907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.965 [2024-11-17 09:36:53.795942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.965 qpair failed and we were unable to recover it. 00:36:48.965 [2024-11-17 09:36:53.796074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.965 [2024-11-17 09:36:53.796109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.965 qpair failed and we were unable to recover it. 00:36:48.965 [2024-11-17 09:36:53.796254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.965 [2024-11-17 09:36:53.796289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.965 qpair failed and we were unable to recover it. 00:36:48.965 [2024-11-17 09:36:53.796418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.965 [2024-11-17 09:36:53.796452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.965 qpair failed and we were unable to recover it. 00:36:48.965 [2024-11-17 09:36:53.796564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.965 [2024-11-17 09:36:53.796597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.965 qpair failed and we were unable to recover it. 00:36:48.965 [2024-11-17 09:36:53.796727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.965 [2024-11-17 09:36:53.796759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.965 qpair failed and we were unable to recover it. 00:36:48.965 [2024-11-17 09:36:53.796865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.965 [2024-11-17 09:36:53.796897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.965 qpair failed and we were unable to recover it. 
00:36:48.965 [2024-11-17 09:36:53.797039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.965 [2024-11-17 09:36:53.797071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.965 qpair failed and we were unable to recover it. 00:36:48.965 [2024-11-17 09:36:53.797202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.965 [2024-11-17 09:36:53.797234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.965 qpair failed and we were unable to recover it. 00:36:48.965 [2024-11-17 09:36:53.797380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.965 [2024-11-17 09:36:53.797417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.965 qpair failed and we were unable to recover it. 00:36:48.966 [2024-11-17 09:36:53.797557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.966 [2024-11-17 09:36:53.797591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.966 qpair failed and we were unable to recover it. 00:36:48.966 [2024-11-17 09:36:53.797702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.966 [2024-11-17 09:36:53.797736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.966 qpair failed and we were unable to recover it. 00:36:48.966 [2024-11-17 09:36:53.797852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.966 [2024-11-17 09:36:53.797886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.966 qpair failed and we were unable to recover it. 00:36:48.966 [2024-11-17 09:36:53.798021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.966 [2024-11-17 09:36:53.798054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.966 qpair failed and we were unable to recover it. 00:36:48.966 [2024-11-17 09:36:53.798165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.966 [2024-11-17 09:36:53.798199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.966 qpair failed and we were unable to recover it. 00:36:48.966 [2024-11-17 09:36:53.798326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.966 [2024-11-17 09:36:53.798360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.966 qpair failed and we were unable to recover it. 00:36:48.966 [2024-11-17 09:36:53.798506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.966 [2024-11-17 09:36:53.798539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.966 qpair failed and we were unable to recover it. 
00:36:48.966 [2024-11-17 09:36:53.798672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.966 [2024-11-17 09:36:53.798706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.966 qpair failed and we were unable to recover it. 00:36:48.966 [2024-11-17 09:36:53.798839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.966 [2024-11-17 09:36:53.798873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.966 qpair failed and we were unable to recover it. 00:36:48.966 [2024-11-17 09:36:53.799004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.966 [2024-11-17 09:36:53.799039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.966 qpair failed and we were unable to recover it. 00:36:48.966 [2024-11-17 09:36:53.799179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.966 [2024-11-17 09:36:53.799211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.966 qpair failed and we were unable to recover it. 00:36:48.966 [2024-11-17 09:36:53.799320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.966 [2024-11-17 09:36:53.799356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.966 qpair failed and we were unable to recover it. 00:36:48.966 [2024-11-17 09:36:53.799501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.966 [2024-11-17 09:36:53.799535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.966 qpair failed and we were unable to recover it. 00:36:48.966 [2024-11-17 09:36:53.799670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.966 [2024-11-17 09:36:53.799705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.966 qpair failed and we were unable to recover it. 00:36:48.966 [2024-11-17 09:36:53.799805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.966 [2024-11-17 09:36:53.799839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.966 qpair failed and we were unable to recover it. 00:36:48.966 [2024-11-17 09:36:53.799949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.966 [2024-11-17 09:36:53.799983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.966 qpair failed and we were unable to recover it. 00:36:48.966 [2024-11-17 09:36:53.800119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.966 [2024-11-17 09:36:53.800153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.966 qpair failed and we were unable to recover it. 
00:36:48.966 [2024-11-17 09:36:53.800292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.966 [2024-11-17 09:36:53.800326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.966 qpair failed and we were unable to recover it. 00:36:48.966 [2024-11-17 09:36:53.800474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.966 [2024-11-17 09:36:53.800512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.966 qpair failed and we were unable to recover it. 00:36:48.966 [2024-11-17 09:36:53.800651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.966 [2024-11-17 09:36:53.800683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.966 qpair failed and we were unable to recover it. 00:36:48.966 [2024-11-17 09:36:53.800787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.966 [2024-11-17 09:36:53.800821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.966 qpair failed and we were unable to recover it. 00:36:48.966 [2024-11-17 09:36:53.800927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.966 [2024-11-17 09:36:53.800959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.966 qpair failed and we were unable to recover it. 00:36:48.966 [2024-11-17 09:36:53.801091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.966 [2024-11-17 09:36:53.801124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.966 qpair failed and we were unable to recover it. 00:36:48.966 [2024-11-17 09:36:53.801261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.966 [2024-11-17 09:36:53.801296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.966 qpair failed and we were unable to recover it. 00:36:48.966 [2024-11-17 09:36:53.801408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.966 [2024-11-17 09:36:53.801442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.966 qpair failed and we were unable to recover it. 00:36:48.966 [2024-11-17 09:36:53.801571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.966 [2024-11-17 09:36:53.801620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.966 qpair failed and we were unable to recover it. 00:36:48.966 [2024-11-17 09:36:53.801763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.966 [2024-11-17 09:36:53.801798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.966 qpair failed and we were unable to recover it. 
00:36:48.966 [2024-11-17 09:36:53.801928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.966 [2024-11-17 09:36:53.801962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.966 qpair failed and we were unable to recover it. 00:36:48.966 [2024-11-17 09:36:53.802097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.966 [2024-11-17 09:36:53.802133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.966 qpair failed and we were unable to recover it. 00:36:48.966 [2024-11-17 09:36:53.802271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.966 [2024-11-17 09:36:53.802307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.966 qpair failed and we were unable to recover it. 00:36:48.966 [2024-11-17 09:36:53.802447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.966 [2024-11-17 09:36:53.802482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.966 qpair failed and we were unable to recover it. 00:36:48.966 [2024-11-17 09:36:53.802654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.966 [2024-11-17 09:36:53.802688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.966 qpair failed and we were unable to recover it. 00:36:48.966 [2024-11-17 09:36:53.802805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.966 [2024-11-17 09:36:53.802839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.966 qpair failed and we were unable to recover it. 00:36:48.966 [2024-11-17 09:36:53.802943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.966 [2024-11-17 09:36:53.802978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.966 qpair failed and we were unable to recover it. 00:36:48.966 [2024-11-17 09:36:53.803114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.966 [2024-11-17 09:36:53.803148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.966 qpair failed and we were unable to recover it. 00:36:48.966 [2024-11-17 09:36:53.803274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.966 [2024-11-17 09:36:53.803322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.966 qpair failed and we were unable to recover it. 00:36:48.966 [2024-11-17 09:36:53.803442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.966 [2024-11-17 09:36:53.803480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.966 qpair failed and we were unable to recover it. 
00:36:48.966 [2024-11-17 09:36:53.803642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.966 [2024-11-17 09:36:53.803676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.966 qpair failed and we were unable to recover it. 00:36:48.966 [2024-11-17 09:36:53.803844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.966 [2024-11-17 09:36:53.803884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.966 qpair failed and we were unable to recover it. 00:36:48.966 [2024-11-17 09:36:53.803986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.966 [2024-11-17 09:36:53.804020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.966 qpair failed and we were unable to recover it. 00:36:48.966 [2024-11-17 09:36:53.804139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.966 [2024-11-17 09:36:53.804175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.966 qpair failed and we were unable to recover it. 00:36:48.966 [2024-11-17 09:36:53.804317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.966 [2024-11-17 09:36:53.804351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.966 qpair failed and we were unable to recover it. 00:36:48.966 [2024-11-17 09:36:53.804468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.966 [2024-11-17 09:36:53.804500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.966 qpair failed and we were unable to recover it. 00:36:48.966 [2024-11-17 09:36:53.804674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.966 [2024-11-17 09:36:53.804707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.967 qpair failed and we were unable to recover it. 00:36:48.967 [2024-11-17 09:36:53.804864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.967 [2024-11-17 09:36:53.804897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.967 qpair failed and we were unable to recover it. 00:36:48.967 [2024-11-17 09:36:53.805030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.967 [2024-11-17 09:36:53.805064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.967 qpair failed and we were unable to recover it. 00:36:48.967 [2024-11-17 09:36:53.805164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.967 [2024-11-17 09:36:53.805196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.967 qpair failed and we were unable to recover it. 
00:36:48.967 [2024-11-17 09:36:53.805322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.967 [2024-11-17 09:36:53.805385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.967 qpair failed and we were unable to recover it. 00:36:48.967 [2024-11-17 09:36:53.805535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.967 [2024-11-17 09:36:53.805572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.967 qpair failed and we were unable to recover it. 00:36:48.967 [2024-11-17 09:36:53.805705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.967 [2024-11-17 09:36:53.805740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.967 qpair failed and we were unable to recover it. 00:36:48.967 [2024-11-17 09:36:53.805899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.967 [2024-11-17 09:36:53.805934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.967 qpair failed and we were unable to recover it. 00:36:48.967 [2024-11-17 09:36:53.806041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.967 [2024-11-17 09:36:53.806076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.967 qpair failed and we were unable to recover it. 00:36:48.967 [2024-11-17 09:36:53.806221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.967 [2024-11-17 09:36:53.806257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.967 qpair failed and we were unable to recover it. 00:36:48.967 [2024-11-17 09:36:53.806407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.967 [2024-11-17 09:36:53.806443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.967 qpair failed and we were unable to recover it. 00:36:48.967 [2024-11-17 09:36:53.806589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.967 [2024-11-17 09:36:53.806623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.967 qpair failed and we were unable to recover it. 00:36:48.967 [2024-11-17 09:36:53.806724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.967 [2024-11-17 09:36:53.806756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.967 qpair failed and we were unable to recover it. 00:36:48.967 [2024-11-17 09:36:53.806883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.967 [2024-11-17 09:36:53.806916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.967 qpair failed and we were unable to recover it. 
00:36:48.967 [2024-11-17 09:36:53.807023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.967 [2024-11-17 09:36:53.807055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.967 qpair failed and we were unable to recover it. 00:36:48.967 [2024-11-17 09:36:53.807222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.967 [2024-11-17 09:36:53.807264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.967 qpair failed and we were unable to recover it. 00:36:48.967 [2024-11-17 09:36:53.807375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.967 [2024-11-17 09:36:53.807411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.967 qpair failed and we were unable to recover it. 00:36:48.967 [2024-11-17 09:36:53.807581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.967 [2024-11-17 09:36:53.807614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.967 qpair failed and we were unable to recover it. 00:36:48.967 [2024-11-17 09:36:53.807775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.967 [2024-11-17 09:36:53.807809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.967 qpair failed and we were unable to recover it. 00:36:48.967 [2024-11-17 09:36:53.807944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.967 [2024-11-17 09:36:53.807979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.967 qpair failed and we were unable to recover it. 00:36:48.967 [2024-11-17 09:36:53.808137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.967 [2024-11-17 09:36:53.808172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.967 qpair failed and we were unable to recover it. 00:36:48.967 [2024-11-17 09:36:53.808280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.967 [2024-11-17 09:36:53.808315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.967 qpair failed and we were unable to recover it. 00:36:48.967 [2024-11-17 09:36:53.808446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.967 [2024-11-17 09:36:53.808483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.967 qpair failed and we were unable to recover it. 00:36:48.967 [2024-11-17 09:36:53.808605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.967 [2024-11-17 09:36:53.808639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.967 qpair failed and we were unable to recover it. 
00:36:48.967 [2024-11-17 09:36:53.808770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.967 [2024-11-17 09:36:53.808804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.967 qpair failed and we were unable to recover it. 00:36:48.967 [2024-11-17 09:36:53.808930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.967 [2024-11-17 09:36:53.808964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.967 qpair failed and we were unable to recover it. 00:36:48.967 [2024-11-17 09:36:53.809115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.967 [2024-11-17 09:36:53.809151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.967 qpair failed and we were unable to recover it. 00:36:48.967 [2024-11-17 09:36:53.809285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.967 [2024-11-17 09:36:53.809332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.967 qpair failed and we were unable to recover it. 00:36:48.967 [2024-11-17 09:36:53.809501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.967 [2024-11-17 09:36:53.809549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.967 qpair failed and we were unable to recover it. 00:36:48.967 [2024-11-17 09:36:53.809726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.967 [2024-11-17 09:36:53.809762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.967 qpair failed and we were unable to recover it. 00:36:48.967 [2024-11-17 09:36:53.809872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.967 [2024-11-17 09:36:53.809905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.967 qpair failed and we were unable to recover it. 00:36:48.967 [2024-11-17 09:36:53.810050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.967 [2024-11-17 09:36:53.810082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.967 qpair failed and we were unable to recover it. 00:36:48.967 [2024-11-17 09:36:53.810215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.967 [2024-11-17 09:36:53.810248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.967 qpair failed and we were unable to recover it. 00:36:48.967 [2024-11-17 09:36:53.810407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.967 [2024-11-17 09:36:53.810455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.967 qpair failed and we were unable to recover it. 
00:36:48.967 [2024-11-17 09:36:53.810623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.967 [2024-11-17 09:36:53.810659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.967 qpair failed and we were unable to recover it. 00:36:48.967 [2024-11-17 09:36:53.810796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.967 [2024-11-17 09:36:53.810832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.967 qpair failed and we were unable to recover it. 00:36:48.967 [2024-11-17 09:36:53.810951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.967 [2024-11-17 09:36:53.810985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.967 qpair failed and we were unable to recover it. 00:36:48.967 [2024-11-17 09:36:53.811127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.967 [2024-11-17 09:36:53.811161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.967 qpair failed and we were unable to recover it. 00:36:48.967 [2024-11-17 09:36:53.811269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.967 [2024-11-17 09:36:53.811303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.967 qpair failed and we were unable to recover it. 00:36:48.967 [2024-11-17 09:36:53.811439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.967 [2024-11-17 09:36:53.811475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.967 qpair failed and we were unable to recover it. 00:36:48.967 [2024-11-17 09:36:53.811635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.967 [2024-11-17 09:36:53.811669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.968 qpair failed and we were unable to recover it. 00:36:48.968 [2024-11-17 09:36:53.811804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.968 [2024-11-17 09:36:53.811838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.968 qpair failed and we were unable to recover it. 00:36:48.968 [2024-11-17 09:36:53.811939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.968 [2024-11-17 09:36:53.811973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.968 qpair failed and we were unable to recover it. 00:36:48.968 [2024-11-17 09:36:53.812072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.968 [2024-11-17 09:36:53.812106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.968 qpair failed and we were unable to recover it. 
00:36:48.968 [2024-11-17 09:36:53.812270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.968 [2024-11-17 09:36:53.812305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.968 qpair failed and we were unable to recover it. 00:36:48.968 [2024-11-17 09:36:53.812444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.968 [2024-11-17 09:36:53.812493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.968 qpair failed and we were unable to recover it. 00:36:48.968 [2024-11-17 09:36:53.812618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.968 [2024-11-17 09:36:53.812655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.968 qpair failed and we were unable to recover it. 00:36:48.968 [2024-11-17 09:36:53.812800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.968 [2024-11-17 09:36:53.812833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.968 qpair failed and we were unable to recover it. 00:36:48.968 [2024-11-17 09:36:53.812965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.968 [2024-11-17 09:36:53.813000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.968 qpair failed and we were unable to recover it. 00:36:48.968 [2024-11-17 09:36:53.813139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.968 [2024-11-17 09:36:53.813172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.968 qpair failed and we were unable to recover it. 00:36:48.968 [2024-11-17 09:36:53.813334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.968 [2024-11-17 09:36:53.813373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.968 qpair failed and we were unable to recover it. 00:36:48.968 [2024-11-17 09:36:53.813512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.968 [2024-11-17 09:36:53.813545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.968 qpair failed and we were unable to recover it. 00:36:48.968 [2024-11-17 09:36:53.813701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.968 [2024-11-17 09:36:53.813750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.968 qpair failed and we were unable to recover it. 00:36:48.968 [2024-11-17 09:36:53.813908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.968 [2024-11-17 09:36:53.813945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.968 qpair failed and we were unable to recover it. 
00:36:48.968 [2024-11-17 09:36:53.814058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.968 [2024-11-17 09:36:53.814095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.968 qpair failed and we were unable to recover it. 00:36:48.968 [2024-11-17 09:36:53.814228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.968 [2024-11-17 09:36:53.814266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.968 qpair failed and we were unable to recover it. 00:36:48.968 [2024-11-17 09:36:53.814375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.968 [2024-11-17 09:36:53.814413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.968 qpair failed and we were unable to recover it. 00:36:48.968 [2024-11-17 09:36:53.814549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.968 [2024-11-17 09:36:53.814583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.968 qpair failed and we were unable to recover it. 00:36:48.968 [2024-11-17 09:36:53.814725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.968 [2024-11-17 09:36:53.814759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.968 qpair failed and we were unable to recover it. 00:36:48.968 [2024-11-17 09:36:53.814867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.968 [2024-11-17 09:36:53.814901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.968 qpair failed and we were unable to recover it. 00:36:48.968 [2024-11-17 09:36:53.815010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.968 [2024-11-17 09:36:53.815044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.968 qpair failed and we were unable to recover it. 00:36:48.968 [2024-11-17 09:36:53.815196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.968 [2024-11-17 09:36:53.815231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.968 qpair failed and we were unable to recover it. 00:36:48.968 [2024-11-17 09:36:53.815376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.968 [2024-11-17 09:36:53.815412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.968 qpair failed and we were unable to recover it. 00:36:48.968 [2024-11-17 09:36:53.815526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.968 [2024-11-17 09:36:53.815564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.968 qpair failed and we were unable to recover it. 
00:36:48.968 [2024-11-17 09:36:53.815717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.968 [2024-11-17 09:36:53.815752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.968 qpair failed and we were unable to recover it. 00:36:48.968 [2024-11-17 09:36:53.815867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.968 [2024-11-17 09:36:53.815901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.968 qpair failed and we were unable to recover it. 00:36:48.968 [2024-11-17 09:36:53.816063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.968 [2024-11-17 09:36:53.816097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.968 qpair failed and we were unable to recover it. 00:36:48.968 [2024-11-17 09:36:53.816232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.968 [2024-11-17 09:36:53.816265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.968 qpair failed and we were unable to recover it. 00:36:48.968 [2024-11-17 09:36:53.816382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.968 [2024-11-17 09:36:53.816423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.968 qpair failed and we were unable to recover it. 00:36:48.968 [2024-11-17 09:36:53.816566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.968 [2024-11-17 09:36:53.816601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.968 qpair failed and we were unable to recover it. 00:36:48.968 [2024-11-17 09:36:53.816708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.968 [2024-11-17 09:36:53.816742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.968 qpair failed and we were unable to recover it. 00:36:48.968 [2024-11-17 09:36:53.816870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.968 [2024-11-17 09:36:53.816903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.968 qpair failed and we were unable to recover it. 00:36:48.968 [2024-11-17 09:36:53.817033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.968 [2024-11-17 09:36:53.817066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.968 qpair failed and we were unable to recover it. 00:36:48.968 [2024-11-17 09:36:53.817198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.968 [2024-11-17 09:36:53.817230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.968 qpair failed and we were unable to recover it. 
00:36:48.968 [2024-11-17 09:36:53.817407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.968 [2024-11-17 09:36:53.817442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.968 qpair failed and we were unable to recover it. 00:36:48.968 [2024-11-17 09:36:53.817545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.968 [2024-11-17 09:36:53.817579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.968 qpair failed and we were unable to recover it. 00:36:48.968 [2024-11-17 09:36:53.817723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.968 [2024-11-17 09:36:53.817770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.968 qpair failed and we were unable to recover it. 00:36:48.968 [2024-11-17 09:36:53.817901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.968 [2024-11-17 09:36:53.817938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.968 qpair failed and we were unable to recover it. 00:36:48.968 [2024-11-17 09:36:53.818049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.968 [2024-11-17 09:36:53.818084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.968 qpair failed and we were unable to recover it. 00:36:48.968 [2024-11-17 09:36:53.818196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.968 [2024-11-17 09:36:53.818231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.968 qpair failed and we were unable to recover it. 00:36:48.968 [2024-11-17 09:36:53.818389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.968 [2024-11-17 09:36:53.818434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.968 qpair failed and we were unable to recover it. 00:36:48.968 [2024-11-17 09:36:53.818546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.968 [2024-11-17 09:36:53.818582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.968 qpair failed and we were unable to recover it. 00:36:48.968 [2024-11-17 09:36:53.818739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.968 [2024-11-17 09:36:53.818775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.968 qpair failed and we were unable to recover it. 00:36:48.968 [2024-11-17 09:36:53.818884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.968 [2024-11-17 09:36:53.818917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.968 qpair failed and we were unable to recover it. 
00:36:48.968 [2024-11-17 09:36:53.819054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.968 [2024-11-17 09:36:53.819088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.969 qpair failed and we were unable to recover it. 00:36:48.969 [2024-11-17 09:36:53.819221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.969 [2024-11-17 09:36:53.819254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.969 qpair failed and we were unable to recover it. 00:36:48.969 [2024-11-17 09:36:53.819356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.969 [2024-11-17 09:36:53.819402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.969 qpair failed and we were unable to recover it. 00:36:48.969 [2024-11-17 09:36:53.819560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.969 [2024-11-17 09:36:53.819593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.969 qpair failed and we were unable to recover it. 00:36:48.969 [2024-11-17 09:36:53.819730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.969 [2024-11-17 09:36:53.819762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.969 qpair failed and we were unable to recover it. 00:36:48.969 [2024-11-17 09:36:53.819900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.969 [2024-11-17 09:36:53.819933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.969 qpair failed and we were unable to recover it. 00:36:48.969 [2024-11-17 09:36:53.820061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.969 [2024-11-17 09:36:53.820095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.969 qpair failed and we were unable to recover it. 00:36:48.969 [2024-11-17 09:36:53.820230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.969 [2024-11-17 09:36:53.820266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.969 qpair failed and we were unable to recover it. 00:36:48.969 [2024-11-17 09:36:53.820419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.969 [2024-11-17 09:36:53.820454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.969 qpair failed and we were unable to recover it. 00:36:48.969 [2024-11-17 09:36:53.820597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.969 [2024-11-17 09:36:53.820640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.969 qpair failed and we were unable to recover it. 
00:36:48.969 [2024-11-17 09:36:53.820743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.969 [2024-11-17 09:36:53.820777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.969 qpair failed and we were unable to recover it. 00:36:48.969 [2024-11-17 09:36:53.820940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.969 [2024-11-17 09:36:53.820979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.969 qpair failed and we were unable to recover it. 00:36:48.969 [2024-11-17 09:36:53.821080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.969 [2024-11-17 09:36:53.821115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.969 qpair failed and we were unable to recover it. 00:36:48.969 [2024-11-17 09:36:53.821217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.969 [2024-11-17 09:36:53.821253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.969 qpair failed and we were unable to recover it. 00:36:48.969 [2024-11-17 09:36:53.821360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.969 [2024-11-17 09:36:53.821399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.969 qpair failed and we were unable to recover it. 00:36:48.969 [2024-11-17 09:36:53.821536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.969 [2024-11-17 09:36:53.821568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.969 qpair failed and we were unable to recover it. 00:36:48.969 [2024-11-17 09:36:53.821667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.969 [2024-11-17 09:36:53.821701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.969 qpair failed and we were unable to recover it. 00:36:48.969 [2024-11-17 09:36:53.821796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.969 [2024-11-17 09:36:53.821829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.969 qpair failed and we were unable to recover it. 00:36:48.969 [2024-11-17 09:36:53.821991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.969 [2024-11-17 09:36:53.822024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.969 qpair failed and we were unable to recover it. 00:36:48.969 [2024-11-17 09:36:53.822137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.969 [2024-11-17 09:36:53.822172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.969 qpair failed and we were unable to recover it. 
00:36:48.969 [2024-11-17 09:36:53.822375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.969 [2024-11-17 09:36:53.822410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.969 qpair failed and we were unable to recover it. 00:36:48.969 [2024-11-17 09:36:53.822510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.969 [2024-11-17 09:36:53.822544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.969 qpair failed and we were unable to recover it. 00:36:48.969 [2024-11-17 09:36:53.822644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.969 [2024-11-17 09:36:53.822678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.969 qpair failed and we were unable to recover it. 00:36:48.969 [2024-11-17 09:36:53.822790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.969 [2024-11-17 09:36:53.822824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.969 qpair failed and we were unable to recover it. 00:36:48.969 [2024-11-17 09:36:53.822962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.969 [2024-11-17 09:36:53.822995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.969 qpair failed and we were unable to recover it. 00:36:48.969 [2024-11-17 09:36:53.823104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.969 [2024-11-17 09:36:53.823139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.969 qpair failed and we were unable to recover it. 00:36:48.969 [2024-11-17 09:36:53.823276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.969 [2024-11-17 09:36:53.823310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.969 qpair failed and we were unable to recover it. 00:36:48.969 [2024-11-17 09:36:53.823463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.969 [2024-11-17 09:36:53.823512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.969 qpair failed and we were unable to recover it. 00:36:48.969 [2024-11-17 09:36:53.823633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.969 [2024-11-17 09:36:53.823672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.969 qpair failed and we were unable to recover it. 00:36:48.969 [2024-11-17 09:36:53.823810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.969 [2024-11-17 09:36:53.823845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.969 qpair failed and we were unable to recover it. 
00:36:48.969 [2024-11-17 09:36:53.823984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.969 [2024-11-17 09:36:53.824019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.969 qpair failed and we were unable to recover it. 00:36:48.969 [2024-11-17 09:36:53.824148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.969 [2024-11-17 09:36:53.824182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.969 qpair failed and we were unable to recover it. 00:36:48.969 [2024-11-17 09:36:53.824323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.969 [2024-11-17 09:36:53.824357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.969 qpair failed and we were unable to recover it. 00:36:48.969 [2024-11-17 09:36:53.824506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.969 [2024-11-17 09:36:53.824541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.969 qpair failed and we were unable to recover it. 00:36:48.969 [2024-11-17 09:36:53.824686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.969 [2024-11-17 09:36:53.824723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.969 qpair failed and we were unable to recover it. 00:36:48.969 [2024-11-17 09:36:53.824866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.969 [2024-11-17 09:36:53.824901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.969 qpair failed and we were unable to recover it. 00:36:48.969 [2024-11-17 09:36:53.825039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.969 [2024-11-17 09:36:53.825075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.969 qpair failed and we were unable to recover it. 00:36:48.969 [2024-11-17 09:36:53.825190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.969 [2024-11-17 09:36:53.825224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.969 qpair failed and we were unable to recover it. 00:36:48.969 [2024-11-17 09:36:53.825359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.969 [2024-11-17 09:36:53.825410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.969 qpair failed and we were unable to recover it. 00:36:48.969 [2024-11-17 09:36:53.825517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.969 [2024-11-17 09:36:53.825551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.969 qpair failed and we were unable to recover it. 
00:36:48.969 [2024-11-17 09:36:53.825689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.969 [2024-11-17 09:36:53.825723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.969 qpair failed and we were unable to recover it. 00:36:48.969 [2024-11-17 09:36:53.825836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.969 [2024-11-17 09:36:53.825870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.969 qpair failed and we were unable to recover it. 00:36:48.969 [2024-11-17 09:36:53.825975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.969 [2024-11-17 09:36:53.826009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.969 qpair failed and we were unable to recover it. 00:36:48.969 [2024-11-17 09:36:53.826125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.969 [2024-11-17 09:36:53.826166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.969 qpair failed and we were unable to recover it. 00:36:48.969 [2024-11-17 09:36:53.826305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.969 [2024-11-17 09:36:53.826339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.970 qpair failed and we were unable to recover it. 00:36:48.970 [2024-11-17 09:36:53.826472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.970 [2024-11-17 09:36:53.826520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.970 qpair failed and we were unable to recover it. 00:36:48.970 [2024-11-17 09:36:53.826668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.970 [2024-11-17 09:36:53.826704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.970 qpair failed and we were unable to recover it. 00:36:48.970 [2024-11-17 09:36:53.826844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.970 [2024-11-17 09:36:53.826878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.970 qpair failed and we were unable to recover it. 00:36:48.970 [2024-11-17 09:36:53.826981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.970 [2024-11-17 09:36:53.827015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.970 qpair failed and we were unable to recover it. 00:36:48.970 [2024-11-17 09:36:53.827119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.970 [2024-11-17 09:36:53.827152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.970 qpair failed and we were unable to recover it. 
00:36:48.970 [2024-11-17 09:36:53.827262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.970 [2024-11-17 09:36:53.827294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.970 qpair failed and we were unable to recover it. 00:36:48.970 [2024-11-17 09:36:53.827419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.970 [2024-11-17 09:36:53.827458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.970 qpair failed and we were unable to recover it. 00:36:48.970 [2024-11-17 09:36:53.827564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.970 [2024-11-17 09:36:53.827597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.970 qpair failed and we were unable to recover it. 00:36:48.970 [2024-11-17 09:36:53.827764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.970 [2024-11-17 09:36:53.827804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.970 qpair failed and we were unable to recover it. 00:36:48.970 [2024-11-17 09:36:53.827937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.970 [2024-11-17 09:36:53.827971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.970 qpair failed and we were unable to recover it. 00:36:48.970 [2024-11-17 09:36:53.828082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.970 [2024-11-17 09:36:53.828115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.970 qpair failed and we were unable to recover it. 00:36:48.970 [2024-11-17 09:36:53.828221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.970 [2024-11-17 09:36:53.828254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.970 qpair failed and we were unable to recover it. 00:36:48.970 [2024-11-17 09:36:53.828365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.970 [2024-11-17 09:36:53.828412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.970 qpair failed and we were unable to recover it. 00:36:48.970 [2024-11-17 09:36:53.828551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.970 [2024-11-17 09:36:53.828587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.970 qpair failed and we were unable to recover it. 00:36:48.970 [2024-11-17 09:36:53.828734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.970 [2024-11-17 09:36:53.828768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.970 qpair failed and we were unable to recover it. 
00:36:48.970 [2024-11-17 09:36:53.828912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.970 [2024-11-17 09:36:53.828946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.970 qpair failed and we were unable to recover it. 00:36:48.970 [2024-11-17 09:36:53.829078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.970 [2024-11-17 09:36:53.829112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.970 qpair failed and we were unable to recover it. 00:36:48.970 [2024-11-17 09:36:53.829224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.970 [2024-11-17 09:36:53.829264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.970 qpair failed and we were unable to recover it. 00:36:48.970 [2024-11-17 09:36:53.829420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.970 [2024-11-17 09:36:53.829470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.970 qpair failed and we were unable to recover it. 00:36:48.970 [2024-11-17 09:36:53.829629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.970 [2024-11-17 09:36:53.829664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.970 qpair failed and we were unable to recover it. 00:36:48.970 [2024-11-17 09:36:53.829809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.970 [2024-11-17 09:36:53.829843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.970 qpair failed and we were unable to recover it. 00:36:48.970 [2024-11-17 09:36:53.829976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.970 [2024-11-17 09:36:53.830009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.970 qpair failed and we were unable to recover it. 00:36:48.970 [2024-11-17 09:36:53.830158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.970 [2024-11-17 09:36:53.830192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.970 qpair failed and we were unable to recover it. 00:36:48.970 [2024-11-17 09:36:53.830331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.970 [2024-11-17 09:36:53.830365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.970 qpair failed and we were unable to recover it. 00:36:48.970 [2024-11-17 09:36:53.830491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.970 [2024-11-17 09:36:53.830525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.970 qpair failed and we were unable to recover it. 
00:36:48.970 [2024-11-17 09:36:53.830651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:48.970 [2024-11-17 09:36:53.830684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:36:48.970 qpair failed and we were unable to recover it.
00:36:48.975 [... the same three-message failure (posix_sock_create connect() errno = 111, nvme_tcp_qpair_connect_sock connection error, "qpair failed and we were unable to recover it.") repeats continuously through 2024-11-17 09:36:53.865354 for tqpair handles 0x6150001f2f00, 0x61500021ff00, 0x6150001ffe80, and 0x615000210000, always with addr=10.0.0.2, port=4420 ...]
00:36:48.975 [2024-11-17 09:36:53.865497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.975 [2024-11-17 09:36:53.865545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.975 qpair failed and we were unable to recover it. 00:36:48.975 [2024-11-17 09:36:53.865693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.975 [2024-11-17 09:36:53.865730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.975 qpair failed and we were unable to recover it. 00:36:48.975 [2024-11-17 09:36:53.865869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.975 [2024-11-17 09:36:53.865905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.975 qpair failed and we were unable to recover it. 00:36:48.975 [2024-11-17 09:36:53.866046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.975 [2024-11-17 09:36:53.866080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.975 qpair failed and we were unable to recover it. 00:36:48.975 [2024-11-17 09:36:53.866221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.975 [2024-11-17 09:36:53.866256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.975 qpair failed and we were unable to recover it. 00:36:48.975 [2024-11-17 09:36:53.866391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.975 [2024-11-17 09:36:53.866425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.975 qpair failed and we were unable to recover it. 00:36:48.975 [2024-11-17 09:36:53.866532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.975 [2024-11-17 09:36:53.866565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.975 qpair failed and we were unable to recover it. 00:36:48.975 [2024-11-17 09:36:53.866673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.975 [2024-11-17 09:36:53.866706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.975 qpair failed and we were unable to recover it. 00:36:48.975 [2024-11-17 09:36:53.866819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.975 [2024-11-17 09:36:53.866852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.975 qpair failed and we were unable to recover it. 00:36:48.975 [2024-11-17 09:36:53.866985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.975 [2024-11-17 09:36:53.867018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.975 qpair failed and we were unable to recover it. 
00:36:48.975 [2024-11-17 09:36:53.867157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.975 [2024-11-17 09:36:53.867193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.975 qpair failed and we were unable to recover it. 00:36:48.975 [2024-11-17 09:36:53.867337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.976 [2024-11-17 09:36:53.867384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.976 qpair failed and we were unable to recover it. 00:36:48.976 [2024-11-17 09:36:53.867530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.976 [2024-11-17 09:36:53.867565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.976 qpair failed and we were unable to recover it. 00:36:48.976 [2024-11-17 09:36:53.867748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.976 [2024-11-17 09:36:53.867782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.976 qpair failed and we were unable to recover it. 00:36:48.976 [2024-11-17 09:36:53.867917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.976 [2024-11-17 09:36:53.867955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.976 qpair failed and we were unable to recover it. 00:36:48.976 [2024-11-17 09:36:53.868130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.976 [2024-11-17 09:36:53.868189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.976 qpair failed and we were unable to recover it. 00:36:48.976 [2024-11-17 09:36:53.868314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.976 [2024-11-17 09:36:53.868347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.976 qpair failed and we were unable to recover it. 00:36:48.976 [2024-11-17 09:36:53.868505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.976 [2024-11-17 09:36:53.868538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.976 qpair failed and we were unable to recover it. 00:36:48.976 [2024-11-17 09:36:53.868638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.976 [2024-11-17 09:36:53.868672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.976 qpair failed and we were unable to recover it. 00:36:48.976 [2024-11-17 09:36:53.868778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.976 [2024-11-17 09:36:53.868811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.976 qpair failed and we were unable to recover it. 
00:36:48.976 [2024-11-17 09:36:53.868945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.976 [2024-11-17 09:36:53.868978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.976 qpair failed and we were unable to recover it. 00:36:48.976 [2024-11-17 09:36:53.869081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.976 [2024-11-17 09:36:53.869114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.976 qpair failed and we were unable to recover it. 00:36:48.976 [2024-11-17 09:36:53.869235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.976 [2024-11-17 09:36:53.869284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.976 qpair failed and we were unable to recover it. 00:36:48.976 [2024-11-17 09:36:53.869407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.976 [2024-11-17 09:36:53.869445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.976 qpair failed and we were unable to recover it. 00:36:48.976 [2024-11-17 09:36:53.869582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.976 [2024-11-17 09:36:53.869617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.976 qpair failed and we were unable to recover it. 00:36:48.976 [2024-11-17 09:36:53.869735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.976 [2024-11-17 09:36:53.869769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.976 qpair failed and we were unable to recover it. 00:36:48.976 [2024-11-17 09:36:53.869922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.976 [2024-11-17 09:36:53.869962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.976 qpair failed and we were unable to recover it. 00:36:48.976 [2024-11-17 09:36:53.870137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.976 [2024-11-17 09:36:53.870175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.976 qpair failed and we were unable to recover it. 00:36:48.976 [2024-11-17 09:36:53.870333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.976 [2024-11-17 09:36:53.870376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.976 qpair failed and we were unable to recover it. 00:36:48.976 [2024-11-17 09:36:53.870488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.976 [2024-11-17 09:36:53.870521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.976 qpair failed and we were unable to recover it. 
00:36:48.976 [2024-11-17 09:36:53.870650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.976 [2024-11-17 09:36:53.870683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.976 qpair failed and we were unable to recover it. 00:36:48.976 [2024-11-17 09:36:53.870794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.976 [2024-11-17 09:36:53.870827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.976 qpair failed and we were unable to recover it. 00:36:48.976 [2024-11-17 09:36:53.870960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.976 [2024-11-17 09:36:53.870992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.976 qpair failed and we were unable to recover it. 00:36:48.976 [2024-11-17 09:36:53.871093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.976 [2024-11-17 09:36:53.871126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.976 qpair failed and we were unable to recover it. 00:36:48.976 [2024-11-17 09:36:53.871228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.976 [2024-11-17 09:36:53.871264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.976 qpair failed and we were unable to recover it. 00:36:48.976 [2024-11-17 09:36:53.871422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.976 [2024-11-17 09:36:53.871457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.976 qpair failed and we were unable to recover it. 00:36:48.976 [2024-11-17 09:36:53.871570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.976 [2024-11-17 09:36:53.871604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.976 qpair failed and we were unable to recover it. 00:36:48.976 [2024-11-17 09:36:53.871721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.976 [2024-11-17 09:36:53.871755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.976 qpair failed and we were unable to recover it. 00:36:48.976 [2024-11-17 09:36:53.871911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.976 [2024-11-17 09:36:53.871958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:48.976 qpair failed and we were unable to recover it. 00:36:48.976 [2024-11-17 09:36:53.872123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.976 [2024-11-17 09:36:53.872165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:48.976 qpair failed and we were unable to recover it. 
00:36:48.976 [2024-11-17 09:36:53.872316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.976 [2024-11-17 09:36:53.872350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.976 qpair failed and we were unable to recover it. 00:36:48.976 [2024-11-17 09:36:53.872485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.976 [2024-11-17 09:36:53.872520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.976 qpair failed and we were unable to recover it. 00:36:48.976 [2024-11-17 09:36:53.872660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.976 [2024-11-17 09:36:53.872695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.976 qpair failed and we were unable to recover it. 00:36:48.976 [2024-11-17 09:36:53.872834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.976 [2024-11-17 09:36:53.872879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.976 qpair failed and we were unable to recover it. 00:36:48.976 [2024-11-17 09:36:53.873010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.976 [2024-11-17 09:36:53.873044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.976 qpair failed and we were unable to recover it. 00:36:48.976 [2024-11-17 09:36:53.873183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.976 [2024-11-17 09:36:53.873216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.976 qpair failed and we were unable to recover it. 00:36:48.976 [2024-11-17 09:36:53.873391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.977 [2024-11-17 09:36:53.873426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.977 qpair failed and we were unable to recover it. 00:36:48.977 [2024-11-17 09:36:53.873561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.977 [2024-11-17 09:36:53.873596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.977 qpair failed and we were unable to recover it. 00:36:48.977 [2024-11-17 09:36:53.873734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.977 [2024-11-17 09:36:53.873769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.977 qpair failed and we were unable to recover it. 00:36:48.977 [2024-11-17 09:36:53.873906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.977 [2024-11-17 09:36:53.873940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.977 qpair failed and we were unable to recover it. 
00:36:48.977 [2024-11-17 09:36:53.874056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.977 [2024-11-17 09:36:53.874091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.977 qpair failed and we were unable to recover it. 00:36:48.977 [2024-11-17 09:36:53.874225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.977 [2024-11-17 09:36:53.874265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.977 qpair failed and we were unable to recover it. 00:36:48.977 [2024-11-17 09:36:53.874401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.977 [2024-11-17 09:36:53.874436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.977 qpair failed and we were unable to recover it. 00:36:48.977 [2024-11-17 09:36:53.874569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.977 [2024-11-17 09:36:53.874604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.977 qpair failed and we were unable to recover it. 00:36:48.977 [2024-11-17 09:36:53.874719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.977 [2024-11-17 09:36:53.874753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.977 qpair failed and we were unable to recover it. 00:36:48.977 [2024-11-17 09:36:53.874890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.977 [2024-11-17 09:36:53.874924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.977 qpair failed and we were unable to recover it. 00:36:48.977 [2024-11-17 09:36:53.875033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.977 [2024-11-17 09:36:53.875067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.977 qpair failed and we were unable to recover it. 00:36:48.977 [2024-11-17 09:36:53.875225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.977 [2024-11-17 09:36:53.875259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.977 qpair failed and we were unable to recover it. 00:36:48.977 [2024-11-17 09:36:53.875425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.977 [2024-11-17 09:36:53.875459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.977 qpair failed and we were unable to recover it. 00:36:48.977 [2024-11-17 09:36:53.875606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.977 [2024-11-17 09:36:53.875654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.977 qpair failed and we were unable to recover it. 
00:36:48.977 [2024-11-17 09:36:53.875770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.977 [2024-11-17 09:36:53.875806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.977 qpair failed and we were unable to recover it. 00:36:48.977 [2024-11-17 09:36:53.875960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.977 [2024-11-17 09:36:53.875999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.977 qpair failed and we were unable to recover it. 00:36:48.977 [2024-11-17 09:36:53.876148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.977 [2024-11-17 09:36:53.876202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.977 qpair failed and we were unable to recover it. 00:36:48.977 [2024-11-17 09:36:53.876343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.977 [2024-11-17 09:36:53.876388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.977 qpair failed and we were unable to recover it. 00:36:48.977 [2024-11-17 09:36:53.876501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.977 [2024-11-17 09:36:53.876534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.977 qpair failed and we were unable to recover it. 00:36:48.977 [2024-11-17 09:36:53.876654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.977 [2024-11-17 09:36:53.876690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.977 qpair failed and we were unable to recover it. 00:36:48.977 [2024-11-17 09:36:53.876828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.977 [2024-11-17 09:36:53.876862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.977 qpair failed and we were unable to recover it. 00:36:48.977 [2024-11-17 09:36:53.876973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.977 [2024-11-17 09:36:53.877006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.977 qpair failed and we were unable to recover it. 00:36:48.977 [2024-11-17 09:36:53.877143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.977 [2024-11-17 09:36:53.877177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.977 qpair failed and we were unable to recover it. 00:36:48.977 [2024-11-17 09:36:53.877291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.977 [2024-11-17 09:36:53.877324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.977 qpair failed and we were unable to recover it. 
00:36:48.977 [2024-11-17 09:36:53.877466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.977 [2024-11-17 09:36:53.877500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.977 qpair failed and we were unable to recover it. 00:36:48.977 [2024-11-17 09:36:53.877629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.977 [2024-11-17 09:36:53.877663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.977 qpair failed and we were unable to recover it. 00:36:48.977 [2024-11-17 09:36:53.877795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.977 [2024-11-17 09:36:53.877829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.977 qpair failed and we were unable to recover it. 00:36:48.977 [2024-11-17 09:36:53.877985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.977 [2024-11-17 09:36:53.878022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.977 qpair failed and we were unable to recover it. 00:36:48.977 [2024-11-17 09:36:53.878178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.977 [2024-11-17 09:36:53.878218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.977 qpair failed and we were unable to recover it. 00:36:48.977 [2024-11-17 09:36:53.878336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.977 [2024-11-17 09:36:53.878395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.977 qpair failed and we were unable to recover it. 00:36:48.977 [2024-11-17 09:36:53.878502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.977 [2024-11-17 09:36:53.878536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.977 qpair failed and we were unable to recover it. 00:36:48.977 [2024-11-17 09:36:53.878641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.977 [2024-11-17 09:36:53.878675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.977 qpair failed and we were unable to recover it. 00:36:48.977 [2024-11-17 09:36:53.878817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.978 [2024-11-17 09:36:53.878855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.978 qpair failed and we were unable to recover it. 00:36:48.978 [2024-11-17 09:36:53.878958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.978 [2024-11-17 09:36:53.878993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.978 qpair failed and we were unable to recover it. 
00:36:48.978 [2024-11-17 09:36:53.879130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.978 [2024-11-17 09:36:53.879164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.978 qpair failed and we were unable to recover it. 00:36:48.978 [2024-11-17 09:36:53.879270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.978 [2024-11-17 09:36:53.879304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.978 qpair failed and we were unable to recover it. 00:36:48.978 [2024-11-17 09:36:53.879436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.978 [2024-11-17 09:36:53.879471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.978 qpair failed and we were unable to recover it. 00:36:48.978 [2024-11-17 09:36:53.879601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.978 [2024-11-17 09:36:53.879634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.978 qpair failed and we were unable to recover it. 00:36:48.978 [2024-11-17 09:36:53.879796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.978 [2024-11-17 09:36:53.879829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.978 qpair failed and we were unable to recover it. 00:36:48.978 [2024-11-17 09:36:53.879970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.978 [2024-11-17 09:36:53.880007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.978 qpair failed and we were unable to recover it. 00:36:48.978 [2024-11-17 09:36:53.880168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.978 [2024-11-17 09:36:53.880209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.978 qpair failed and we were unable to recover it. 00:36:48.978 [2024-11-17 09:36:53.880330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.978 [2024-11-17 09:36:53.880364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.978 qpair failed and we were unable to recover it. 00:36:48.978 [2024-11-17 09:36:53.880516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.978 [2024-11-17 09:36:53.880551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.978 qpair failed and we were unable to recover it. 00:36:48.978 [2024-11-17 09:36:53.880651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.978 [2024-11-17 09:36:53.880685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.978 qpair failed and we were unable to recover it. 
00:36:48.978 [2024-11-17 09:36:53.880802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.978 [2024-11-17 09:36:53.880835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.978 qpair failed and we were unable to recover it. 00:36:48.978 [2024-11-17 09:36:53.880972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.978 [2024-11-17 09:36:53.881006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.978 qpair failed and we were unable to recover it. 00:36:48.978 [2024-11-17 09:36:53.881115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.978 [2024-11-17 09:36:53.881149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.978 qpair failed and we were unable to recover it. 00:36:48.978 [2024-11-17 09:36:53.881259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.978 [2024-11-17 09:36:53.881292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.978 qpair failed and we were unable to recover it. 00:36:48.978 [2024-11-17 09:36:53.881398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.978 [2024-11-17 09:36:53.881432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.978 qpair failed and we were unable to recover it. 00:36:48.978 [2024-11-17 09:36:53.881538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.978 [2024-11-17 09:36:53.881572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.978 qpair failed and we were unable to recover it. 00:36:48.978 [2024-11-17 09:36:53.881667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.978 [2024-11-17 09:36:53.881701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.978 qpair failed and we were unable to recover it. 00:36:48.978 [2024-11-17 09:36:53.881816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.978 [2024-11-17 09:36:53.881850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.978 qpair failed and we were unable to recover it. 00:36:48.978 [2024-11-17 09:36:53.882042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.978 [2024-11-17 09:36:53.882082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.978 qpair failed and we were unable to recover it. 00:36:48.978 [2024-11-17 09:36:53.882200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.978 [2024-11-17 09:36:53.882238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.978 qpair failed and we were unable to recover it. 
00:36:48.978 [2024-11-17 09:36:53.882375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.978 [2024-11-17 09:36:53.882409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.978 qpair failed and we were unable to recover it. 00:36:48.978 [2024-11-17 09:36:53.882545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.978 [2024-11-17 09:36:53.882579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.978 qpair failed and we were unable to recover it. 00:36:48.978 [2024-11-17 09:36:53.882691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.978 [2024-11-17 09:36:53.882726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.978 qpair failed and we were unable to recover it. 00:36:48.978 [2024-11-17 09:36:53.882861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.978 [2024-11-17 09:36:53.882895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.978 qpair failed and we were unable to recover it. 00:36:48.978 [2024-11-17 09:36:53.883001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.978 [2024-11-17 09:36:53.883035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.978 qpair failed and we were unable to recover it. 00:36:48.978 [2024-11-17 09:36:53.883176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.978 [2024-11-17 09:36:53.883210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.978 qpair failed and we were unable to recover it. 00:36:48.978 [2024-11-17 09:36:53.883325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.978 [2024-11-17 09:36:53.883359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.978 qpair failed and we were unable to recover it. 00:36:48.978 [2024-11-17 09:36:53.883500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.978 [2024-11-17 09:36:53.883534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.978 qpair failed and we were unable to recover it. 00:36:48.978 [2024-11-17 09:36:53.883647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.978 [2024-11-17 09:36:53.883681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.978 qpair failed and we were unable to recover it. 00:36:48.978 [2024-11-17 09:36:53.883804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.978 [2024-11-17 09:36:53.883838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.978 qpair failed and we were unable to recover it. 
00:36:48.978 [2024-11-17 09:36:53.883954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.978 [2024-11-17 09:36:53.883988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.978 qpair failed and we were unable to recover it. 00:36:48.978 [2024-11-17 09:36:53.884096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.978 [2024-11-17 09:36:53.884131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.978 qpair failed and we were unable to recover it. 00:36:48.978 [2024-11-17 09:36:53.884294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.978 [2024-11-17 09:36:53.884346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.978 qpair failed and we were unable to recover it. 00:36:48.978 [2024-11-17 09:36:53.884505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.979 [2024-11-17 09:36:53.884540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.979 qpair failed and we were unable to recover it. 00:36:48.979 [2024-11-17 09:36:53.884650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.979 [2024-11-17 09:36:53.884685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.979 qpair failed and we were unable to recover it. 00:36:48.979 [2024-11-17 09:36:53.884813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.979 [2024-11-17 09:36:53.884863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.979 qpair failed and we were unable to recover it. 00:36:48.979 [2024-11-17 09:36:53.885025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.979 [2024-11-17 09:36:53.885082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.979 qpair failed and we were unable to recover it. 00:36:48.979 [2024-11-17 09:36:53.885221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.979 [2024-11-17 09:36:53.885256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.979 qpair failed and we were unable to recover it. 00:36:48.979 [2024-11-17 09:36:53.885392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.979 [2024-11-17 09:36:53.885431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.979 qpair failed and we were unable to recover it. 00:36:48.979 [2024-11-17 09:36:53.885582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.979 [2024-11-17 09:36:53.885630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.979 qpair failed and we were unable to recover it. 
00:36:48.979 [2024-11-17 09:36:53.885767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.979 [2024-11-17 09:36:53.885802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.979 qpair failed and we were unable to recover it. 00:36:48.979 [2024-11-17 09:36:53.885935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.979 [2024-11-17 09:36:53.885969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.979 qpair failed and we were unable to recover it. 00:36:48.979 [2024-11-17 09:36:53.886084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.979 [2024-11-17 09:36:53.886119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.979 qpair failed and we were unable to recover it. 00:36:48.979 [2024-11-17 09:36:53.886258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.979 [2024-11-17 09:36:53.886292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.979 qpair failed and we were unable to recover it. 00:36:48.979 [2024-11-17 09:36:53.886399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.979 [2024-11-17 09:36:53.886434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.979 qpair failed and we were unable to recover it. 00:36:48.979 [2024-11-17 09:36:53.886609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.979 [2024-11-17 09:36:53.886643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.979 qpair failed and we were unable to recover it. 00:36:48.979 [2024-11-17 09:36:53.886747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.979 [2024-11-17 09:36:53.886782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.979 qpair failed and we were unable to recover it. 00:36:48.979 [2024-11-17 09:36:53.886919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.979 [2024-11-17 09:36:53.886953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.979 qpair failed and we were unable to recover it. 00:36:48.979 [2024-11-17 09:36:53.887055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.979 [2024-11-17 09:36:53.887090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.979 qpair failed and we were unable to recover it. 00:36:48.979 [2024-11-17 09:36:53.887208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.979 [2024-11-17 09:36:53.887242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.979 qpair failed and we were unable to recover it. 
00:36:48.979 [2024-11-17 09:36:53.887378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.979 [2024-11-17 09:36:53.887430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.979 qpair failed and we were unable to recover it. 00:36:48.979 [2024-11-17 09:36:53.887567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.979 [2024-11-17 09:36:53.887601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.979 qpair failed and we were unable to recover it. 00:36:48.979 [2024-11-17 09:36:53.887744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.979 [2024-11-17 09:36:53.887778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.979 qpair failed and we were unable to recover it. 00:36:48.979 [2024-11-17 09:36:53.887933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.979 [2024-11-17 09:36:53.887968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.979 qpair failed and we were unable to recover it. 00:36:48.979 [2024-11-17 09:36:53.888099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.979 [2024-11-17 09:36:53.888144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.979 qpair failed and we were unable to recover it. 00:36:48.979 [2024-11-17 09:36:53.888277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.979 [2024-11-17 09:36:53.888311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.979 qpair failed and we were unable to recover it. 00:36:48.979 [2024-11-17 09:36:53.888418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.979 [2024-11-17 09:36:53.888453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.979 qpair failed and we were unable to recover it. 00:36:48.979 [2024-11-17 09:36:53.888600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.979 [2024-11-17 09:36:53.888648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.979 qpair failed and we were unable to recover it. 00:36:48.979 [2024-11-17 09:36:53.888766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.979 [2024-11-17 09:36:53.888802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.979 qpair failed and we were unable to recover it. 00:36:48.979 [2024-11-17 09:36:53.888909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.979 [2024-11-17 09:36:53.888942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.979 qpair failed and we were unable to recover it. 
00:36:48.979 [2024-11-17 09:36:53.889077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.979 [2024-11-17 09:36:53.889110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.979 qpair failed and we were unable to recover it. 00:36:48.979 [2024-11-17 09:36:53.889245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.979 [2024-11-17 09:36:53.889279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.979 qpair failed and we were unable to recover it. 00:36:48.979 [2024-11-17 09:36:53.889394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.979 [2024-11-17 09:36:53.889429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.979 qpair failed and we were unable to recover it. 00:36:48.979 [2024-11-17 09:36:53.889574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.979 [2024-11-17 09:36:53.889610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.979 qpair failed and we were unable to recover it. 00:36:48.979 [2024-11-17 09:36:53.889719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.979 [2024-11-17 09:36:53.889753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.980 qpair failed and we were unable to recover it. 00:36:48.980 [2024-11-17 09:36:53.889891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.980 [2024-11-17 09:36:53.889925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.980 qpair failed and we were unable to recover it. 00:36:48.980 [2024-11-17 09:36:53.890068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.980 [2024-11-17 09:36:53.890104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.980 qpair failed and we were unable to recover it. 00:36:48.980 [2024-11-17 09:36:53.890243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.980 [2024-11-17 09:36:53.890277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.980 qpair failed and we were unable to recover it. 00:36:48.980 [2024-11-17 09:36:53.890394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.980 [2024-11-17 09:36:53.890428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.980 qpair failed and we were unable to recover it. 00:36:48.980 [2024-11-17 09:36:53.890528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.980 [2024-11-17 09:36:53.890562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.980 qpair failed and we were unable to recover it. 
00:36:48.980 [2024-11-17 09:36:53.890662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.980 [2024-11-17 09:36:53.890696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.980 qpair failed and we were unable to recover it. 00:36:48.980 [2024-11-17 09:36:53.890829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.980 [2024-11-17 09:36:53.890862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.980 qpair failed and we were unable to recover it. 00:36:48.980 [2024-11-17 09:36:53.890971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.980 [2024-11-17 09:36:53.891004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.980 qpair failed and we were unable to recover it. 00:36:48.980 [2024-11-17 09:36:53.891117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.980 [2024-11-17 09:36:53.891150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.980 qpair failed and we were unable to recover it. 00:36:48.980 [2024-11-17 09:36:53.891281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.980 [2024-11-17 09:36:53.891314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.980 qpair failed and we were unable to recover it. 00:36:48.980 [2024-11-17 09:36:53.891451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.980 [2024-11-17 09:36:53.891487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.980 qpair failed and we were unable to recover it. 00:36:48.980 [2024-11-17 09:36:53.891602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.980 [2024-11-17 09:36:53.891636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.980 qpair failed and we were unable to recover it. 00:36:48.980 [2024-11-17 09:36:53.891742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.980 [2024-11-17 09:36:53.891776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.980 qpair failed and we were unable to recover it. 00:36:48.980 [2024-11-17 09:36:53.891912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.980 [2024-11-17 09:36:53.891950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.980 qpair failed and we were unable to recover it. 00:36:48.980 [2024-11-17 09:36:53.892078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.980 [2024-11-17 09:36:53.892111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.980 qpair failed and we were unable to recover it. 
00:36:48.980 [2024-11-17 09:36:53.892300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.980 [2024-11-17 09:36:53.892338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.980 qpair failed and we were unable to recover it. 00:36:48.980 [2024-11-17 09:36:53.892501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.980 [2024-11-17 09:36:53.892535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.980 qpair failed and we were unable to recover it. 00:36:48.980 [2024-11-17 09:36:53.892684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.980 [2024-11-17 09:36:53.892718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.980 qpair failed and we were unable to recover it. 00:36:48.980 [2024-11-17 09:36:53.892828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.980 [2024-11-17 09:36:53.892862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.980 qpair failed and we were unable to recover it. 00:36:48.980 [2024-11-17 09:36:53.892973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.980 [2024-11-17 09:36:53.893006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.980 qpair failed and we were unable to recover it. 00:36:48.980 [2024-11-17 09:36:53.893107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.980 [2024-11-17 09:36:53.893140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.980 qpair failed and we were unable to recover it. 00:36:48.980 [2024-11-17 09:36:53.893302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.980 [2024-11-17 09:36:53.893336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.980 qpair failed and we were unable to recover it. 00:36:48.980 [2024-11-17 09:36:53.893459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.980 [2024-11-17 09:36:53.893493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.980 qpair failed and we were unable to recover it. 00:36:48.980 [2024-11-17 09:36:53.893625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.980 [2024-11-17 09:36:53.893658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.980 qpair failed and we were unable to recover it. 00:36:48.980 [2024-11-17 09:36:53.893750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.980 [2024-11-17 09:36:53.893783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.980 qpair failed and we were unable to recover it. 
00:36:48.980 [2024-11-17 09:36:53.893886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.980 [2024-11-17 09:36:53.893920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.980 qpair failed and we were unable to recover it. 00:36:48.980 [2024-11-17 09:36:53.894073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.980 [2024-11-17 09:36:53.894106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.980 qpair failed and we were unable to recover it. 00:36:48.980 [2024-11-17 09:36:53.894248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.980 [2024-11-17 09:36:53.894281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.980 qpair failed and we were unable to recover it. 00:36:48.980 [2024-11-17 09:36:53.894444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.980 [2024-11-17 09:36:53.894477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.980 qpair failed and we were unable to recover it. 00:36:48.980 [2024-11-17 09:36:53.894609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.980 [2024-11-17 09:36:53.894642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.980 qpair failed and we were unable to recover it. 00:36:48.980 [2024-11-17 09:36:53.894776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.980 [2024-11-17 09:36:53.894809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.980 qpair failed and we were unable to recover it. 00:36:48.981 [2024-11-17 09:36:53.894940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.981 [2024-11-17 09:36:53.894973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.981 qpair failed and we were unable to recover it. 00:36:48.981 [2024-11-17 09:36:53.895106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.981 [2024-11-17 09:36:53.895139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.981 qpair failed and we were unable to recover it. 00:36:48.981 [2024-11-17 09:36:53.895274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.981 [2024-11-17 09:36:53.895306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.981 qpair failed and we were unable to recover it. 00:36:48.981 [2024-11-17 09:36:53.895425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.981 [2024-11-17 09:36:53.895460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.981 qpair failed and we were unable to recover it. 
00:36:48.981 [2024-11-17 09:36:53.895628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.981 [2024-11-17 09:36:53.895661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.981 qpair failed and we were unable to recover it. 00:36:48.981 [2024-11-17 09:36:53.895830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.981 [2024-11-17 09:36:53.895863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.981 qpair failed and we were unable to recover it. 00:36:48.981 [2024-11-17 09:36:53.895993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.981 [2024-11-17 09:36:53.896026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.981 qpair failed and we were unable to recover it. 00:36:48.981 [2024-11-17 09:36:53.896135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.981 [2024-11-17 09:36:53.896169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.981 qpair failed and we were unable to recover it. 00:36:48.981 [2024-11-17 09:36:53.896295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.981 [2024-11-17 09:36:53.896328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.981 qpair failed and we were unable to recover it. 00:36:48.981 [2024-11-17 09:36:53.896485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.981 [2024-11-17 09:36:53.896520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.981 qpair failed and we were unable to recover it. 00:36:48.981 [2024-11-17 09:36:53.896657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.981 [2024-11-17 09:36:53.896691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.981 qpair failed and we were unable to recover it. 00:36:48.981 [2024-11-17 09:36:53.896828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.981 [2024-11-17 09:36:53.896862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.981 qpair failed and we were unable to recover it. 00:36:48.981 [2024-11-17 09:36:53.897002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.981 [2024-11-17 09:36:53.897035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.981 qpair failed and we were unable to recover it. 00:36:48.981 [2024-11-17 09:36:53.897192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.981 [2024-11-17 09:36:53.897225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.981 qpair failed and we were unable to recover it. 
00:36:48.981 [2024-11-17 09:36:53.897329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.981 [2024-11-17 09:36:53.897363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.981 qpair failed and we were unable to recover it. 00:36:48.981 [2024-11-17 09:36:53.897482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.981 [2024-11-17 09:36:53.897515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.981 qpair failed and we were unable to recover it. 00:36:48.981 [2024-11-17 09:36:53.897647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.981 [2024-11-17 09:36:53.897679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.981 qpair failed and we were unable to recover it. 00:36:48.981 [2024-11-17 09:36:53.897808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.981 [2024-11-17 09:36:53.897841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.981 qpair failed and we were unable to recover it. 00:36:48.981 [2024-11-17 09:36:53.897973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.981 [2024-11-17 09:36:53.898006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.981 qpair failed and we were unable to recover it. 00:36:48.981 [2024-11-17 09:36:53.898105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.981 [2024-11-17 09:36:53.898138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.981 qpair failed and we were unable to recover it. 00:36:48.981 [2024-11-17 09:36:53.898292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.981 [2024-11-17 09:36:53.898329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.981 qpair failed and we were unable to recover it. 00:36:48.981 [2024-11-17 09:36:53.898509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.981 [2024-11-17 09:36:53.898543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.981 qpair failed and we were unable to recover it. 00:36:48.981 [2024-11-17 09:36:53.898670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.981 [2024-11-17 09:36:53.898707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.981 qpair failed and we were unable to recover it. 00:36:48.981 [2024-11-17 09:36:53.898839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.981 [2024-11-17 09:36:53.898872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.981 qpair failed and we were unable to recover it. 
00:36:48.981 [2024-11-17 09:36:53.898981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.981 [2024-11-17 09:36:53.899014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.981 qpair failed and we were unable to recover it. 00:36:48.981 [2024-11-17 09:36:53.899124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.981 [2024-11-17 09:36:53.899157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.981 qpair failed and we were unable to recover it. 00:36:48.981 [2024-11-17 09:36:53.899287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.981 [2024-11-17 09:36:53.899320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.981 qpair failed and we were unable to recover it. 00:36:48.981 [2024-11-17 09:36:53.899489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.981 [2024-11-17 09:36:53.899523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.981 qpair failed and we were unable to recover it. 00:36:48.981 [2024-11-17 09:36:53.899627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.981 [2024-11-17 09:36:53.899661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.981 qpair failed and we were unable to recover it. 00:36:48.981 [2024-11-17 09:36:53.899773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.981 [2024-11-17 09:36:53.899806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.981 qpair failed and we were unable to recover it. 00:36:48.981 [2024-11-17 09:36:53.899942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.981 [2024-11-17 09:36:53.899976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.981 qpair failed and we were unable to recover it. 00:36:48.982 [2024-11-17 09:36:53.900082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.982 [2024-11-17 09:36:53.900115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.982 qpair failed and we were unable to recover it. 00:36:48.982 [2024-11-17 09:36:53.900250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.982 [2024-11-17 09:36:53.900290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.982 qpair failed and we were unable to recover it. 00:36:48.982 [2024-11-17 09:36:53.900456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.982 [2024-11-17 09:36:53.900505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.982 qpair failed and we were unable to recover it. 
00:36:48.982 [2024-11-17 09:36:53.900656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.982 [2024-11-17 09:36:53.900693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.982 qpair failed and we were unable to recover it. 00:36:48.982 [2024-11-17 09:36:53.900843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.982 [2024-11-17 09:36:53.900878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.982 qpair failed and we were unable to recover it. 00:36:48.982 [2024-11-17 09:36:53.901027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.982 [2024-11-17 09:36:53.901063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.982 qpair failed and we were unable to recover it. 00:36:48.982 [2024-11-17 09:36:53.901199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.982 [2024-11-17 09:36:53.901233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.982 qpair failed and we were unable to recover it. 00:36:48.982 [2024-11-17 09:36:53.901381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.982 [2024-11-17 09:36:53.901415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.982 qpair failed and we were unable to recover it. 00:36:48.982 [2024-11-17 09:36:53.901576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.982 [2024-11-17 09:36:53.901610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.982 qpair failed and we were unable to recover it. 00:36:48.982 [2024-11-17 09:36:53.901776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.982 [2024-11-17 09:36:53.901810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.982 qpair failed and we were unable to recover it. 00:36:48.982 [2024-11-17 09:36:53.901932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.982 [2024-11-17 09:36:53.901967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.982 qpair failed and we were unable to recover it. 00:36:48.982 [2024-11-17 09:36:53.902108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.982 [2024-11-17 09:36:53.902143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.982 qpair failed and we were unable to recover it. 00:36:48.982 [2024-11-17 09:36:53.902282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.982 [2024-11-17 09:36:53.902315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.982 qpair failed and we were unable to recover it. 
00:36:48.982 [2024-11-17 09:36:53.902432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.982 [2024-11-17 09:36:53.902466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.982 qpair failed and we were unable to recover it. 00:36:48.982 [2024-11-17 09:36:53.902600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.982 [2024-11-17 09:36:53.902633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.982 qpair failed and we were unable to recover it. 00:36:48.982 [2024-11-17 09:36:53.902766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.982 [2024-11-17 09:36:53.902800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.982 qpair failed and we were unable to recover it. 00:36:48.982 [2024-11-17 09:36:53.902931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.982 [2024-11-17 09:36:53.902965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.982 qpair failed and we were unable to recover it. 00:36:48.982 [2024-11-17 09:36:53.903099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.982 [2024-11-17 09:36:53.903132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.982 qpair failed and we were unable to recover it. 00:36:48.982 [2024-11-17 09:36:53.903236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.982 [2024-11-17 09:36:53.903270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.982 qpair failed and we were unable to recover it. 00:36:48.982 [2024-11-17 09:36:53.903411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.982 [2024-11-17 09:36:53.903445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.982 qpair failed and we were unable to recover it. 00:36:48.982 [2024-11-17 09:36:53.903556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.982 [2024-11-17 09:36:53.903591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.982 qpair failed and we were unable to recover it. 00:36:48.982 [2024-11-17 09:36:53.903724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.982 [2024-11-17 09:36:53.903758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.982 qpair failed and we were unable to recover it. 00:36:48.982 [2024-11-17 09:36:53.903925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.982 [2024-11-17 09:36:53.903959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.982 qpair failed and we were unable to recover it. 
00:36:48.982 [2024-11-17 09:36:53.904094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.982 [2024-11-17 09:36:53.904128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.982 qpair failed and we were unable to recover it. 00:36:48.982 [2024-11-17 09:36:53.904267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.982 [2024-11-17 09:36:53.904300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.982 qpair failed and we were unable to recover it. 00:36:48.982 [2024-11-17 09:36:53.904437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.982 [2024-11-17 09:36:53.904472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.982 qpair failed and we were unable to recover it. 00:36:48.982 [2024-11-17 09:36:53.904593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.982 [2024-11-17 09:36:53.904629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.982 qpair failed and we were unable to recover it. 00:36:48.982 [2024-11-17 09:36:53.904736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.982 [2024-11-17 09:36:53.904770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.982 qpair failed and we were unable to recover it. 00:36:48.982 [2024-11-17 09:36:53.904897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.982 [2024-11-17 09:36:53.904930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.982 qpair failed and we were unable to recover it. 00:36:48.982 [2024-11-17 09:36:53.905069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.982 [2024-11-17 09:36:53.905102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.982 qpair failed and we were unable to recover it. 00:36:48.982 [2024-11-17 09:36:53.905260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.982 [2024-11-17 09:36:53.905293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.982 qpair failed and we were unable to recover it. 00:36:48.982 [2024-11-17 09:36:53.905426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.982 [2024-11-17 09:36:53.905464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.982 qpair failed and we were unable to recover it. 00:36:48.982 [2024-11-17 09:36:53.905617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.982 [2024-11-17 09:36:53.905652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.982 qpair failed and we were unable to recover it. 
00:36:48.982 [2024-11-17 09:36:53.905791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.983 [2024-11-17 09:36:53.905825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.983 qpair failed and we were unable to recover it. 00:36:48.983 [2024-11-17 09:36:53.905922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.983 [2024-11-17 09:36:53.905956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.983 qpair failed and we were unable to recover it. 00:36:48.983 [2024-11-17 09:36:53.906116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.983 [2024-11-17 09:36:53.906150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.983 qpair failed and we were unable to recover it. 00:36:48.983 [2024-11-17 09:36:53.906313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.983 [2024-11-17 09:36:53.906347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.983 qpair failed and we were unable to recover it. 00:36:48.983 [2024-11-17 09:36:53.906497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.983 [2024-11-17 09:36:53.906532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.983 qpair failed and we were unable to recover it. 00:36:48.983 [2024-11-17 09:36:53.906639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.983 [2024-11-17 09:36:53.906673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.983 qpair failed and we were unable to recover it. 00:36:48.983 [2024-11-17 09:36:53.906784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.983 [2024-11-17 09:36:53.906817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.983 qpair failed and we were unable to recover it. 00:36:48.983 [2024-11-17 09:36:53.906973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.983 [2024-11-17 09:36:53.907006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.983 qpair failed and we were unable to recover it. 00:36:48.983 [2024-11-17 09:36:53.907111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.983 [2024-11-17 09:36:53.907145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.983 qpair failed and we were unable to recover it. 00:36:48.983 [2024-11-17 09:36:53.907279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.983 [2024-11-17 09:36:53.907313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.983 qpair failed and we were unable to recover it. 
00:36:48.983 [2024-11-17 09:36:53.907457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.983 [2024-11-17 09:36:53.907492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.983 qpair failed and we were unable to recover it. 00:36:48.983 [2024-11-17 09:36:53.907630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.983 [2024-11-17 09:36:53.907666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.983 qpair failed and we were unable to recover it. 00:36:48.983 [2024-11-17 09:36:53.907833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.983 [2024-11-17 09:36:53.907866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.983 qpair failed and we were unable to recover it. 00:36:48.983 [2024-11-17 09:36:53.908002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.983 [2024-11-17 09:36:53.908036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.983 qpair failed and we were unable to recover it. 00:36:48.983 [2024-11-17 09:36:53.908161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.983 [2024-11-17 09:36:53.908195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.983 qpair failed and we were unable to recover it. 00:36:48.983 [2024-11-17 09:36:53.908320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.983 [2024-11-17 09:36:53.908354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.983 qpair failed and we were unable to recover it. 00:36:48.983 [2024-11-17 09:36:53.908458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.983 [2024-11-17 09:36:53.908492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.983 qpair failed and we were unable to recover it. 00:36:48.983 [2024-11-17 09:36:53.908627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.983 [2024-11-17 09:36:53.908662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.983 qpair failed and we were unable to recover it. 00:36:48.983 [2024-11-17 09:36:53.908824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.983 [2024-11-17 09:36:53.908857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.983 qpair failed and we were unable to recover it. 00:36:48.983 [2024-11-17 09:36:53.908962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.983 [2024-11-17 09:36:53.908996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.983 qpair failed and we were unable to recover it. 
00:36:48.983 [2024-11-17 09:36:53.909119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.983 [2024-11-17 09:36:53.909152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.983 qpair failed and we were unable to recover it. 00:36:48.983 [2024-11-17 09:36:53.909286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.983 [2024-11-17 09:36:53.909320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.983 qpair failed and we were unable to recover it. 00:36:48.983 [2024-11-17 09:36:53.909497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.983 [2024-11-17 09:36:53.909531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.983 qpair failed and we were unable to recover it. 00:36:48.983 [2024-11-17 09:36:53.909684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.983 [2024-11-17 09:36:53.909719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.983 qpair failed and we were unable to recover it. 00:36:48.983 [2024-11-17 09:36:53.909855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.983 [2024-11-17 09:36:53.909889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.983 qpair failed and we were unable to recover it. 00:36:48.983 [2024-11-17 09:36:53.910075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.983 [2024-11-17 09:36:53.910110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.983 qpair failed and we were unable to recover it. 00:36:48.983 [2024-11-17 09:36:53.910270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.983 [2024-11-17 09:36:53.910304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.983 qpair failed and we were unable to recover it. 00:36:48.983 [2024-11-17 09:36:53.910472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.983 [2024-11-17 09:36:53.910506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.983 qpair failed and we were unable to recover it. 00:36:48.983 [2024-11-17 09:36:53.910611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.983 [2024-11-17 09:36:53.910645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.983 qpair failed and we were unable to recover it. 00:36:48.983 [2024-11-17 09:36:53.910784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.983 [2024-11-17 09:36:53.910818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.983 qpair failed and we were unable to recover it. 
00:36:48.983 [2024-11-17 09:36:53.910950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.983 [2024-11-17 09:36:53.910983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.983 qpair failed and we were unable to recover it. 00:36:48.983 [2024-11-17 09:36:53.911141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.983 [2024-11-17 09:36:53.911175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.983 qpair failed and we were unable to recover it. 00:36:48.983 [2024-11-17 09:36:53.911314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.983 [2024-11-17 09:36:53.911348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.983 qpair failed and we were unable to recover it. 00:36:48.983 [2024-11-17 09:36:53.911460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.984 [2024-11-17 09:36:53.911493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.984 qpair failed and we were unable to recover it. 00:36:48.984 [2024-11-17 09:36:53.911629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.984 [2024-11-17 09:36:53.911662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.984 qpair failed and we were unable to recover it. 00:36:48.984 [2024-11-17 09:36:53.911790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.984 [2024-11-17 09:36:53.911824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.984 qpair failed and we were unable to recover it. 00:36:48.984 [2024-11-17 09:36:53.911924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.984 [2024-11-17 09:36:53.911957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.984 qpair failed and we were unable to recover it. 00:36:48.984 [2024-11-17 09:36:53.912068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.984 [2024-11-17 09:36:53.912101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.984 qpair failed and we were unable to recover it. 00:36:48.984 [2024-11-17 09:36:53.912221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.984 [2024-11-17 09:36:53.912276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.984 qpair failed and we were unable to recover it. 00:36:48.984 [2024-11-17 09:36:53.912425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.984 [2024-11-17 09:36:53.912463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.984 qpair failed and we were unable to recover it. 
00:36:48.984 [2024-11-17 09:36:53.912632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.984 [2024-11-17 09:36:53.912667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.984 qpair failed and we were unable to recover it. 00:36:48.984 [2024-11-17 09:36:53.912803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.984 [2024-11-17 09:36:53.912837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.984 qpair failed and we were unable to recover it. 00:36:48.984 [2024-11-17 09:36:53.912972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.984 [2024-11-17 09:36:53.913005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.984 qpair failed and we were unable to recover it. 00:36:48.984 [2024-11-17 09:36:53.913137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.984 [2024-11-17 09:36:53.913170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.984 qpair failed and we were unable to recover it. 00:36:48.984 [2024-11-17 09:36:53.913304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.984 [2024-11-17 09:36:53.913337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.984 qpair failed and we were unable to recover it. 00:36:48.984 [2024-11-17 09:36:53.913491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.984 [2024-11-17 09:36:53.913524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.984 qpair failed and we were unable to recover it. 00:36:48.984 [2024-11-17 09:36:53.913632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.984 [2024-11-17 09:36:53.913665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.984 qpair failed and we were unable to recover it. 00:36:48.984 [2024-11-17 09:36:53.913789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.984 [2024-11-17 09:36:53.913822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.984 qpair failed and we were unable to recover it. 00:36:48.984 [2024-11-17 09:36:53.913978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.984 [2024-11-17 09:36:53.914012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.984 qpair failed and we were unable to recover it. 00:36:48.984 [2024-11-17 09:36:53.914172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.984 [2024-11-17 09:36:53.914206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.984 qpair failed and we were unable to recover it. 
00:36:48.984 [2024-11-17 09:36:53.914361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.984 [2024-11-17 09:36:53.914417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.984 qpair failed and we were unable to recover it. 00:36:48.984 [2024-11-17 09:36:53.914543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.984 [2024-11-17 09:36:53.914580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.984 qpair failed and we were unable to recover it. 00:36:48.984 [2024-11-17 09:36:53.914726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.984 [2024-11-17 09:36:53.914762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.984 qpair failed and we were unable to recover it. 00:36:48.984 [2024-11-17 09:36:53.914909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.984 [2024-11-17 09:36:53.914945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.984 qpair failed and we were unable to recover it. 00:36:48.984 [2024-11-17 09:36:53.915080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.984 [2024-11-17 09:36:53.915114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.984 qpair failed and we were unable to recover it. 00:36:48.984 [2024-11-17 09:36:53.915273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.984 [2024-11-17 09:36:53.915307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.984 qpair failed and we were unable to recover it. 00:36:48.984 [2024-11-17 09:36:53.915458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.984 [2024-11-17 09:36:53.915493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.984 qpair failed and we were unable to recover it. 00:36:48.984 [2024-11-17 09:36:53.915605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.984 [2024-11-17 09:36:53.915638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.984 qpair failed and we were unable to recover it. 00:36:48.984 [2024-11-17 09:36:53.915776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.984 [2024-11-17 09:36:53.915809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.984 qpair failed and we were unable to recover it. 00:36:48.984 [2024-11-17 09:36:53.915967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.984 [2024-11-17 09:36:53.916000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.984 qpair failed and we were unable to recover it. 
00:36:48.984 [2024-11-17 09:36:53.916110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.984 [2024-11-17 09:36:53.916143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.984 qpair failed and we were unable to recover it. 00:36:48.984 [2024-11-17 09:36:53.916324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.984 [2024-11-17 09:36:53.916361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.984 qpair failed and we were unable to recover it. 00:36:48.984 [2024-11-17 09:36:53.916532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.984 [2024-11-17 09:36:53.916565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.984 qpair failed and we were unable to recover it. 00:36:48.984 [2024-11-17 09:36:53.916687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.984 [2024-11-17 09:36:53.916721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.984 qpair failed and we were unable to recover it. 00:36:48.985 [2024-11-17 09:36:53.916881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.985 [2024-11-17 09:36:53.916914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.985 qpair failed and we were unable to recover it. 00:36:48.985 [2024-11-17 09:36:53.917051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.985 [2024-11-17 09:36:53.917084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.985 qpair failed and we were unable to recover it. 00:36:48.985 [2024-11-17 09:36:53.917222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.985 [2024-11-17 09:36:53.917255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.985 qpair failed and we were unable to recover it. 00:36:48.985 [2024-11-17 09:36:53.917390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.985 [2024-11-17 09:36:53.917425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.985 qpair failed and we were unable to recover it. 00:36:48.985 [2024-11-17 09:36:53.917557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.985 [2024-11-17 09:36:53.917590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.985 qpair failed and we were unable to recover it. 00:36:48.985 [2024-11-17 09:36:53.917742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.985 [2024-11-17 09:36:53.917790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.985 qpair failed and we were unable to recover it. 
00:36:48.985 [2024-11-17 09:36:53.917947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.985 [2024-11-17 09:36:53.917984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.985 qpair failed and we were unable to recover it. 00:36:48.985 [2024-11-17 09:36:53.918113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.985 [2024-11-17 09:36:53.918147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.985 qpair failed and we were unable to recover it. 00:36:48.985 [2024-11-17 09:36:53.918286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.985 [2024-11-17 09:36:53.918320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.985 qpair failed and we were unable to recover it. 00:36:48.985 [2024-11-17 09:36:53.918439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.985 [2024-11-17 09:36:53.918474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.985 qpair failed and we were unable to recover it. 00:36:48.985 [2024-11-17 09:36:53.918636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.985 [2024-11-17 09:36:53.918684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.985 qpair failed and we were unable to recover it. 00:36:48.985 [2024-11-17 09:36:53.918825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.985 [2024-11-17 09:36:53.918860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.985 qpair failed and we were unable to recover it. 00:36:48.985 [2024-11-17 09:36:53.918997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.985 [2024-11-17 09:36:53.919030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.985 qpair failed and we were unable to recover it. 00:36:48.985 [2024-11-17 09:36:53.919172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.985 [2024-11-17 09:36:53.919206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.985 qpair failed and we were unable to recover it. 00:36:48.985 [2024-11-17 09:36:53.919340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.985 [2024-11-17 09:36:53.919389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.985 qpair failed and we were unable to recover it. 00:36:48.985 [2024-11-17 09:36:53.919559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.985 [2024-11-17 09:36:53.919593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.985 qpair failed and we were unable to recover it. 
00:36:48.985 [2024-11-17 09:36:53.919729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.985 [2024-11-17 09:36:53.919763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.985 qpair failed and we were unable to recover it. 00:36:48.985 [2024-11-17 09:36:53.919924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.985 [2024-11-17 09:36:53.919958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.985 qpair failed and we were unable to recover it. 00:36:48.985 [2024-11-17 09:36:53.920083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.985 [2024-11-17 09:36:53.920116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.985 qpair failed and we were unable to recover it. 00:36:48.985 [2024-11-17 09:36:53.920255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.985 [2024-11-17 09:36:53.920288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.985 qpair failed and we were unable to recover it. 00:36:48.985 [2024-11-17 09:36:53.920400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.985 [2024-11-17 09:36:53.920433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.985 qpair failed and we were unable to recover it. 00:36:48.986 [2024-11-17 09:36:53.920594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.986 [2024-11-17 09:36:53.920628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.986 qpair failed and we were unable to recover it. 00:36:48.986 [2024-11-17 09:36:53.920744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.986 [2024-11-17 09:36:53.920777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.986 qpair failed and we were unable to recover it. 00:36:48.986 [2024-11-17 09:36:53.920894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.986 [2024-11-17 09:36:53.920927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.986 qpair failed and we were unable to recover it. 00:36:48.986 [2024-11-17 09:36:53.921051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.986 [2024-11-17 09:36:53.921085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.986 qpair failed and we were unable to recover it. 00:36:48.986 [2024-11-17 09:36:53.921192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.986 [2024-11-17 09:36:53.921243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.986 qpair failed and we were unable to recover it. 
00:36:48.986 [2024-11-17 09:36:53.921424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.986 [2024-11-17 09:36:53.921473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.986 qpair failed and we were unable to recover it. 00:36:48.986 [2024-11-17 09:36:53.921594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.986 [2024-11-17 09:36:53.921631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.986 qpair failed and we were unable to recover it. 00:36:48.986 [2024-11-17 09:36:53.921771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.986 [2024-11-17 09:36:53.921805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.986 qpair failed and we were unable to recover it. 00:36:48.986 [2024-11-17 09:36:53.921938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.986 [2024-11-17 09:36:53.921971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.986 qpair failed and we were unable to recover it. 00:36:48.986 [2024-11-17 09:36:53.922090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.986 [2024-11-17 09:36:53.922126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.986 qpair failed and we were unable to recover it. 00:36:48.986 [2024-11-17 09:36:53.922237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.986 [2024-11-17 09:36:53.922271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.986 qpair failed and we were unable to recover it. 00:36:48.986 [2024-11-17 09:36:53.922398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.986 [2024-11-17 09:36:53.922447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.986 qpair failed and we were unable to recover it. 00:36:48.986 [2024-11-17 09:36:53.922580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.986 [2024-11-17 09:36:53.922616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.986 qpair failed and we were unable to recover it. 00:36:48.986 [2024-11-17 09:36:53.922756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.986 [2024-11-17 09:36:53.922791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.986 qpair failed and we were unable to recover it. 00:36:48.986 [2024-11-17 09:36:53.922948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.986 [2024-11-17 09:36:53.922982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.986 qpair failed and we were unable to recover it. 
00:36:48.986 [2024-11-17 09:36:53.923113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.986 [2024-11-17 09:36:53.923147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:48.986 qpair failed and we were unable to recover it. 00:36:48.986 [2024-11-17 09:36:53.923257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.986 [2024-11-17 09:36:53.923293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:48.986 qpair failed and we were unable to recover it. 00:36:48.986 [2024-11-17 09:36:53.923431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.986 [2024-11-17 09:36:53.923467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.986 qpair failed and we were unable to recover it. 00:36:48.986 [2024-11-17 09:36:53.923606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.986 [2024-11-17 09:36:53.923639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:48.986 qpair failed and we were unable to recover it. 00:36:49.269 [2024-11-17 09:36:53.923750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.269 [2024-11-17 09:36:53.923783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.269 qpair failed and we were unable to recover it. 00:36:49.269 [2024-11-17 09:36:53.923897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.269 [2024-11-17 09:36:53.923930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.269 qpair failed and we were unable to recover it. 00:36:49.269 [2024-11-17 09:36:53.924027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.269 [2024-11-17 09:36:53.924060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.269 qpair failed and we were unable to recover it. 00:36:49.269 [2024-11-17 09:36:53.924197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.269 [2024-11-17 09:36:53.924246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.269 qpair failed and we were unable to recover it. 00:36:49.269 [2024-11-17 09:36:53.924392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.269 [2024-11-17 09:36:53.924429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.269 qpair failed and we were unable to recover it. 00:36:49.269 [2024-11-17 09:36:53.924527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.269 [2024-11-17 09:36:53.924561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.269 qpair failed and we were unable to recover it. 
00:36:49.269 [2024-11-17 09:36:53.924691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.269 [2024-11-17 09:36:53.924725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.269 qpair failed and we were unable to recover it. 00:36:49.269 [2024-11-17 09:36:53.924865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.269 [2024-11-17 09:36:53.924899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.269 qpair failed and we were unable to recover it. 00:36:49.269 [2024-11-17 09:36:53.925009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.269 [2024-11-17 09:36:53.925042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.269 qpair failed and we were unable to recover it. 00:36:49.269 [2024-11-17 09:36:53.925177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.269 [2024-11-17 09:36:53.925211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.269 qpair failed and we were unable to recover it. 00:36:49.269 [2024-11-17 09:36:53.925337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.269 [2024-11-17 09:36:53.925394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.269 qpair failed and we were unable to recover it. 00:36:49.269 [2024-11-17 09:36:53.925508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.269 [2024-11-17 09:36:53.925544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.269 qpair failed and we were unable to recover it. 00:36:49.269 [2024-11-17 09:36:53.925681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.269 [2024-11-17 09:36:53.925716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.269 qpair failed and we were unable to recover it. 00:36:49.269 [2024-11-17 09:36:53.925828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.269 [2024-11-17 09:36:53.925862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.269 qpair failed and we were unable to recover it. 00:36:49.269 [2024-11-17 09:36:53.925971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.269 [2024-11-17 09:36:53.926009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.269 qpair failed and we were unable to recover it. 00:36:49.269 [2024-11-17 09:36:53.926121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.269 [2024-11-17 09:36:53.926156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.269 qpair failed and we were unable to recover it. 
00:36:49.269 [2024-11-17 09:36:53.926259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.269 [2024-11-17 09:36:53.926314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.269 qpair failed and we were unable to recover it. 00:36:49.269 [2024-11-17 09:36:53.926486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.269 [2024-11-17 09:36:53.926520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.269 qpair failed and we were unable to recover it. 00:36:49.269 [2024-11-17 09:36:53.926625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.269 [2024-11-17 09:36:53.926659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.269 qpair failed and we were unable to recover it. 00:36:49.269 [2024-11-17 09:36:53.926765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.269 [2024-11-17 09:36:53.926799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.269 qpair failed and we were unable to recover it. 00:36:49.269 [2024-11-17 09:36:53.926933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.269 [2024-11-17 09:36:53.926967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.269 qpair failed and we were unable to recover it. 00:36:49.269 [2024-11-17 09:36:53.927144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.269 [2024-11-17 09:36:53.927182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.269 qpair failed and we were unable to recover it. 00:36:49.269 [2024-11-17 09:36:53.927295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.269 [2024-11-17 09:36:53.927329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.269 qpair failed and we were unable to recover it. 00:36:49.269 [2024-11-17 09:36:53.927442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.269 [2024-11-17 09:36:53.927475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.269 qpair failed and we were unable to recover it. 00:36:49.269 [2024-11-17 09:36:53.927613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.269 [2024-11-17 09:36:53.927647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.270 qpair failed and we were unable to recover it. 00:36:49.270 [2024-11-17 09:36:53.927786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.270 [2024-11-17 09:36:53.927819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.270 qpair failed and we were unable to recover it. 
00:36:49.270 [2024-11-17 09:36:53.927979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.270 [2024-11-17 09:36:53.928012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.270 qpair failed and we were unable to recover it. 00:36:49.270 [2024-11-17 09:36:53.928117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.270 [2024-11-17 09:36:53.928150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.270 qpair failed and we were unable to recover it. 00:36:49.270 [2024-11-17 09:36:53.928278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.270 [2024-11-17 09:36:53.928339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.270 qpair failed and we were unable to recover it. 00:36:49.270 [2024-11-17 09:36:53.928457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.270 [2024-11-17 09:36:53.928495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.270 qpair failed and we were unable to recover it. 00:36:49.270 [2024-11-17 09:36:53.928599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.270 [2024-11-17 09:36:53.928634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.270 qpair failed and we were unable to recover it. 00:36:49.270 [2024-11-17 09:36:53.928768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.270 [2024-11-17 09:36:53.928802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.270 qpair failed and we were unable to recover it. 00:36:49.270 [2024-11-17 09:36:53.928937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.270 [2024-11-17 09:36:53.928971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.270 qpair failed and we were unable to recover it. 00:36:49.270 [2024-11-17 09:36:53.929111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.270 [2024-11-17 09:36:53.929145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.270 qpair failed and we were unable to recover it. 00:36:49.270 [2024-11-17 09:36:53.929307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.270 [2024-11-17 09:36:53.929341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.270 qpair failed and we were unable to recover it. 00:36:49.270 [2024-11-17 09:36:53.929457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.270 [2024-11-17 09:36:53.929492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.270 qpair failed and we were unable to recover it. 
00:36:49.270 [2024-11-17 09:36:53.929629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.270 [2024-11-17 09:36:53.929664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.270 qpair failed and we were unable to recover it. 00:36:49.270 [2024-11-17 09:36:53.929830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.270 [2024-11-17 09:36:53.929877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.270 qpair failed and we were unable to recover it. 00:36:49.270 [2024-11-17 09:36:53.929993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.270 [2024-11-17 09:36:53.930028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.270 qpair failed and we were unable to recover it. 00:36:49.270 [2024-11-17 09:36:53.930140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.270 [2024-11-17 09:36:53.930174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.270 qpair failed and we were unable to recover it. 00:36:49.270 [2024-11-17 09:36:53.930301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.270 [2024-11-17 09:36:53.930335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.270 qpair failed and we were unable to recover it. 00:36:49.270 [2024-11-17 09:36:53.930475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.270 [2024-11-17 09:36:53.930524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.270 qpair failed and we were unable to recover it. 00:36:49.270 [2024-11-17 09:36:53.930711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.270 [2024-11-17 09:36:53.930747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.270 qpair failed and we were unable to recover it. 00:36:49.270 [2024-11-17 09:36:53.930879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.270 [2024-11-17 09:36:53.930912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.270 qpair failed and we were unable to recover it. 00:36:49.270 [2024-11-17 09:36:53.931042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.270 [2024-11-17 09:36:53.931076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.270 qpair failed and we were unable to recover it. 00:36:49.270 [2024-11-17 09:36:53.931233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.270 [2024-11-17 09:36:53.931271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.270 qpair failed and we were unable to recover it. 
00:36:49.270 [2024-11-17 09:36:53.931460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.270 [2024-11-17 09:36:53.931495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.270 qpair failed and we were unable to recover it. 00:36:49.270 [2024-11-17 09:36:53.931660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.270 [2024-11-17 09:36:53.931696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.270 qpair failed and we were unable to recover it. 00:36:49.270 [2024-11-17 09:36:53.931822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.270 [2024-11-17 09:36:53.931856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.270 qpair failed and we were unable to recover it. 00:36:49.270 [2024-11-17 09:36:53.931995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.270 [2024-11-17 09:36:53.932029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.270 qpair failed and we were unable to recover it. 00:36:49.270 [2024-11-17 09:36:53.932193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.270 [2024-11-17 09:36:53.932235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.270 qpair failed and we were unable to recover it. 00:36:49.270 [2024-11-17 09:36:53.932381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.270 [2024-11-17 09:36:53.932415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.270 qpair failed and we were unable to recover it. 00:36:49.270 [2024-11-17 09:36:53.932569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.270 [2024-11-17 09:36:53.932617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.270 qpair failed and we were unable to recover it. 00:36:49.270 [2024-11-17 09:36:53.932785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.270 [2024-11-17 09:36:53.932821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.270 qpair failed and we were unable to recover it. 00:36:49.270 [2024-11-17 09:36:53.932947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.270 [2024-11-17 09:36:53.932991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.270 qpair failed and we were unable to recover it. 00:36:49.270 [2024-11-17 09:36:53.933191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.271 [2024-11-17 09:36:53.933257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.271 qpair failed and we were unable to recover it. 
00:36:49.271 [2024-11-17 09:36:53.933441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.271 [2024-11-17 09:36:53.933490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.271 qpair failed and we were unable to recover it. 00:36:49.271 [2024-11-17 09:36:53.933636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.271 [2024-11-17 09:36:53.933672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.271 qpair failed and we were unable to recover it. 00:36:49.271 [2024-11-17 09:36:53.933818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.271 [2024-11-17 09:36:53.933869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.271 qpair failed and we were unable to recover it. 00:36:49.271 [2024-11-17 09:36:53.934001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.271 [2024-11-17 09:36:53.934037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.271 qpair failed and we were unable to recover it. 00:36:49.271 [2024-11-17 09:36:53.934177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.271 [2024-11-17 09:36:53.934213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.271 qpair failed and we were unable to recover it. 00:36:49.271 [2024-11-17 09:36:53.934427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.271 [2024-11-17 09:36:53.934476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.271 qpair failed and we were unable to recover it. 00:36:49.271 [2024-11-17 09:36:53.934655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.271 [2024-11-17 09:36:53.934703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.271 qpair failed and we were unable to recover it. 00:36:49.271 [2024-11-17 09:36:53.934864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.271 [2024-11-17 09:36:53.934903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.271 qpair failed and we were unable to recover it. 00:36:49.271 [2024-11-17 09:36:53.935079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.271 [2024-11-17 09:36:53.935129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.271 qpair failed and we were unable to recover it. 00:36:49.271 [2024-11-17 09:36:53.935270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.271 [2024-11-17 09:36:53.935304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.271 qpair failed and we were unable to recover it. 
00:36:49.271 [2024-11-17 09:36:53.935467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.271 [2024-11-17 09:36:53.935501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.271 qpair failed and we were unable to recover it. 00:36:49.271 [2024-11-17 09:36:53.935613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.271 [2024-11-17 09:36:53.935649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.271 qpair failed and we were unable to recover it. 00:36:49.271 [2024-11-17 09:36:53.935772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.271 [2024-11-17 09:36:53.935810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.271 qpair failed and we were unable to recover it. 00:36:49.271 [2024-11-17 09:36:53.936023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.271 [2024-11-17 09:36:53.936062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.271 qpair failed and we were unable to recover it. 00:36:49.271 [2024-11-17 09:36:53.936218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.271 [2024-11-17 09:36:53.936257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.271 qpair failed and we were unable to recover it. 00:36:49.271 [2024-11-17 09:36:53.936397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.271 [2024-11-17 09:36:53.936448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.271 qpair failed and we were unable to recover it. 00:36:49.271 [2024-11-17 09:36:53.936561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.271 [2024-11-17 09:36:53.936594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.271 qpair failed and we were unable to recover it. 00:36:49.271 [2024-11-17 09:36:53.936745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.271 [2024-11-17 09:36:53.936785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.271 qpair failed and we were unable to recover it. 00:36:49.271 [2024-11-17 09:36:53.936962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.271 [2024-11-17 09:36:53.936999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.271 qpair failed and we were unable to recover it. 00:36:49.271 [2024-11-17 09:36:53.937169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.271 [2024-11-17 09:36:53.937206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.271 qpair failed and we were unable to recover it. 
00:36:49.271 [2024-11-17 09:36:53.937353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.271 [2024-11-17 09:36:53.937420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.271 qpair failed and we were unable to recover it. 00:36:49.271 [2024-11-17 09:36:53.937563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.271 [2024-11-17 09:36:53.937596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.271 qpair failed and we were unable to recover it. 00:36:49.271 [2024-11-17 09:36:53.937750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.271 [2024-11-17 09:36:53.937788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.271 qpair failed and we were unable to recover it. 00:36:49.271 [2024-11-17 09:36:53.937930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.271 [2024-11-17 09:36:53.937967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.271 qpair failed and we were unable to recover it. 00:36:49.271 [2024-11-17 09:36:53.938117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.271 [2024-11-17 09:36:53.938168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.271 qpair failed and we were unable to recover it. 00:36:49.271 [2024-11-17 09:36:53.938333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.271 [2024-11-17 09:36:53.938381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.271 qpair failed and we were unable to recover it. 00:36:49.271 [2024-11-17 09:36:53.938523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.271 [2024-11-17 09:36:53.938572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.271 qpair failed and we were unable to recover it. 00:36:49.271 [2024-11-17 09:36:53.938760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.271 [2024-11-17 09:36:53.938797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.271 qpair failed and we were unable to recover it. 00:36:49.271 [2024-11-17 09:36:53.938944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.271 [2024-11-17 09:36:53.938980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.271 qpair failed and we were unable to recover it. 00:36:49.271 [2024-11-17 09:36:53.939166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.271 [2024-11-17 09:36:53.939204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.271 qpair failed and we were unable to recover it. 
00:36:49.271 [2024-11-17 09:36:53.939352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.271 [2024-11-17 09:36:53.939416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.271 qpair failed and we were unable to recover it. 00:36:49.271 [2024-11-17 09:36:53.939560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.272 [2024-11-17 09:36:53.939594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.272 qpair failed and we were unable to recover it. 00:36:49.272 [2024-11-17 09:36:53.939698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.272 [2024-11-17 09:36:53.939732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.272 qpair failed and we were unable to recover it. 00:36:49.272 [2024-11-17 09:36:53.939859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.272 [2024-11-17 09:36:53.939911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.272 qpair failed and we were unable to recover it. 00:36:49.272 [2024-11-17 09:36:53.940096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.272 [2024-11-17 09:36:53.940162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.272 qpair failed and we were unable to recover it. 00:36:49.272 [2024-11-17 09:36:53.940341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.272 [2024-11-17 09:36:53.940380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.272 qpair failed and we were unable to recover it. 00:36:49.272 [2024-11-17 09:36:53.940513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.272 [2024-11-17 09:36:53.940546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.272 qpair failed and we were unable to recover it. 00:36:49.272 [2024-11-17 09:36:53.940691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.272 [2024-11-17 09:36:53.940728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.272 qpair failed and we were unable to recover it. 00:36:49.272 [2024-11-17 09:36:53.940892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.272 [2024-11-17 09:36:53.940937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.272 qpair failed and we were unable to recover it. 00:36:49.272 [2024-11-17 09:36:53.941088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.272 [2024-11-17 09:36:53.941125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.272 qpair failed and we were unable to recover it. 
00:36:49.272 [2024-11-17 09:36:53.941275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.272 [2024-11-17 09:36:53.941314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.272 qpair failed and we were unable to recover it. 00:36:49.272 [2024-11-17 09:36:53.941490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.272 [2024-11-17 09:36:53.941526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.272 qpair failed and we were unable to recover it. 00:36:49.272 [2024-11-17 09:36:53.941655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.272 [2024-11-17 09:36:53.941693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.272 qpair failed and we were unable to recover it. 00:36:49.272 [2024-11-17 09:36:53.941810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.272 [2024-11-17 09:36:53.941848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.272 qpair failed and we were unable to recover it. 00:36:49.272 [2024-11-17 09:36:53.941974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.272 [2024-11-17 09:36:53.942011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.272 qpair failed and we were unable to recover it. 00:36:49.272 [2024-11-17 09:36:53.942137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.272 [2024-11-17 09:36:53.942175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.272 qpair failed and we were unable to recover it. 00:36:49.272 [2024-11-17 09:36:53.942316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.272 [2024-11-17 09:36:53.942354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.272 qpair failed and we were unable to recover it. 00:36:49.272 [2024-11-17 09:36:53.942563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.272 [2024-11-17 09:36:53.942601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.272 qpair failed and we were unable to recover it. 00:36:49.272 [2024-11-17 09:36:53.942751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.272 [2024-11-17 09:36:53.942789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.272 qpair failed and we were unable to recover it. 00:36:49.272 [2024-11-17 09:36:53.942973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.272 [2024-11-17 09:36:53.943011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.272 qpair failed and we were unable to recover it. 
00:36:49.272 [2024-11-17 09:36:53.943159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.272 [2024-11-17 09:36:53.943197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.272 qpair failed and we were unable to recover it. 00:36:49.272 [2024-11-17 09:36:53.943331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.272 [2024-11-17 09:36:53.943365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.272 qpair failed and we were unable to recover it. 00:36:49.272 [2024-11-17 09:36:53.943515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.272 [2024-11-17 09:36:53.943549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.272 qpair failed and we were unable to recover it. 00:36:49.272 [2024-11-17 09:36:53.943709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.272 [2024-11-17 09:36:53.943747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.272 qpair failed and we were unable to recover it. 00:36:49.272 [2024-11-17 09:36:53.943893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.272 [2024-11-17 09:36:53.943932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.272 qpair failed and we were unable to recover it. 00:36:49.272 [2024-11-17 09:36:53.944074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.272 [2024-11-17 09:36:53.944112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.272 qpair failed and we were unable to recover it. 00:36:49.272 [2024-11-17 09:36:53.944281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.272 [2024-11-17 09:36:53.944315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.272 qpair failed and we were unable to recover it. 00:36:49.272 [2024-11-17 09:36:53.944448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.272 [2024-11-17 09:36:53.944497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.272 qpair failed and we were unable to recover it. 00:36:49.272 [2024-11-17 09:36:53.944648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.272 [2024-11-17 09:36:53.944684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.272 qpair failed and we were unable to recover it. 00:36:49.272 [2024-11-17 09:36:53.944878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.272 [2024-11-17 09:36:53.944916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.272 qpair failed and we were unable to recover it. 
00:36:49.272 [2024-11-17 09:36:53.945060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.272 [2024-11-17 09:36:53.945098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.272 qpair failed and we were unable to recover it. 00:36:49.272 [2024-11-17 09:36:53.945257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.272 [2024-11-17 09:36:53.945322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.272 qpair failed and we were unable to recover it. 00:36:49.272 [2024-11-17 09:36:53.945483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.272 [2024-11-17 09:36:53.945526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.272 qpair failed and we were unable to recover it. 00:36:49.272 [2024-11-17 09:36:53.945672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.273 [2024-11-17 09:36:53.945711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.273 qpair failed and we were unable to recover it. 00:36:49.273 [2024-11-17 09:36:53.945899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.273 [2024-11-17 09:36:53.945961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.273 qpair failed and we were unable to recover it. 00:36:49.273 [2024-11-17 09:36:53.946211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.273 [2024-11-17 09:36:53.946249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.273 qpair failed and we were unable to recover it. 00:36:49.273 [2024-11-17 09:36:53.946404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.273 [2024-11-17 09:36:53.946438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.273 qpair failed and we were unable to recover it. 00:36:49.273 [2024-11-17 09:36:53.946547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.273 [2024-11-17 09:36:53.946581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.273 qpair failed and we were unable to recover it. 00:36:49.273 [2024-11-17 09:36:53.946788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.273 [2024-11-17 09:36:53.946842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.273 qpair failed and we were unable to recover it. 00:36:49.273 [2024-11-17 09:36:53.947075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.273 [2024-11-17 09:36:53.947127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.273 qpair failed and we were unable to recover it. 
00:36:49.273 [2024-11-17 09:36:53.947267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.273 [2024-11-17 09:36:53.947302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.273 qpair failed and we were unable to recover it. 00:36:49.273 [2024-11-17 09:36:53.947482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.273 [2024-11-17 09:36:53.947518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.273 qpair failed and we were unable to recover it. 00:36:49.273 [2024-11-17 09:36:53.947630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.273 [2024-11-17 09:36:53.947687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.273 qpair failed and we were unable to recover it. 00:36:49.273 [2024-11-17 09:36:53.947825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.273 [2024-11-17 09:36:53.947863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.273 qpair failed and we were unable to recover it. 00:36:49.273 [2024-11-17 09:36:53.948043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.273 [2024-11-17 09:36:53.948081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.273 qpair failed and we were unable to recover it. 00:36:49.273 [2024-11-17 09:36:53.948211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.273 [2024-11-17 09:36:53.948244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.273 qpair failed and we were unable to recover it. 00:36:49.273 [2024-11-17 09:36:53.948378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.273 [2024-11-17 09:36:53.948413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.273 qpair failed and we were unable to recover it. 00:36:49.273 [2024-11-17 09:36:53.948551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.273 [2024-11-17 09:36:53.948585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.273 qpair failed and we were unable to recover it. 00:36:49.273 [2024-11-17 09:36:53.948774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.273 [2024-11-17 09:36:53.948814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.273 qpair failed and we were unable to recover it. 00:36:49.273 [2024-11-17 09:36:53.948994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.273 [2024-11-17 09:36:53.949032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.273 qpair failed and we were unable to recover it. 
00:36:49.273 [2024-11-17 09:36:53.949147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.273 [2024-11-17 09:36:53.949184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.273 qpair failed and we were unable to recover it. 00:36:49.273 [2024-11-17 09:36:53.949364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.273 [2024-11-17 09:36:53.949406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.273 qpair failed and we were unable to recover it. 00:36:49.273 [2024-11-17 09:36:53.949549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.273 [2024-11-17 09:36:53.949584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.273 qpair failed and we were unable to recover it. 00:36:49.273 [2024-11-17 09:36:53.949750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.273 [2024-11-17 09:36:53.949784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.273 qpair failed and we were unable to recover it. 00:36:49.273 [2024-11-17 09:36:53.949938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.273 [2024-11-17 09:36:53.949972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.273 qpair failed and we were unable to recover it. 00:36:49.273 [2024-11-17 09:36:53.950104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.273 [2024-11-17 09:36:53.950143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.273 qpair failed and we were unable to recover it. 00:36:49.273 [2024-11-17 09:36:53.950297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.273 [2024-11-17 09:36:53.950334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.273 qpair failed and we were unable to recover it. 00:36:49.273 [2024-11-17 09:36:53.950513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.273 [2024-11-17 09:36:53.950556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.273 qpair failed and we were unable to recover it. 00:36:49.273 [2024-11-17 09:36:53.950743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.273 [2024-11-17 09:36:53.950796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.273 qpair failed and we were unable to recover it. 00:36:49.273 [2024-11-17 09:36:53.951089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.273 [2024-11-17 09:36:53.951150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.273 qpair failed and we were unable to recover it. 
00:36:49.273 [2024-11-17 09:36:53.951303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.273 [2024-11-17 09:36:53.951341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.273 qpair failed and we were unable to recover it. 00:36:49.273 [2024-11-17 09:36:53.951480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.273 [2024-11-17 09:36:53.951514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.273 qpair failed and we were unable to recover it. 00:36:49.273 [2024-11-17 09:36:53.951669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.273 [2024-11-17 09:36:53.951718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.273 qpair failed and we were unable to recover it. 00:36:49.273 [2024-11-17 09:36:53.951850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.273 [2024-11-17 09:36:53.951887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.273 qpair failed and we were unable to recover it. 00:36:49.273 [2024-11-17 09:36:53.952007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.273 [2024-11-17 09:36:53.952044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.273 qpair failed and we were unable to recover it. 00:36:49.273 [2024-11-17 09:36:53.952297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.273 [2024-11-17 09:36:53.952353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.273 qpair failed and we were unable to recover it. 00:36:49.273 [2024-11-17 09:36:53.952515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.273 [2024-11-17 09:36:53.952576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.273 qpair failed and we were unable to recover it. 00:36:49.273 [2024-11-17 09:36:53.952717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.273 [2024-11-17 09:36:53.952753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.273 qpair failed and we were unable to recover it. 00:36:49.273 [2024-11-17 09:36:53.952860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.274 [2024-11-17 09:36:53.952894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.274 qpair failed and we were unable to recover it. 00:36:49.274 [2024-11-17 09:36:53.953049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.274 [2024-11-17 09:36:53.953085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.274 qpair failed and we were unable to recover it. 
00:36:49.274 [2024-11-17 09:36:53.953200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.274 [2024-11-17 09:36:53.953235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.274 qpair failed and we were unable to recover it. 00:36:49.274 [2024-11-17 09:36:53.953381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.274 [2024-11-17 09:36:53.953432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.274 qpair failed and we were unable to recover it. 00:36:49.274 [2024-11-17 09:36:53.953559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.274 [2024-11-17 09:36:53.953608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.274 qpair failed and we were unable to recover it. 00:36:49.274 [2024-11-17 09:36:53.953752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.274 [2024-11-17 09:36:53.953788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.274 qpair failed and we were unable to recover it. 00:36:49.274 [2024-11-17 09:36:53.953924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.274 [2024-11-17 09:36:53.953958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.274 qpair failed and we were unable to recover it. 00:36:49.274 [2024-11-17 09:36:53.954099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.274 [2024-11-17 09:36:53.954134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.274 qpair failed and we were unable to recover it. 00:36:49.274 [2024-11-17 09:36:53.954289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.274 [2024-11-17 09:36:53.954328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.274 qpair failed and we were unable to recover it. 00:36:49.274 [2024-11-17 09:36:53.954509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.274 [2024-11-17 09:36:53.954558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.274 qpair failed and we were unable to recover it. 00:36:49.274 [2024-11-17 09:36:53.954695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.274 [2024-11-17 09:36:53.954729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.274 qpair failed and we were unable to recover it. 00:36:49.274 [2024-11-17 09:36:53.954893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.274 [2024-11-17 09:36:53.954927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.274 qpair failed and we were unable to recover it. 
00:36:49.274 [2024-11-17 09:36:53.955030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.274 [2024-11-17 09:36:53.955080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.274 qpair failed and we were unable to recover it. 00:36:49.274 [2024-11-17 09:36:53.955222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.274 [2024-11-17 09:36:53.955256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.274 qpair failed and we were unable to recover it. 00:36:49.274 [2024-11-17 09:36:53.955404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.274 [2024-11-17 09:36:53.955444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.274 qpair failed and we were unable to recover it. 00:36:49.274 [2024-11-17 09:36:53.955557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.274 [2024-11-17 09:36:53.955592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.274 qpair failed and we were unable to recover it. 00:36:49.274 [2024-11-17 09:36:53.955730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.274 [2024-11-17 09:36:53.955764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.274 qpair failed and we were unable to recover it. 00:36:49.274 [2024-11-17 09:36:53.955958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.274 [2024-11-17 09:36:53.955993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.274 qpair failed and we were unable to recover it. 00:36:49.274 [2024-11-17 09:36:53.956135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.274 [2024-11-17 09:36:53.956171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.274 qpair failed and we were unable to recover it. 00:36:49.274 [2024-11-17 09:36:53.956302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.274 [2024-11-17 09:36:53.956352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.274 qpair failed and we were unable to recover it. 00:36:49.274 [2024-11-17 09:36:53.956493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.274 [2024-11-17 09:36:53.956532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.274 qpair failed and we were unable to recover it. 00:36:49.274 [2024-11-17 09:36:53.956668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.274 [2024-11-17 09:36:53.956703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.274 qpair failed and we were unable to recover it. 
00:36:49.274 [2024-11-17 09:36:53.956861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.274 [2024-11-17 09:36:53.956894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.274 qpair failed and we were unable to recover it. 00:36:49.274 [2024-11-17 09:36:53.957013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.274 [2024-11-17 09:36:53.957050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.274 qpair failed and we were unable to recover it. 00:36:49.274 [2024-11-17 09:36:53.957192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.274 [2024-11-17 09:36:53.957229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.274 qpair failed and we were unable to recover it. 00:36:49.274 [2024-11-17 09:36:53.957415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.274 [2024-11-17 09:36:53.957464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.274 qpair failed and we were unable to recover it. 00:36:49.274 [2024-11-17 09:36:53.957608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.274 [2024-11-17 09:36:53.957644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.274 qpair failed and we were unable to recover it. 00:36:49.274 [2024-11-17 09:36:53.957811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.274 [2024-11-17 09:36:53.957846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.274 qpair failed and we were unable to recover it. 00:36:49.274 [2024-11-17 09:36:53.958009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.275 [2024-11-17 09:36:53.958042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.275 qpair failed and we were unable to recover it. 00:36:49.275 [2024-11-17 09:36:53.958262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.275 [2024-11-17 09:36:53.958296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.275 qpair failed and we were unable to recover it. 00:36:49.275 [2024-11-17 09:36:53.958457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.275 [2024-11-17 09:36:53.958502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.275 qpair failed and we were unable to recover it. 00:36:49.275 [2024-11-17 09:36:53.958612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.275 [2024-11-17 09:36:53.958647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.275 qpair failed and we were unable to recover it. 
00:36:49.275 [2024-11-17 09:36:53.958814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.275 [2024-11-17 09:36:53.958849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.275 qpair failed and we were unable to recover it. 00:36:49.275 [2024-11-17 09:36:53.959012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.275 [2024-11-17 09:36:53.959047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.275 qpair failed and we were unable to recover it. 00:36:49.275 [2024-11-17 09:36:53.959196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.275 [2024-11-17 09:36:53.959233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.275 qpair failed and we were unable to recover it. 00:36:49.275 [2024-11-17 09:36:53.959447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.275 [2024-11-17 09:36:53.959481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.275 qpair failed and we were unable to recover it. 00:36:49.275 [2024-11-17 09:36:53.959640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.275 [2024-11-17 09:36:53.959673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.275 qpair failed and we were unable to recover it. 00:36:49.275 [2024-11-17 09:36:53.959804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.275 [2024-11-17 09:36:53.959838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.275 qpair failed and we were unable to recover it. 00:36:49.275 [2024-11-17 09:36:53.960022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.275 [2024-11-17 09:36:53.960058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.275 qpair failed and we were unable to recover it. 00:36:49.275 [2024-11-17 09:36:53.960177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.275 [2024-11-17 09:36:53.960214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.275 qpair failed and we were unable to recover it. 00:36:49.275 [2024-11-17 09:36:53.960354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.275 [2024-11-17 09:36:53.960399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.275 qpair failed and we were unable to recover it. 00:36:49.275 [2024-11-17 09:36:53.960554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.275 [2024-11-17 09:36:53.960587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.275 qpair failed and we were unable to recover it. 
00:36:49.275 [2024-11-17 09:36:53.960706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.275 [2024-11-17 09:36:53.960744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.275 qpair failed and we were unable to recover it. 00:36:49.275 [2024-11-17 09:36:53.960864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.275 [2024-11-17 09:36:53.960917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.275 qpair failed and we were unable to recover it. 00:36:49.275 [2024-11-17 09:36:53.961058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.275 [2024-11-17 09:36:53.961092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.275 qpair failed and we were unable to recover it. 00:36:49.275 [2024-11-17 09:36:53.961304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.275 [2024-11-17 09:36:53.961338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.275 qpair failed and we were unable to recover it. 00:36:49.275 [2024-11-17 09:36:53.961493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.275 [2024-11-17 09:36:53.961541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.275 qpair failed and we were unable to recover it. 00:36:49.275 [2024-11-17 09:36:53.961676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.275 [2024-11-17 09:36:53.961719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.275 qpair failed and we were unable to recover it. 00:36:49.275 [2024-11-17 09:36:53.961877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.275 [2024-11-17 09:36:53.961931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.275 qpair failed and we were unable to recover it. 00:36:49.275 [2024-11-17 09:36:53.962098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.275 [2024-11-17 09:36:53.962150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.275 qpair failed and we were unable to recover it. 00:36:49.275 [2024-11-17 09:36:53.962310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.275 [2024-11-17 09:36:53.962347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.275 qpair failed and we were unable to recover it. 00:36:49.275 [2024-11-17 09:36:53.962488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.275 [2024-11-17 09:36:53.962522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.275 qpair failed and we were unable to recover it. 
00:36:49.275 [2024-11-17 09:36:53.962657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.275 [2024-11-17 09:36:53.962691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.275 qpair failed and we were unable to recover it. 00:36:49.275 [2024-11-17 09:36:53.962824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.275 [2024-11-17 09:36:53.962875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.275 qpair failed and we were unable to recover it. 00:36:49.275 [2024-11-17 09:36:53.962989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.275 [2024-11-17 09:36:53.963026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.275 qpair failed and we were unable to recover it. 00:36:49.275 [2024-11-17 09:36:53.963200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.275 [2024-11-17 09:36:53.963237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.275 qpair failed and we were unable to recover it. 00:36:49.275 [2024-11-17 09:36:53.963401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.275 [2024-11-17 09:36:53.963434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.275 qpair failed and we were unable to recover it. 00:36:49.275 [2024-11-17 09:36:53.963551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.275 [2024-11-17 09:36:53.963602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.275 qpair failed and we were unable to recover it. 00:36:49.275 [2024-11-17 09:36:53.963757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.275 [2024-11-17 09:36:53.963809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.275 qpair failed and we were unable to recover it. 00:36:49.275 [2024-11-17 09:36:53.963952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.275 [2024-11-17 09:36:53.963988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.275 qpair failed and we were unable to recover it. 00:36:49.275 [2024-11-17 09:36:53.964126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.275 [2024-11-17 09:36:53.964168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.275 qpair failed and we were unable to recover it. 00:36:49.275 [2024-11-17 09:36:53.964327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.275 [2024-11-17 09:36:53.964377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.275 qpair failed and we were unable to recover it. 
00:36:49.275 [2024-11-17 09:36:53.964554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.276 [2024-11-17 09:36:53.964602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.276 qpair failed and we were unable to recover it. 00:36:49.276 [2024-11-17 09:36:53.964875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.276 [2024-11-17 09:36:53.964912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.276 qpair failed and we were unable to recover it. 00:36:49.276 [2024-11-17 09:36:53.965076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.276 [2024-11-17 09:36:53.965111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.276 qpair failed and we were unable to recover it. 00:36:49.276 [2024-11-17 09:36:53.965272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.276 [2024-11-17 09:36:53.965309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.276 qpair failed and we were unable to recover it. 00:36:49.276 [2024-11-17 09:36:53.965475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.276 [2024-11-17 09:36:53.965510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.276 qpair failed and we were unable to recover it. 00:36:49.276 [2024-11-17 09:36:53.965647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.276 [2024-11-17 09:36:53.965681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.276 qpair failed and we were unable to recover it. 00:36:49.276 [2024-11-17 09:36:53.965809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.276 [2024-11-17 09:36:53.965842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.276 qpair failed and we were unable to recover it. 00:36:49.276 [2024-11-17 09:36:53.965991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.276 [2024-11-17 09:36:53.966028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.276 qpair failed and we were unable to recover it. 00:36:49.276 [2024-11-17 09:36:53.966192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.276 [2024-11-17 09:36:53.966228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.276 qpair failed and we were unable to recover it. 00:36:49.276 [2024-11-17 09:36:53.966428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.276 [2024-11-17 09:36:53.966477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.276 qpair failed and we were unable to recover it. 
00:36:49.276 [2024-11-17 09:36:53.966610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.276 [2024-11-17 09:36:53.966650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.276 qpair failed and we were unable to recover it. 00:36:49.276 [2024-11-17 09:36:53.966801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.276 [2024-11-17 09:36:53.966858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.276 qpair failed and we were unable to recover it. 00:36:49.276 [2024-11-17 09:36:53.966977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.276 [2024-11-17 09:36:53.967012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.276 qpair failed and we were unable to recover it. 00:36:49.276 [2024-11-17 09:36:53.967169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.276 [2024-11-17 09:36:53.967208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.276 qpair failed and we were unable to recover it. 00:36:49.276 [2024-11-17 09:36:53.967381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.276 [2024-11-17 09:36:53.967418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.276 qpair failed and we were unable to recover it. 00:36:49.276 [2024-11-17 09:36:53.967561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.276 [2024-11-17 09:36:53.967595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.276 qpair failed and we were unable to recover it. 00:36:49.276 [2024-11-17 09:36:53.967753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.276 [2024-11-17 09:36:53.967792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.276 qpair failed and we were unable to recover it. 00:36:49.276 [2024-11-17 09:36:53.967966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.276 [2024-11-17 09:36:53.968003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.276 qpair failed and we were unable to recover it. 00:36:49.276 [2024-11-17 09:36:53.968192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.276 [2024-11-17 09:36:53.968245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.276 qpair failed and we were unable to recover it. 00:36:49.276 [2024-11-17 09:36:53.968412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.276 [2024-11-17 09:36:53.968448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.276 qpair failed and we were unable to recover it. 
00:36:49.276 [2024-11-17 09:36:53.968581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.276 [2024-11-17 09:36:53.968616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.276 qpair failed and we were unable to recover it. 00:36:49.276 [2024-11-17 09:36:53.968756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.276 [2024-11-17 09:36:53.968790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.276 qpair failed and we were unable to recover it. 00:36:49.276 [2024-11-17 09:36:53.969013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.276 [2024-11-17 09:36:53.969071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.276 qpair failed and we were unable to recover it. 00:36:49.276 [2024-11-17 09:36:53.969213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.276 [2024-11-17 09:36:53.969265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.276 qpair failed and we were unable to recover it. 00:36:49.276 [2024-11-17 09:36:53.969374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.276 [2024-11-17 09:36:53.969409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.276 qpair failed and we were unable to recover it. 00:36:49.276 [2024-11-17 09:36:53.969555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.276 [2024-11-17 09:36:53.969597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.276 qpair failed and we were unable to recover it. 00:36:49.276 [2024-11-17 09:36:53.969783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.276 [2024-11-17 09:36:53.969840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.276 qpair failed and we were unable to recover it. 00:36:49.276 [2024-11-17 09:36:53.970012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.276 [2024-11-17 09:36:53.970050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.276 qpair failed and we were unable to recover it. 00:36:49.276 [2024-11-17 09:36:53.970206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.276 [2024-11-17 09:36:53.970241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.276 qpair failed and we were unable to recover it. 00:36:49.276 [2024-11-17 09:36:53.970407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.276 [2024-11-17 09:36:53.970455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.276 qpair failed and we were unable to recover it. 
00:36:49.276 [2024-11-17 09:36:53.970616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.276 [2024-11-17 09:36:53.970656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.276 qpair failed and we were unable to recover it. 00:36:49.276 [2024-11-17 09:36:53.970894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.276 [2024-11-17 09:36:53.970947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.276 qpair failed and we were unable to recover it. 00:36:49.276 [2024-11-17 09:36:53.971158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.277 [2024-11-17 09:36:53.971216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.277 qpair failed and we were unable to recover it. 00:36:49.277 [2024-11-17 09:36:53.971347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.277 [2024-11-17 09:36:53.971391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.277 qpair failed and we were unable to recover it. 00:36:49.277 [2024-11-17 09:36:53.971499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.277 [2024-11-17 09:36:53.971533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.277 qpair failed and we were unable to recover it. 00:36:49.277 [2024-11-17 09:36:53.971648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.277 [2024-11-17 09:36:53.971684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.277 qpair failed and we were unable to recover it. 00:36:49.277 [2024-11-17 09:36:53.971929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.277 [2024-11-17 09:36:53.971984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.277 qpair failed and we were unable to recover it. 00:36:49.277 [2024-11-17 09:36:53.972195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.277 [2024-11-17 09:36:53.972233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.277 qpair failed and we were unable to recover it. 00:36:49.277 [2024-11-17 09:36:53.972403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.277 [2024-11-17 09:36:53.972439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.277 qpair failed and we were unable to recover it. 00:36:49.277 [2024-11-17 09:36:53.972565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.277 [2024-11-17 09:36:53.972600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.277 qpair failed and we were unable to recover it. 
00:36:49.277 [2024-11-17 09:36:53.972742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.277 [2024-11-17 09:36:53.972776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.277 qpair failed and we were unable to recover it. 00:36:49.277 [2024-11-17 09:36:53.972937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.277 [2024-11-17 09:36:53.972971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.277 qpair failed and we were unable to recover it. 00:36:49.277 [2024-11-17 09:36:53.973124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.277 [2024-11-17 09:36:53.973162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.277 qpair failed and we were unable to recover it. 00:36:49.277 [2024-11-17 09:36:53.973316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.277 [2024-11-17 09:36:53.973355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.277 qpair failed and we were unable to recover it. 00:36:49.277 [2024-11-17 09:36:53.973539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.277 [2024-11-17 09:36:53.973588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.277 qpair failed and we were unable to recover it. 00:36:49.277 [2024-11-17 09:36:53.973783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.277 [2024-11-17 09:36:53.973849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.277 qpair failed and we were unable to recover it. 00:36:49.277 [2024-11-17 09:36:53.974067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.277 [2024-11-17 09:36:53.974101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.277 qpair failed and we were unable to recover it. 00:36:49.277 [2024-11-17 09:36:53.974269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.277 [2024-11-17 09:36:53.974306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.277 qpair failed and we were unable to recover it. 00:36:49.277 [2024-11-17 09:36:53.974472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.277 [2024-11-17 09:36:53.974507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.277 qpair failed and we were unable to recover it. 00:36:49.277 [2024-11-17 09:36:53.974659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.277 [2024-11-17 09:36:53.974692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.277 qpair failed and we were unable to recover it. 
00:36:49.277 [2024-11-17 09:36:53.974830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.277 [2024-11-17 09:36:53.974863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.277 qpair failed and we were unable to recover it. 00:36:49.277 [2024-11-17 09:36:53.975053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.277 [2024-11-17 09:36:53.975110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.277 qpair failed and we were unable to recover it. 00:36:49.277 [2024-11-17 09:36:53.975289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.277 [2024-11-17 09:36:53.975326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.277 qpair failed and we were unable to recover it. 00:36:49.277 [2024-11-17 09:36:53.975481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.277 [2024-11-17 09:36:53.975515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.277 qpair failed and we were unable to recover it. 00:36:49.277 [2024-11-17 09:36:53.975683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.277 [2024-11-17 09:36:53.975716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.277 qpair failed and we were unable to recover it. 00:36:49.277 [2024-11-17 09:36:53.975811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.277 [2024-11-17 09:36:53.975844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.277 qpair failed and we were unable to recover it. 00:36:49.277 [2024-11-17 09:36:53.975970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.277 [2024-11-17 09:36:53.976004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.277 qpair failed and we were unable to recover it. 00:36:49.277 [2024-11-17 09:36:53.976136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.277 [2024-11-17 09:36:53.976175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.277 qpair failed and we were unable to recover it. 00:36:49.277 [2024-11-17 09:36:53.976314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.277 [2024-11-17 09:36:53.976351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.277 qpair failed and we were unable to recover it. 00:36:49.277 [2024-11-17 09:36:53.976539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.277 [2024-11-17 09:36:53.976587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.277 qpair failed and we were unable to recover it. 
00:36:49.277 [2024-11-17 09:36:53.976772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.277 [2024-11-17 09:36:53.976826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.277 qpair failed and we were unable to recover it. 00:36:49.277 [2024-11-17 09:36:53.977017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.277 [2024-11-17 09:36:53.977052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.277 qpair failed and we were unable to recover it. 00:36:49.277 [2024-11-17 09:36:53.977204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.277 [2024-11-17 09:36:53.977239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.278 qpair failed and we were unable to recover it. 00:36:49.278 [2024-11-17 09:36:53.977365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.278 [2024-11-17 09:36:53.977409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.278 qpair failed and we were unable to recover it. 00:36:49.278 [2024-11-17 09:36:53.977536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.278 [2024-11-17 09:36:53.977571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.278 qpair failed and we were unable to recover it. 00:36:49.278 [2024-11-17 09:36:53.977723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.278 [2024-11-17 09:36:53.977767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.278 qpair failed and we were unable to recover it. 00:36:49.278 [2024-11-17 09:36:53.977988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.278 [2024-11-17 09:36:53.978048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.278 qpair failed and we were unable to recover it. 00:36:49.278 [2024-11-17 09:36:53.978186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.278 [2024-11-17 09:36:53.978223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.278 qpair failed and we were unable to recover it. 00:36:49.278 [2024-11-17 09:36:53.978413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.278 [2024-11-17 09:36:53.978447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.278 qpair failed and we were unable to recover it. 00:36:49.278 [2024-11-17 09:36:53.978557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.278 [2024-11-17 09:36:53.978590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.278 qpair failed and we were unable to recover it. 
00:36:49.278 [2024-11-17 09:36:53.978723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.278 [2024-11-17 09:36:53.978770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.278 qpair failed and we were unable to recover it. 00:36:49.278 [2024-11-17 09:36:53.978914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.278 [2024-11-17 09:36:53.978969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.278 qpair failed and we were unable to recover it. 00:36:49.278 [2024-11-17 09:36:53.979209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.278 [2024-11-17 09:36:53.979244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.278 qpair failed and we were unable to recover it. 00:36:49.278 [2024-11-17 09:36:53.979353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.278 [2024-11-17 09:36:53.979395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.278 qpair failed and we were unable to recover it. 00:36:49.278 [2024-11-17 09:36:53.979577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.278 [2024-11-17 09:36:53.979611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.278 qpair failed and we were unable to recover it. 00:36:49.278 [2024-11-17 09:36:53.979777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.278 [2024-11-17 09:36:53.979812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.278 qpair failed and we were unable to recover it. 00:36:49.278 [2024-11-17 09:36:53.979978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.278 [2024-11-17 09:36:53.980014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.278 qpair failed and we were unable to recover it. 00:36:49.278 [2024-11-17 09:36:53.980177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.278 [2024-11-17 09:36:53.980214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.278 qpair failed and we were unable to recover it. 00:36:49.278 [2024-11-17 09:36:53.980391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.278 [2024-11-17 09:36:53.980425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.278 qpair failed and we were unable to recover it. 00:36:49.278 [2024-11-17 09:36:53.980529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.278 [2024-11-17 09:36:53.980563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.278 qpair failed and we were unable to recover it. 
00:36:49.278 [2024-11-17 09:36:53.980696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.278 [2024-11-17 09:36:53.980730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:36:49.278 qpair failed and we were unable to recover it.
00:36:49.278-00:36:49.285 [2024-11-17 09:36:53.980 - 09:36:54.017] (condensed) The same three-message sequence -- posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=<id> with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." -- repeats back to back throughout this interval, cycling over tqpair values 0x6150001f2f00, 0x61500021ff00, and 0x6150001ffe80, all targeting addr=10.0.0.2, port=4420.
00:36:49.285 [2024-11-17 09:36:54.017308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.285 [2024-11-17 09:36:54.017342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.285 qpair failed and we were unable to recover it. 00:36:49.285 [2024-11-17 09:36:54.017483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.285 [2024-11-17 09:36:54.017517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.285 qpair failed and we were unable to recover it. 00:36:49.285 [2024-11-17 09:36:54.017656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.285 [2024-11-17 09:36:54.017692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.285 qpair failed and we were unable to recover it. 00:36:49.285 [2024-11-17 09:36:54.017826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.285 [2024-11-17 09:36:54.017858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.285 qpair failed and we were unable to recover it. 00:36:49.285 [2024-11-17 09:36:54.017998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.285 [2024-11-17 09:36:54.018030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.285 qpair failed and we were unable to recover it. 00:36:49.285 [2024-11-17 09:36:54.018164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.285 [2024-11-17 09:36:54.018197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.285 qpair failed and we were unable to recover it. 00:36:49.285 [2024-11-17 09:36:54.018295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.285 [2024-11-17 09:36:54.018328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.285 qpair failed and we were unable to recover it. 00:36:49.285 [2024-11-17 09:36:54.018499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.285 [2024-11-17 09:36:54.018531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.285 qpair failed and we were unable to recover it. 00:36:49.285 [2024-11-17 09:36:54.018636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.285 [2024-11-17 09:36:54.018667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.285 qpair failed and we were unable to recover it. 00:36:49.285 [2024-11-17 09:36:54.018825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.285 [2024-11-17 09:36:54.018858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.285 qpair failed and we were unable to recover it. 
00:36:49.285 [2024-11-17 09:36:54.018988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.285 [2024-11-17 09:36:54.019020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.285 qpair failed and we were unable to recover it. 00:36:49.285 [2024-11-17 09:36:54.019173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.285 [2024-11-17 09:36:54.019212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.285 qpair failed and we were unable to recover it. 00:36:49.285 [2024-11-17 09:36:54.019411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.285 [2024-11-17 09:36:54.019460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.285 qpair failed and we were unable to recover it. 00:36:49.285 [2024-11-17 09:36:54.019576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.285 [2024-11-17 09:36:54.019613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.285 qpair failed and we were unable to recover it. 00:36:49.285 [2024-11-17 09:36:54.019744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.285 [2024-11-17 09:36:54.019779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.285 qpair failed and we were unable to recover it. 00:36:49.285 [2024-11-17 09:36:54.019916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.285 [2024-11-17 09:36:54.019956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.285 qpair failed and we were unable to recover it. 00:36:49.285 [2024-11-17 09:36:54.020088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.285 [2024-11-17 09:36:54.020122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.285 qpair failed and we were unable to recover it. 00:36:49.285 [2024-11-17 09:36:54.020266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.285 [2024-11-17 09:36:54.020301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.285 qpair failed and we were unable to recover it. 00:36:49.285 [2024-11-17 09:36:54.020460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.285 [2024-11-17 09:36:54.020509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.285 qpair failed and we were unable to recover it. 00:36:49.285 [2024-11-17 09:36:54.020623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.285 [2024-11-17 09:36:54.020659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.285 qpair failed and we were unable to recover it. 
00:36:49.285 [2024-11-17 09:36:54.020801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.285 [2024-11-17 09:36:54.020836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.285 qpair failed and we were unable to recover it. 00:36:49.285 [2024-11-17 09:36:54.020971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.285 [2024-11-17 09:36:54.021006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.286 qpair failed and we were unable to recover it. 00:36:49.286 [2024-11-17 09:36:54.021168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.286 [2024-11-17 09:36:54.021201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.286 qpair failed and we were unable to recover it. 00:36:49.286 [2024-11-17 09:36:54.021332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.286 [2024-11-17 09:36:54.021373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.286 qpair failed and we were unable to recover it. 00:36:49.286 [2024-11-17 09:36:54.021514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.286 [2024-11-17 09:36:54.021548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.286 qpair failed and we were unable to recover it. 00:36:49.286 [2024-11-17 09:36:54.021651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.286 [2024-11-17 09:36:54.021685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.286 qpair failed and we were unable to recover it. 00:36:49.286 [2024-11-17 09:36:54.021793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.286 [2024-11-17 09:36:54.021827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.286 qpair failed and we were unable to recover it. 00:36:49.286 [2024-11-17 09:36:54.021932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.286 [2024-11-17 09:36:54.021967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.286 qpair failed and we were unable to recover it. 00:36:49.286 [2024-11-17 09:36:54.022130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.286 [2024-11-17 09:36:54.022164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.286 qpair failed and we were unable to recover it. 00:36:49.286 [2024-11-17 09:36:54.022313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.286 [2024-11-17 09:36:54.022362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.286 qpair failed and we were unable to recover it. 
00:36:49.286 [2024-11-17 09:36:54.022510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.286 [2024-11-17 09:36:54.022558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.286 qpair failed and we were unable to recover it. 00:36:49.286 [2024-11-17 09:36:54.022699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.286 [2024-11-17 09:36:54.022734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.286 qpair failed and we were unable to recover it. 00:36:49.286 [2024-11-17 09:36:54.022866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.286 [2024-11-17 09:36:54.022899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.286 qpair failed and we were unable to recover it. 00:36:49.286 [2024-11-17 09:36:54.023037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.286 [2024-11-17 09:36:54.023071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.286 qpair failed and we were unable to recover it. 00:36:49.286 [2024-11-17 09:36:54.023199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.286 [2024-11-17 09:36:54.023234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.286 qpair failed and we were unable to recover it. 00:36:49.286 [2024-11-17 09:36:54.023417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.286 [2024-11-17 09:36:54.023453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.286 qpair failed and we were unable to recover it. 00:36:49.286 [2024-11-17 09:36:54.023633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.286 [2024-11-17 09:36:54.023682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.286 qpair failed and we were unable to recover it. 00:36:49.286 [2024-11-17 09:36:54.023851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.286 [2024-11-17 09:36:54.023886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.286 qpair failed and we were unable to recover it. 00:36:49.286 [2024-11-17 09:36:54.024028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.286 [2024-11-17 09:36:54.024063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.286 qpair failed and we were unable to recover it. 00:36:49.286 [2024-11-17 09:36:54.024199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.286 [2024-11-17 09:36:54.024234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.286 qpair failed and we were unable to recover it. 
00:36:49.286 [2024-11-17 09:36:54.024373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.286 [2024-11-17 09:36:54.024408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.286 qpair failed and we were unable to recover it. 00:36:49.286 [2024-11-17 09:36:54.024556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.286 [2024-11-17 09:36:54.024591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.286 qpair failed and we were unable to recover it. 00:36:49.286 [2024-11-17 09:36:54.024697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.286 [2024-11-17 09:36:54.024730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.286 qpair failed and we were unable to recover it. 00:36:49.286 [2024-11-17 09:36:54.024893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.286 [2024-11-17 09:36:54.024925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.286 qpair failed and we were unable to recover it. 00:36:49.286 [2024-11-17 09:36:54.025053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.286 [2024-11-17 09:36:54.025086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.286 qpair failed and we were unable to recover it. 00:36:49.286 [2024-11-17 09:36:54.025193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.286 [2024-11-17 09:36:54.025228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.286 qpair failed and we were unable to recover it. 00:36:49.286 [2024-11-17 09:36:54.025325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.286 [2024-11-17 09:36:54.025358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.286 qpair failed and we were unable to recover it. 00:36:49.286 [2024-11-17 09:36:54.025541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.286 [2024-11-17 09:36:54.025590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.286 qpair failed and we were unable to recover it. 00:36:49.286 [2024-11-17 09:36:54.025751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.286 [2024-11-17 09:36:54.025786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.286 qpair failed and we were unable to recover it. 00:36:49.286 [2024-11-17 09:36:54.025920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.286 [2024-11-17 09:36:54.025953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.286 qpair failed and we were unable to recover it. 
00:36:49.286 [2024-11-17 09:36:54.026052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.286 [2024-11-17 09:36:54.026086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.286 qpair failed and we were unable to recover it. 00:36:49.286 [2024-11-17 09:36:54.026248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.286 [2024-11-17 09:36:54.026281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.286 qpair failed and we were unable to recover it. 00:36:49.286 [2024-11-17 09:36:54.026443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.286 [2024-11-17 09:36:54.026477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.286 qpair failed and we were unable to recover it. 00:36:49.286 [2024-11-17 09:36:54.026614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.286 [2024-11-17 09:36:54.026647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.286 qpair failed and we were unable to recover it. 00:36:49.286 [2024-11-17 09:36:54.026748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.286 [2024-11-17 09:36:54.026781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.286 qpair failed and we were unable to recover it. 00:36:49.286 [2024-11-17 09:36:54.026912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.286 [2024-11-17 09:36:54.026950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.286 qpair failed and we were unable to recover it. 00:36:49.286 [2024-11-17 09:36:54.027064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.287 [2024-11-17 09:36:54.027097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.287 qpair failed and we were unable to recover it. 00:36:49.287 [2024-11-17 09:36:54.027275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.287 [2024-11-17 09:36:54.027314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.287 qpair failed and we were unable to recover it. 00:36:49.287 [2024-11-17 09:36:54.027485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.287 [2024-11-17 09:36:54.027522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.287 qpair failed and we were unable to recover it. 00:36:49.287 [2024-11-17 09:36:54.027630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.287 [2024-11-17 09:36:54.027666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.287 qpair failed and we were unable to recover it. 
00:36:49.287 [2024-11-17 09:36:54.027834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.287 [2024-11-17 09:36:54.027868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.287 qpair failed and we were unable to recover it. 00:36:49.287 [2024-11-17 09:36:54.027997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.287 [2024-11-17 09:36:54.028029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.287 qpair failed and we were unable to recover it. 00:36:49.287 [2024-11-17 09:36:54.028179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.287 [2024-11-17 09:36:54.028216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.287 qpair failed and we were unable to recover it. 00:36:49.287 [2024-11-17 09:36:54.028363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.287 [2024-11-17 09:36:54.028421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.287 qpair failed and we were unable to recover it. 00:36:49.287 [2024-11-17 09:36:54.028566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.287 [2024-11-17 09:36:54.028601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.287 qpair failed and we were unable to recover it. 00:36:49.287 [2024-11-17 09:36:54.028783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.287 [2024-11-17 09:36:54.028831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.287 qpair failed and we were unable to recover it. 00:36:49.287 [2024-11-17 09:36:54.028979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.287 [2024-11-17 09:36:54.029016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.287 qpair failed and we were unable to recover it. 00:36:49.287 [2024-11-17 09:36:54.029176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.287 [2024-11-17 09:36:54.029211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.287 qpair failed and we were unable to recover it. 00:36:49.287 [2024-11-17 09:36:54.029344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.287 [2024-11-17 09:36:54.029398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.287 qpair failed and we were unable to recover it. 00:36:49.287 [2024-11-17 09:36:54.029539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.287 [2024-11-17 09:36:54.029573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.287 qpair failed and we were unable to recover it. 
00:36:49.287 [2024-11-17 09:36:54.029734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.287 [2024-11-17 09:36:54.029768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.287 qpair failed and we were unable to recover it. 00:36:49.287 [2024-11-17 09:36:54.029910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.287 [2024-11-17 09:36:54.029944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.287 qpair failed and we were unable to recover it. 00:36:49.287 [2024-11-17 09:36:54.030103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.287 [2024-11-17 09:36:54.030137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.287 qpair failed and we were unable to recover it. 00:36:49.287 [2024-11-17 09:36:54.030268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.287 [2024-11-17 09:36:54.030303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.287 qpair failed and we were unable to recover it. 00:36:49.287 [2024-11-17 09:36:54.030439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.287 [2024-11-17 09:36:54.030474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.287 qpair failed and we were unable to recover it. 00:36:49.287 [2024-11-17 09:36:54.030614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.287 [2024-11-17 09:36:54.030648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.287 qpair failed and we were unable to recover it. 00:36:49.287 [2024-11-17 09:36:54.030806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.287 [2024-11-17 09:36:54.030840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.287 qpair failed and we were unable to recover it. 00:36:49.287 [2024-11-17 09:36:54.030974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.287 [2024-11-17 09:36:54.031007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.287 qpair failed and we were unable to recover it. 00:36:49.287 [2024-11-17 09:36:54.031134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.287 [2024-11-17 09:36:54.031167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.287 qpair failed and we were unable to recover it. 00:36:49.287 [2024-11-17 09:36:54.031282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.287 [2024-11-17 09:36:54.031316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.287 qpair failed and we were unable to recover it. 
00:36:49.287 [2024-11-17 09:36:54.031455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.287 [2024-11-17 09:36:54.031489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.287 qpair failed and we were unable to recover it. 00:36:49.287 [2024-11-17 09:36:54.031645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.287 [2024-11-17 09:36:54.031694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.287 qpair failed and we were unable to recover it. 00:36:49.287 [2024-11-17 09:36:54.031861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.287 [2024-11-17 09:36:54.031906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.287 qpair failed and we were unable to recover it. 00:36:49.287 [2024-11-17 09:36:54.032139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.287 [2024-11-17 09:36:54.032193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.287 qpair failed and we were unable to recover it. 00:36:49.287 [2024-11-17 09:36:54.032321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.287 [2024-11-17 09:36:54.032361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.287 qpair failed and we were unable to recover it. 00:36:49.287 [2024-11-17 09:36:54.032493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.287 [2024-11-17 09:36:54.032528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.287 qpair failed and we were unable to recover it. 00:36:49.287 [2024-11-17 09:36:54.032690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.287 [2024-11-17 09:36:54.032724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.287 qpair failed and we were unable to recover it. 00:36:49.287 [2024-11-17 09:36:54.032860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.287 [2024-11-17 09:36:54.032895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.287 qpair failed and we were unable to recover it. 00:36:49.287 [2024-11-17 09:36:54.033008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.287 [2024-11-17 09:36:54.033045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.287 qpair failed and we were unable to recover it. 00:36:49.287 [2024-11-17 09:36:54.033268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.287 [2024-11-17 09:36:54.033306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.287 qpair failed and we were unable to recover it. 
00:36:49.287 [2024-11-17 09:36:54.033425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.287 [2024-11-17 09:36:54.033477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.287 qpair failed and we were unable to recover it. 00:36:49.288 [2024-11-17 09:36:54.033589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.288 [2024-11-17 09:36:54.033622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.288 qpair failed and we were unable to recover it. 00:36:49.288 [2024-11-17 09:36:54.033780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.288 [2024-11-17 09:36:54.033850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.288 qpair failed and we were unable to recover it. 00:36:49.288 [2024-11-17 09:36:54.034037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.288 [2024-11-17 09:36:54.034095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.288 qpair failed and we were unable to recover it. 00:36:49.288 [2024-11-17 09:36:54.034225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.288 [2024-11-17 09:36:54.034264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.288 qpair failed and we were unable to recover it. 00:36:49.288 [2024-11-17 09:36:54.034457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.288 [2024-11-17 09:36:54.034497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.288 qpair failed and we were unable to recover it. 00:36:49.288 [2024-11-17 09:36:54.034656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.288 [2024-11-17 09:36:54.034690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.288 qpair failed and we were unable to recover it. 00:36:49.288 [2024-11-17 09:36:54.034882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.288 [2024-11-17 09:36:54.034941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.288 qpair failed and we were unable to recover it. 00:36:49.288 [2024-11-17 09:36:54.035202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.288 [2024-11-17 09:36:54.035260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.288 qpair failed and we were unable to recover it. 00:36:49.288 [2024-11-17 09:36:54.035417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.288 [2024-11-17 09:36:54.035450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.288 qpair failed and we were unable to recover it. 
00:36:49.288 [2024-11-17 09:36:54.035566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.288 [2024-11-17 09:36:54.035602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.288 qpair failed and we were unable to recover it. 00:36:49.288 [2024-11-17 09:36:54.035751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.288 [2024-11-17 09:36:54.035789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.288 qpair failed and we were unable to recover it. 00:36:49.288 [2024-11-17 09:36:54.035913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.288 [2024-11-17 09:36:54.035950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.288 qpair failed and we were unable to recover it. 00:36:49.288 [2024-11-17 09:36:54.036159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.288 [2024-11-17 09:36:54.036223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.288 qpair failed and we were unable to recover it. 00:36:49.288 [2024-11-17 09:36:54.036358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.288 [2024-11-17 09:36:54.036400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.288 qpair failed and we were unable to recover it. 00:36:49.288 [2024-11-17 09:36:54.036510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.288 [2024-11-17 09:36:54.036543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.288 qpair failed and we were unable to recover it. 00:36:49.288 [2024-11-17 09:36:54.036674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.288 [2024-11-17 09:36:54.036707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.288 qpair failed and we were unable to recover it. 00:36:49.288 [2024-11-17 09:36:54.036857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.288 [2024-11-17 09:36:54.036893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.288 qpair failed and we were unable to recover it. 00:36:49.288 [2024-11-17 09:36:54.037037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.288 [2024-11-17 09:36:54.037074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.288 qpair failed and we were unable to recover it. 00:36:49.288 [2024-11-17 09:36:54.037255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.288 [2024-11-17 09:36:54.037294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.288 qpair failed and we were unable to recover it. 
00:36:49.288 [2024-11-17 09:36:54.037441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.288 [2024-11-17 09:36:54.037476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.288 qpair failed and we were unable to recover it. 00:36:49.288 [2024-11-17 09:36:54.037636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.288 [2024-11-17 09:36:54.037669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.288 qpair failed and we were unable to recover it. 00:36:49.288 [2024-11-17 09:36:54.037796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.288 [2024-11-17 09:36:54.037830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.288 qpair failed and we were unable to recover it. 00:36:49.288 [2024-11-17 09:36:54.037953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.288 [2024-11-17 09:36:54.037992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.288 qpair failed and we were unable to recover it. 00:36:49.288 [2024-11-17 09:36:54.038112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.288 [2024-11-17 09:36:54.038160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.288 qpair failed and we were unable to recover it. 00:36:49.288 [2024-11-17 09:36:54.038353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.288 [2024-11-17 09:36:54.038395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.288 qpair failed and we were unable to recover it. 00:36:49.288 [2024-11-17 09:36:54.038523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.288 [2024-11-17 09:36:54.038557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.288 qpair failed and we were unable to recover it. 00:36:49.288 [2024-11-17 09:36:54.038703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.288 [2024-11-17 09:36:54.038751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.288 qpair failed and we were unable to recover it. 00:36:49.288 [2024-11-17 09:36:54.038922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.288 [2024-11-17 09:36:54.038958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.288 qpair failed and we were unable to recover it. 00:36:49.288 [2024-11-17 09:36:54.039086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.288 [2024-11-17 09:36:54.039125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.288 qpair failed and we were unable to recover it. 
00:36:49.288 [2024-11-17 09:36:54.039243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.288 [2024-11-17 09:36:54.039281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.288 qpair failed and we were unable to recover it. 00:36:49.288 [2024-11-17 09:36:54.039447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.288 [2024-11-17 09:36:54.039496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.288 qpair failed and we were unable to recover it. 00:36:49.288 [2024-11-17 09:36:54.039628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.288 [2024-11-17 09:36:54.039672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.288 qpair failed and we were unable to recover it. 00:36:49.289 [2024-11-17 09:36:54.039875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.289 [2024-11-17 09:36:54.039947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.289 qpair failed and we were unable to recover it. 00:36:49.289 [2024-11-17 09:36:54.040183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.289 [2024-11-17 09:36:54.040221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.289 qpair failed and we were unable to recover it. 00:36:49.289 [2024-11-17 09:36:54.040414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.289 [2024-11-17 09:36:54.040448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.289 qpair failed and we were unable to recover it. 00:36:49.289 [2024-11-17 09:36:54.040576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.289 [2024-11-17 09:36:54.040610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.289 qpair failed and we were unable to recover it. 00:36:49.289 [2024-11-17 09:36:54.040761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.289 [2024-11-17 09:36:54.040798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.289 qpair failed and we were unable to recover it. 00:36:49.289 [2024-11-17 09:36:54.041044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.289 [2024-11-17 09:36:54.041080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.289 qpair failed and we were unable to recover it. 00:36:49.289 [2024-11-17 09:36:54.041226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.289 [2024-11-17 09:36:54.041265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.289 qpair failed and we were unable to recover it. 
00:36:49.289 [2024-11-17 09:36:54.041414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.289 [2024-11-17 09:36:54.041448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.289 qpair failed and we were unable to recover it. 00:36:49.289 [2024-11-17 09:36:54.041592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.289 [2024-11-17 09:36:54.041640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.289 qpair failed and we were unable to recover it. 00:36:49.289 [2024-11-17 09:36:54.041787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.289 [2024-11-17 09:36:54.041823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.289 qpair failed and we were unable to recover it. 00:36:49.289 [2024-11-17 09:36:54.042011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.289 [2024-11-17 09:36:54.042049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.289 qpair failed and we were unable to recover it. 00:36:49.289 [2024-11-17 09:36:54.042198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.289 [2024-11-17 09:36:54.042235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.289 qpair failed and we were unable to recover it. 00:36:49.289 [2024-11-17 09:36:54.042389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.289 [2024-11-17 09:36:54.042446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.289 qpair failed and we were unable to recover it. 00:36:49.289 [2024-11-17 09:36:54.042590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.289 [2024-11-17 09:36:54.042623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.289 qpair failed and we were unable to recover it. 00:36:49.289 [2024-11-17 09:36:54.042751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.289 [2024-11-17 09:36:54.042784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.289 qpair failed and we were unable to recover it. 00:36:49.289 [2024-11-17 09:36:54.042921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.289 [2024-11-17 09:36:54.042954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.289 qpair failed and we were unable to recover it. 00:36:49.289 [2024-11-17 09:36:54.043139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.289 [2024-11-17 09:36:54.043175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.289 qpair failed and we were unable to recover it. 
00:36:49.289 [2024-11-17 09:36:54.043330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.289 [2024-11-17 09:36:54.043379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.289 qpair failed and we were unable to recover it. 00:36:49.289 [2024-11-17 09:36:54.043536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.289 [2024-11-17 09:36:54.043569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.289 qpair failed and we were unable to recover it. 00:36:49.289 [2024-11-17 09:36:54.043728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.289 [2024-11-17 09:36:54.043762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.289 qpair failed and we were unable to recover it. 00:36:49.289 [2024-11-17 09:36:54.043872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.289 [2024-11-17 09:36:54.043925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.289 qpair failed and we were unable to recover it. 00:36:49.289 [2024-11-17 09:36:54.044125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.289 [2024-11-17 09:36:54.044187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.289 qpair failed and we were unable to recover it. 00:36:49.289 [2024-11-17 09:36:54.044384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.289 [2024-11-17 09:36:54.044417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.289 qpair failed and we were unable to recover it. 00:36:49.289 [2024-11-17 09:36:54.044576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.289 [2024-11-17 09:36:54.044610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.289 qpair failed and we were unable to recover it. 00:36:49.289 [2024-11-17 09:36:54.044712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.289 [2024-11-17 09:36:54.044745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.289 qpair failed and we were unable to recover it. 00:36:49.289 [2024-11-17 09:36:54.044919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.289 [2024-11-17 09:36:54.044976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.289 qpair failed and we were unable to recover it. 00:36:49.289 [2024-11-17 09:36:54.045119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.289 [2024-11-17 09:36:54.045156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.289 qpair failed and we were unable to recover it. 
00:36:49.289 [2024-11-17 09:36:54.045285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.289 [2024-11-17 09:36:54.045340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.289 qpair failed and we were unable to recover it. 00:36:49.289 [2024-11-17 09:36:54.045519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.289 [2024-11-17 09:36:54.045555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.290 qpair failed and we were unable to recover it. 00:36:49.290 [2024-11-17 09:36:54.045683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.290 [2024-11-17 09:36:54.045731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.290 qpair failed and we were unable to recover it. 00:36:49.290 [2024-11-17 09:36:54.045851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.290 [2024-11-17 09:36:54.045886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.290 qpair failed and we were unable to recover it. 00:36:49.290 [2024-11-17 09:36:54.046091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.290 [2024-11-17 09:36:54.046153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.290 qpair failed and we were unable to recover it. 00:36:49.290 [2024-11-17 09:36:54.046301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.290 [2024-11-17 09:36:54.046337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.290 qpair failed and we were unable to recover it. 00:36:49.290 [2024-11-17 09:36:54.046508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.290 [2024-11-17 09:36:54.046541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.290 qpair failed and we were unable to recover it. 00:36:49.290 [2024-11-17 09:36:54.046698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.290 [2024-11-17 09:36:54.046731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.290 qpair failed and we were unable to recover it. 00:36:49.290 [2024-11-17 09:36:54.046881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.290 [2024-11-17 09:36:54.046915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.290 qpair failed and we were unable to recover it. 00:36:49.290 [2024-11-17 09:36:54.047047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.290 [2024-11-17 09:36:54.047079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.290 qpair failed and we were unable to recover it. 
00:36:49.290 [2024-11-17 09:36:54.047263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.290 [2024-11-17 09:36:54.047300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.290 qpair failed and we were unable to recover it. 00:36:49.290 [2024-11-17 09:36:54.047435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.290 [2024-11-17 09:36:54.047470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.290 qpair failed and we were unable to recover it. 00:36:49.290 [2024-11-17 09:36:54.047623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.290 [2024-11-17 09:36:54.047671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.290 qpair failed and we were unable to recover it. 00:36:49.290 [2024-11-17 09:36:54.047832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.290 [2024-11-17 09:36:54.047871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.290 qpair failed and we were unable to recover it. 00:36:49.290 [2024-11-17 09:36:54.048050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.290 [2024-11-17 09:36:54.048088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.290 qpair failed and we were unable to recover it. 00:36:49.290 [2024-11-17 09:36:54.048235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.290 [2024-11-17 09:36:54.048272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.290 qpair failed and we were unable to recover it. 00:36:49.290 [2024-11-17 09:36:54.048432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.290 [2024-11-17 09:36:54.048467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.290 qpair failed and we were unable to recover it. 00:36:49.290 [2024-11-17 09:36:54.048642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.290 [2024-11-17 09:36:54.048692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.290 qpair failed and we were unable to recover it. 00:36:49.290 [2024-11-17 09:36:54.048807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.290 [2024-11-17 09:36:54.048842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.290 qpair failed and we were unable to recover it. 00:36:49.290 [2024-11-17 09:36:54.049001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.290 [2024-11-17 09:36:54.049038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.290 qpair failed and we were unable to recover it. 
00:36:49.290 [2024-11-17 09:36:54.049151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.290 [2024-11-17 09:36:54.049188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.290 qpair failed and we were unable to recover it. 00:36:49.290 [2024-11-17 09:36:54.049325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.290 [2024-11-17 09:36:54.049362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.290 qpair failed and we were unable to recover it. 00:36:49.290 [2024-11-17 09:36:54.049494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.290 [2024-11-17 09:36:54.049528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.290 qpair failed and we were unable to recover it. 00:36:49.290 [2024-11-17 09:36:54.049663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.290 [2024-11-17 09:36:54.049697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.290 qpair failed and we were unable to recover it. 00:36:49.290 [2024-11-17 09:36:54.049823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.290 [2024-11-17 09:36:54.049856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.290 qpair failed and we were unable to recover it. 00:36:49.290 [2024-11-17 09:36:54.050035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.290 [2024-11-17 09:36:54.050076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.290 qpair failed and we were unable to recover it. 00:36:49.290 [2024-11-17 09:36:54.050225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.290 [2024-11-17 09:36:54.050264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.290 qpair failed and we were unable to recover it. 00:36:49.290 [2024-11-17 09:36:54.050419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.290 [2024-11-17 09:36:54.050454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.290 qpair failed and we were unable to recover it. 00:36:49.290 [2024-11-17 09:36:54.050591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.290 [2024-11-17 09:36:54.050625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.290 qpair failed and we were unable to recover it. 00:36:49.290 [2024-11-17 09:36:54.050794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.290 [2024-11-17 09:36:54.050827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.290 qpair failed and we were unable to recover it. 
00:36:49.291 [2024-11-17 09:36:54.050939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.291 [2024-11-17 09:36:54.050973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.291 qpair failed and we were unable to recover it. 00:36:49.291 [2024-11-17 09:36:54.051077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.291 [2024-11-17 09:36:54.051111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.291 qpair failed and we were unable to recover it. 00:36:49.291 [2024-11-17 09:36:54.051246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.291 [2024-11-17 09:36:54.051281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.291 qpair failed and we were unable to recover it. 00:36:49.291 [2024-11-17 09:36:54.051455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.291 [2024-11-17 09:36:54.051489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.291 qpair failed and we were unable to recover it. 00:36:49.291 [2024-11-17 09:36:54.051617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.291 [2024-11-17 09:36:54.051649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.291 qpair failed and we were unable to recover it. 00:36:49.291 [2024-11-17 09:36:54.051808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.291 [2024-11-17 09:36:54.051842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.291 qpair failed and we were unable to recover it. 00:36:49.291 [2024-11-17 09:36:54.052002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.291 [2024-11-17 09:36:54.052035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.291 qpair failed and we were unable to recover it. 00:36:49.291 [2024-11-17 09:36:54.052192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.291 [2024-11-17 09:36:54.052224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.291 qpair failed and we were unable to recover it. 00:36:49.291 [2024-11-17 09:36:54.052358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.291 [2024-11-17 09:36:54.052399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.291 qpair failed and we were unable to recover it. 00:36:49.291 [2024-11-17 09:36:54.052543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.291 [2024-11-17 09:36:54.052577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.291 qpair failed and we were unable to recover it. 
00:36:49.291 [2024-11-17 09:36:54.052734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.291 [2024-11-17 09:36:54.052767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.291 qpair failed and we were unable to recover it. 00:36:49.291 [2024-11-17 09:36:54.052923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.291 [2024-11-17 09:36:54.052956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.291 qpair failed and we were unable to recover it. 00:36:49.291 [2024-11-17 09:36:54.053153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.291 [2024-11-17 09:36:54.053188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.291 qpair failed and we were unable to recover it. 00:36:49.291 [2024-11-17 09:36:54.053297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.291 [2024-11-17 09:36:54.053341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.291 qpair failed and we were unable to recover it. 00:36:49.291 [2024-11-17 09:36:54.053492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.291 [2024-11-17 09:36:54.053526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.291 qpair failed and we were unable to recover it. 00:36:49.291 [2024-11-17 09:36:54.053658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.291 [2024-11-17 09:36:54.053691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.291 qpair failed and we were unable to recover it. 00:36:49.291 [2024-11-17 09:36:54.053827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.291 [2024-11-17 09:36:54.053859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.291 qpair failed and we were unable to recover it. 00:36:49.291 [2024-11-17 09:36:54.053970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.291 [2024-11-17 09:36:54.054003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.291 qpair failed and we were unable to recover it. 00:36:49.291 [2024-11-17 09:36:54.054161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.291 [2024-11-17 09:36:54.054208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.291 qpair failed and we were unable to recover it. 00:36:49.291 [2024-11-17 09:36:54.054363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.291 [2024-11-17 09:36:54.054411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.291 qpair failed and we were unable to recover it. 
00:36:49.291 [2024-11-17 09:36:54.054576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.291 [2024-11-17 09:36:54.054611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.291 qpair failed and we were unable to recover it. 00:36:49.291 [2024-11-17 09:36:54.054740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.291 [2024-11-17 09:36:54.054774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.291 qpair failed and we were unable to recover it. 00:36:49.291 [2024-11-17 09:36:54.054932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.291 [2024-11-17 09:36:54.054977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.291 qpair failed and we were unable to recover it. 00:36:49.291 [2024-11-17 09:36:54.055157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.291 [2024-11-17 09:36:54.055197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.291 qpair failed and we were unable to recover it. 00:36:49.291 [2024-11-17 09:36:54.055381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.291 [2024-11-17 09:36:54.055449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.291 qpair failed and we were unable to recover it. 00:36:49.291 [2024-11-17 09:36:54.055620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.291 [2024-11-17 09:36:54.055656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.291 qpair failed and we were unable to recover it. 00:36:49.291 [2024-11-17 09:36:54.055818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.291 [2024-11-17 09:36:54.055852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.291 qpair failed and we were unable to recover it. 00:36:49.291 [2024-11-17 09:36:54.055981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.291 [2024-11-17 09:36:54.056014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.291 qpair failed and we were unable to recover it. 00:36:49.291 [2024-11-17 09:36:54.056120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.291 [2024-11-17 09:36:54.056156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.291 qpair failed and we were unable to recover it. 00:36:49.291 [2024-11-17 09:36:54.056363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.291 [2024-11-17 09:36:54.056404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.291 qpair failed and we were unable to recover it. 
00:36:49.291 [2024-11-17 09:36:54.056553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.291 [2024-11-17 09:36:54.056601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.291 qpair failed and we were unable to recover it. 00:36:49.291 [2024-11-17 09:36:54.056719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.291 [2024-11-17 09:36:54.056755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.292 qpair failed and we were unable to recover it. 00:36:49.292 [2024-11-17 09:36:54.056962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.292 [2024-11-17 09:36:54.057024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.292 qpair failed and we were unable to recover it. 00:36:49.292 [2024-11-17 09:36:54.057157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.292 [2024-11-17 09:36:54.057237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.292 qpair failed and we were unable to recover it. 00:36:49.292 [2024-11-17 09:36:54.057436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.292 [2024-11-17 09:36:54.057471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.292 qpair failed and we were unable to recover it. 00:36:49.292 [2024-11-17 09:36:54.057577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.292 [2024-11-17 09:36:54.057616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.292 qpair failed and we were unable to recover it. 00:36:49.292 [2024-11-17 09:36:54.057732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.292 [2024-11-17 09:36:54.057765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.292 qpair failed and we were unable to recover it. 00:36:49.292 [2024-11-17 09:36:54.057898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.292 [2024-11-17 09:36:54.057930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.292 qpair failed and we were unable to recover it. 00:36:49.292 [2024-11-17 09:36:54.058181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.292 [2024-11-17 09:36:54.058239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.292 qpair failed and we were unable to recover it. 00:36:49.292 [2024-11-17 09:36:54.058377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.292 [2024-11-17 09:36:54.058412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.292 qpair failed and we were unable to recover it. 
00:36:49.292 [2024-11-17 09:36:54.058578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.292 [2024-11-17 09:36:54.058613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.292 qpair failed and we were unable to recover it. 00:36:49.292 [2024-11-17 09:36:54.058769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.292 [2024-11-17 09:36:54.058807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.292 qpair failed and we were unable to recover it. 00:36:49.292 [2024-11-17 09:36:54.058948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.292 [2024-11-17 09:36:54.058986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.292 qpair failed and we were unable to recover it. 00:36:49.292 [2024-11-17 09:36:54.059130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.292 [2024-11-17 09:36:54.059168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.292 qpair failed and we were unable to recover it. 00:36:49.292 [2024-11-17 09:36:54.059334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.292 [2024-11-17 09:36:54.059382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.292 qpair failed and we were unable to recover it. 00:36:49.292 [2024-11-17 09:36:54.059558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.292 [2024-11-17 09:36:54.059607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.292 qpair failed and we were unable to recover it. 00:36:49.292 [2024-11-17 09:36:54.059781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.292 [2024-11-17 09:36:54.059817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.292 qpair failed and we were unable to recover it. 00:36:49.292 [2024-11-17 09:36:54.059956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.292 [2024-11-17 09:36:54.059994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.292 qpair failed and we were unable to recover it. 00:36:49.292 [2024-11-17 09:36:54.060140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.292 [2024-11-17 09:36:54.060178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.292 qpair failed and we were unable to recover it. 00:36:49.292 [2024-11-17 09:36:54.060387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.292 [2024-11-17 09:36:54.060453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.292 qpair failed and we were unable to recover it. 
00:36:49.292 [2024-11-17 09:36:54.060596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.292 [2024-11-17 09:36:54.060632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.292 qpair failed and we were unable to recover it. 00:36:49.292 [2024-11-17 09:36:54.060771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.292 [2024-11-17 09:36:54.060805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.292 qpair failed and we were unable to recover it. 00:36:49.292 [2024-11-17 09:36:54.060972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.292 [2024-11-17 09:36:54.061005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.292 qpair failed and we were unable to recover it. 00:36:49.292 [2024-11-17 09:36:54.061256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.292 [2024-11-17 09:36:54.061310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.292 qpair failed and we were unable to recover it. 00:36:49.292 [2024-11-17 09:36:54.061499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.292 [2024-11-17 09:36:54.061533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.292 qpair failed and we were unable to recover it. 00:36:49.292 [2024-11-17 09:36:54.061686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.292 [2024-11-17 09:36:54.061735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.292 qpair failed and we were unable to recover it. 00:36:49.292 [2024-11-17 09:36:54.062014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.292 [2024-11-17 09:36:54.062074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.292 qpair failed and we were unable to recover it. 00:36:49.292 [2024-11-17 09:36:54.062210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.292 [2024-11-17 09:36:54.062249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.292 qpair failed and we were unable to recover it. 00:36:49.292 [2024-11-17 09:36:54.062392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.292 [2024-11-17 09:36:54.062443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.292 qpair failed and we were unable to recover it. 00:36:49.292 [2024-11-17 09:36:54.062580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.292 [2024-11-17 09:36:54.062614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.292 qpair failed and we were unable to recover it. 
00:36:49.292 [2024-11-17 09:36:54.062719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.292 [2024-11-17 09:36:54.062753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.292 qpair failed and we were unable to recover it. 00:36:49.292 [2024-11-17 09:36:54.062885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.292 [2024-11-17 09:36:54.062920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.292 qpair failed and we were unable to recover it. 00:36:49.292 [2024-11-17 09:36:54.063036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.292 [2024-11-17 09:36:54.063080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.292 qpair failed and we were unable to recover it. 00:36:49.292 [2024-11-17 09:36:54.063269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.292 [2024-11-17 09:36:54.063308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.292 qpair failed and we were unable to recover it. 00:36:49.292 [2024-11-17 09:36:54.063447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.293 [2024-11-17 09:36:54.063482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.293 qpair failed and we were unable to recover it. 00:36:49.293 [2024-11-17 09:36:54.063658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.293 [2024-11-17 09:36:54.063695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.293 qpair failed and we were unable to recover it. 00:36:49.293 [2024-11-17 09:36:54.063838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.293 [2024-11-17 09:36:54.063889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.293 qpair failed and we were unable to recover it. 00:36:49.293 [2024-11-17 09:36:54.064076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.293 [2024-11-17 09:36:54.064113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.293 qpair failed and we were unable to recover it. 00:36:49.293 [2024-11-17 09:36:54.064240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.293 [2024-11-17 09:36:54.064279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.293 qpair failed and we were unable to recover it. 00:36:49.293 [2024-11-17 09:36:54.064390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.293 [2024-11-17 09:36:54.064440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.293 qpair failed and we were unable to recover it. 
00:36:49.293 [2024-11-17 09:36:54.064622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.293 [2024-11-17 09:36:54.064661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.293 qpair failed and we were unable to recover it. 00:36:49.293 [2024-11-17 09:36:54.064898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.293 [2024-11-17 09:36:54.064937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.293 qpair failed and we were unable to recover it. 00:36:49.293 [2024-11-17 09:36:54.065084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.293 [2024-11-17 09:36:54.065122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.293 qpair failed and we were unable to recover it. 00:36:49.293 [2024-11-17 09:36:54.065278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.293 [2024-11-17 09:36:54.065312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.293 qpair failed and we were unable to recover it. 00:36:49.293 [2024-11-17 09:36:54.065444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.293 [2024-11-17 09:36:54.065478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.293 qpair failed and we were unable to recover it. 00:36:49.293 [2024-11-17 09:36:54.065636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.293 [2024-11-17 09:36:54.065684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.293 qpair failed and we were unable to recover it. 00:36:49.293 [2024-11-17 09:36:54.065881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.293 [2024-11-17 09:36:54.065920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.293 qpair failed and we were unable to recover it. 00:36:49.293 [2024-11-17 09:36:54.066065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.293 [2024-11-17 09:36:54.066101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.293 qpair failed and we were unable to recover it. 00:36:49.293 [2024-11-17 09:36:54.066276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.293 [2024-11-17 09:36:54.066312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.293 qpair failed and we were unable to recover it. 00:36:49.293 [2024-11-17 09:36:54.066480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.293 [2024-11-17 09:36:54.066513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.293 qpair failed and we were unable to recover it. 
00:36:49.293 [2024-11-17 09:36:54.066635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.293 [2024-11-17 09:36:54.066704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.293 qpair failed and we were unable to recover it. 00:36:49.293 [2024-11-17 09:36:54.066869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.293 [2024-11-17 09:36:54.066909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.293 qpair failed and we were unable to recover it. 00:36:49.293 [2024-11-17 09:36:54.067059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.293 [2024-11-17 09:36:54.067098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.293 qpair failed and we were unable to recover it. 00:36:49.293 [2024-11-17 09:36:54.067237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.293 [2024-11-17 09:36:54.067275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.293 qpair failed and we were unable to recover it. 00:36:49.293 [2024-11-17 09:36:54.067435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.293 [2024-11-17 09:36:54.067470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.293 qpair failed and we were unable to recover it. 00:36:49.293 [2024-11-17 09:36:54.067623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.293 [2024-11-17 09:36:54.067662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.293 qpair failed and we were unable to recover it. 00:36:49.293 [2024-11-17 09:36:54.067801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.293 [2024-11-17 09:36:54.067839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.293 qpair failed and we were unable to recover it. 00:36:49.293 [2024-11-17 09:36:54.067970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.293 [2024-11-17 09:36:54.068003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.293 qpair failed and we were unable to recover it. 00:36:49.293 [2024-11-17 09:36:54.068164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.293 [2024-11-17 09:36:54.068201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.293 qpair failed and we were unable to recover it. 00:36:49.293 [2024-11-17 09:36:54.068401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.293 [2024-11-17 09:36:54.068435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.293 qpair failed and we were unable to recover it. 
00:36:49.293 [2024-11-17 09:36:54.068582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.293 [2024-11-17 09:36:54.068615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.293 qpair failed and we were unable to recover it. 00:36:49.293 [2024-11-17 09:36:54.068766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.293 [2024-11-17 09:36:54.068802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.293 qpair failed and we were unable to recover it. 00:36:49.293 [2024-11-17 09:36:54.068916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.293 [2024-11-17 09:36:54.068953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.293 qpair failed and we were unable to recover it. 00:36:49.293 [2024-11-17 09:36:54.069085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.293 [2024-11-17 09:36:54.069135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.293 qpair failed and we were unable to recover it. 00:36:49.293 [2024-11-17 09:36:54.069283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.293 [2024-11-17 09:36:54.069334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.293 qpair failed and we were unable to recover it. 00:36:49.293 [2024-11-17 09:36:54.069517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.293 [2024-11-17 09:36:54.069567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.293 qpair failed and we were unable to recover it. 00:36:49.293 [2024-11-17 09:36:54.069709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.293 [2024-11-17 09:36:54.069757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.293 qpair failed and we were unable to recover it. 00:36:49.293 [2024-11-17 09:36:54.069948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.293 [2024-11-17 09:36:54.069987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.293 qpair failed and we were unable to recover it. 00:36:49.293 [2024-11-17 09:36:54.070110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.293 [2024-11-17 09:36:54.070148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.293 qpair failed and we were unable to recover it. 00:36:49.293 [2024-11-17 09:36:54.070316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.293 [2024-11-17 09:36:54.070353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.293 qpair failed and we were unable to recover it. 
00:36:49.293 [2024-11-17 09:36:54.070521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.293 [2024-11-17 09:36:54.070554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.294 qpair failed and we were unable to recover it. 00:36:49.294 [2024-11-17 09:36:54.070663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.294 [2024-11-17 09:36:54.070696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.294 qpair failed and we were unable to recover it. 00:36:49.294 [2024-11-17 09:36:54.070847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.294 [2024-11-17 09:36:54.070885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.294 qpair failed and we were unable to recover it. 00:36:49.294 [2024-11-17 09:36:54.071026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.294 [2024-11-17 09:36:54.071059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.294 qpair failed and we were unable to recover it. 00:36:49.294 [2024-11-17 09:36:54.071256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.294 [2024-11-17 09:36:54.071295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.294 qpair failed and we were unable to recover it. 00:36:49.294 [2024-11-17 09:36:54.071426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.294 [2024-11-17 09:36:54.071460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.294 qpair failed and we were unable to recover it. 00:36:49.294 [2024-11-17 09:36:54.071563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.294 [2024-11-17 09:36:54.071615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.294 qpair failed and we were unable to recover it. 00:36:49.294 [2024-11-17 09:36:54.071737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.294 [2024-11-17 09:36:54.071770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.294 qpair failed and we were unable to recover it. 00:36:49.294 [2024-11-17 09:36:54.071987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.294 [2024-11-17 09:36:54.072045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.294 qpair failed and we were unable to recover it. 00:36:49.294 [2024-11-17 09:36:54.072192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.294 [2024-11-17 09:36:54.072230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.294 qpair failed and we were unable to recover it. 
00:36:49.294 [2024-11-17 09:36:54.072423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.294 [2024-11-17 09:36:54.072459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.294 qpair failed and we were unable to recover it. 00:36:49.294 [2024-11-17 09:36:54.072569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.294 [2024-11-17 09:36:54.072602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.294 qpair failed and we were unable to recover it. 00:36:49.294 [2024-11-17 09:36:54.072753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.294 [2024-11-17 09:36:54.072790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.294 qpair failed and we were unable to recover it. 00:36:49.294 [2024-11-17 09:36:54.072982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.294 [2024-11-17 09:36:54.073037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.294 qpair failed and we were unable to recover it. 00:36:49.294 [2024-11-17 09:36:54.073181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.294 [2024-11-17 09:36:54.073218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.294 qpair failed and we were unable to recover it. 00:36:49.294 [2024-11-17 09:36:54.073342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.294 [2024-11-17 09:36:54.073382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.294 qpair failed and we were unable to recover it. 00:36:49.294 [2024-11-17 09:36:54.073562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.294 [2024-11-17 09:36:54.073597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.294 qpair failed and we were unable to recover it. 00:36:49.294 [2024-11-17 09:36:54.073732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.294 [2024-11-17 09:36:54.073766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.294 qpair failed and we were unable to recover it. 00:36:49.294 [2024-11-17 09:36:54.073946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.294 [2024-11-17 09:36:54.073983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.294 qpair failed and we were unable to recover it. 00:36:49.294 [2024-11-17 09:36:54.074111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.294 [2024-11-17 09:36:54.074150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.294 qpair failed and we were unable to recover it. 
00:36:49.294 [2024-11-17 09:36:54.074343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.294 [2024-11-17 09:36:54.074428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.294 qpair failed and we were unable to recover it. 00:36:49.294 [2024-11-17 09:36:54.074578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.294 [2024-11-17 09:36:54.074614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.294 qpair failed and we were unable to recover it. 00:36:49.294 [2024-11-17 09:36:54.074792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.294 [2024-11-17 09:36:54.074827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.294 qpair failed and we were unable to recover it. 00:36:49.294 [2024-11-17 09:36:54.074986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.294 [2024-11-17 09:36:54.075020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.294 qpair failed and we were unable to recover it. 00:36:49.294 [2024-11-17 09:36:54.075119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.294 [2024-11-17 09:36:54.075172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.294 qpair failed and we were unable to recover it. 00:36:49.294 [2024-11-17 09:36:54.075338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.294 [2024-11-17 09:36:54.075405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.294 qpair failed and we were unable to recover it. 00:36:49.294 [2024-11-17 09:36:54.075614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.294 [2024-11-17 09:36:54.075662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.294 qpair failed and we were unable to recover it. 00:36:49.294 [2024-11-17 09:36:54.075830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.294 [2024-11-17 09:36:54.075865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.294 qpair failed and we were unable to recover it. 00:36:49.294 [2024-11-17 09:36:54.076028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.294 [2024-11-17 09:36:54.076062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.294 qpair failed and we were unable to recover it. 00:36:49.294 [2024-11-17 09:36:54.076210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.294 [2024-11-17 09:36:54.076243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.294 qpair failed and we were unable to recover it. 
00:36:49.294 [2024-11-17 09:36:54.076382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.294 [2024-11-17 09:36:54.076416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.294 qpair failed and we were unable to recover it. 00:36:49.294 [2024-11-17 09:36:54.076569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.294 [2024-11-17 09:36:54.076618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.294 qpair failed and we were unable to recover it. 00:36:49.295 [2024-11-17 09:36:54.076772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.295 [2024-11-17 09:36:54.076809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.295 qpair failed and we were unable to recover it. 00:36:49.295 [2024-11-17 09:36:54.076973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.295 [2024-11-17 09:36:54.077008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.295 qpair failed and we were unable to recover it. 00:36:49.295 [2024-11-17 09:36:54.077145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.295 [2024-11-17 09:36:54.077179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.295 qpair failed and we were unable to recover it. 00:36:49.295 [2024-11-17 09:36:54.077317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.295 [2024-11-17 09:36:54.077352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.295 qpair failed and we were unable to recover it. 00:36:49.295 [2024-11-17 09:36:54.077528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.295 [2024-11-17 09:36:54.077562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.295 qpair failed and we were unable to recover it. 00:36:49.295 [2024-11-17 09:36:54.077700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.295 [2024-11-17 09:36:54.077734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.295 qpair failed and we were unable to recover it. 00:36:49.295 [2024-11-17 09:36:54.077867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.295 [2024-11-17 09:36:54.077900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.295 qpair failed and we were unable to recover it. 00:36:49.295 [2024-11-17 09:36:54.078035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.295 [2024-11-17 09:36:54.078068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.295 qpair failed and we were unable to recover it. 
00:36:49.295 [2024-11-17 09:36:54.078208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.295 [2024-11-17 09:36:54.078242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.295 qpair failed and we were unable to recover it. 00:36:49.295 [2024-11-17 09:36:54.078344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.295 [2024-11-17 09:36:54.078383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.295 qpair failed and we were unable to recover it. 00:36:49.295 [2024-11-17 09:36:54.078493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.295 [2024-11-17 09:36:54.078530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.295 qpair failed and we were unable to recover it. 00:36:49.295 [2024-11-17 09:36:54.078663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.295 [2024-11-17 09:36:54.078698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.295 qpair failed and we were unable to recover it. 00:36:49.295 [2024-11-17 09:36:54.078812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.295 [2024-11-17 09:36:54.078848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.295 qpair failed and we were unable to recover it. 00:36:49.295 [2024-11-17 09:36:54.078981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.295 [2024-11-17 09:36:54.079015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.295 qpair failed and we were unable to recover it. 00:36:49.295 [2024-11-17 09:36:54.079190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.295 [2024-11-17 09:36:54.079225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.295 qpair failed and we were unable to recover it. 00:36:49.295 [2024-11-17 09:36:54.079364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.295 [2024-11-17 09:36:54.079418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.295 qpair failed and we were unable to recover it. 00:36:49.295 [2024-11-17 09:36:54.079563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.295 [2024-11-17 09:36:54.079597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.295 qpair failed and we were unable to recover it. 00:36:49.295 [2024-11-17 09:36:54.079711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.295 [2024-11-17 09:36:54.079746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.295 qpair failed and we were unable to recover it. 
00:36:49.295 [2024-11-17 09:36:54.079840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.295 [2024-11-17 09:36:54.079872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.295 qpair failed and we were unable to recover it. 00:36:49.295 [2024-11-17 09:36:54.080009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.295 [2024-11-17 09:36:54.080041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.295 qpair failed and we were unable to recover it. 00:36:49.295 [2024-11-17 09:36:54.080160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.295 [2024-11-17 09:36:54.080214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.295 qpair failed and we were unable to recover it. 00:36:49.295 [2024-11-17 09:36:54.080409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.295 [2024-11-17 09:36:54.080442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.295 qpair failed and we were unable to recover it. 00:36:49.295 [2024-11-17 09:36:54.080574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.295 [2024-11-17 09:36:54.080606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.295 qpair failed and we were unable to recover it. 00:36:49.295 [2024-11-17 09:36:54.080739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.295 [2024-11-17 09:36:54.080773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.295 qpair failed and we were unable to recover it. 00:36:49.295 [2024-11-17 09:36:54.080892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.295 [2024-11-17 09:36:54.080925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.295 qpair failed and we were unable to recover it. 00:36:49.295 [2024-11-17 09:36:54.081054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.295 [2024-11-17 09:36:54.081087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.295 qpair failed and we were unable to recover it. 00:36:49.295 [2024-11-17 09:36:54.081227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.295 [2024-11-17 09:36:54.081261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.295 qpair failed and we were unable to recover it. 00:36:49.295 [2024-11-17 09:36:54.081424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.295 [2024-11-17 09:36:54.081458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.295 qpair failed and we were unable to recover it. 
00:36:49.295 [2024-11-17 09:36:54.081564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.295 [2024-11-17 09:36:54.081598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.295 qpair failed and we were unable to recover it. 00:36:49.295 [2024-11-17 09:36:54.081736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.296 [2024-11-17 09:36:54.081769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.296 qpair failed and we were unable to recover it. 00:36:49.296 [2024-11-17 09:36:54.081895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.296 [2024-11-17 09:36:54.081928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.296 qpair failed and we were unable to recover it. 00:36:49.296 [2024-11-17 09:36:54.082061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.296 [2024-11-17 09:36:54.082094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.296 qpair failed and we were unable to recover it. 00:36:49.296 [2024-11-17 09:36:54.082256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.296 [2024-11-17 09:36:54.082289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.296 qpair failed and we were unable to recover it. 00:36:49.296 [2024-11-17 09:36:54.082421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.296 [2024-11-17 09:36:54.082455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.296 qpair failed and we were unable to recover it. 00:36:49.296 [2024-11-17 09:36:54.082562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.296 [2024-11-17 09:36:54.082596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.296 qpair failed and we were unable to recover it. 00:36:49.296 [2024-11-17 09:36:54.082773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.296 [2024-11-17 09:36:54.082822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.296 qpair failed and we were unable to recover it. 00:36:49.296 [2024-11-17 09:36:54.082978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.296 [2024-11-17 09:36:54.083015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.296 qpair failed and we were unable to recover it. 00:36:49.296 [2024-11-17 09:36:54.083187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.296 [2024-11-17 09:36:54.083222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.296 qpair failed and we were unable to recover it. 
00:36:49.296 [2024-11-17 09:36:54.083332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.296 [2024-11-17 09:36:54.083375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.296 qpair failed and we were unable to recover it. 00:36:49.296 [2024-11-17 09:36:54.083540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.296 [2024-11-17 09:36:54.083574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.296 qpair failed and we were unable to recover it. 00:36:49.296 [2024-11-17 09:36:54.083718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.296 [2024-11-17 09:36:54.083754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.296 qpair failed and we were unable to recover it. 00:36:49.296 [2024-11-17 09:36:54.083887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.296 [2024-11-17 09:36:54.083921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.296 qpair failed and we were unable to recover it. 00:36:49.296 [2024-11-17 09:36:54.084033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.296 [2024-11-17 09:36:54.084067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.296 qpair failed and we were unable to recover it. 00:36:49.296 [2024-11-17 09:36:54.084231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.296 [2024-11-17 09:36:54.084264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.296 qpair failed and we were unable to recover it. 00:36:49.296 [2024-11-17 09:36:54.084402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.296 [2024-11-17 09:36:54.084436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.296 qpair failed and we were unable to recover it. 00:36:49.296 [2024-11-17 09:36:54.084549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.296 [2024-11-17 09:36:54.084583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.296 qpair failed and we were unable to recover it. 00:36:49.296 [2024-11-17 09:36:54.084725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.296 [2024-11-17 09:36:54.084759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.296 qpair failed and we were unable to recover it. 00:36:49.296 [2024-11-17 09:36:54.084886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.296 [2024-11-17 09:36:54.084919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.296 qpair failed and we were unable to recover it. 
00:36:49.296 [2024-11-17 09:36:54.085064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.296 [2024-11-17 09:36:54.085097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.296 qpair failed and we were unable to recover it. 00:36:49.296 [2024-11-17 09:36:54.085257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.296 [2024-11-17 09:36:54.085295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.296 qpair failed and we were unable to recover it. 00:36:49.296 [2024-11-17 09:36:54.085450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.296 [2024-11-17 09:36:54.085504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.296 qpair failed and we were unable to recover it. 00:36:49.296 [2024-11-17 09:36:54.085623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.296 [2024-11-17 09:36:54.085658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.296 qpair failed and we were unable to recover it. 00:36:49.296 [2024-11-17 09:36:54.085770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.296 [2024-11-17 09:36:54.085804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.296 qpair failed and we were unable to recover it. 00:36:49.296 [2024-11-17 09:36:54.085946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.296 [2024-11-17 09:36:54.085981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.296 qpair failed and we were unable to recover it. 00:36:49.296 [2024-11-17 09:36:54.086140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.296 [2024-11-17 09:36:54.086174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.296 qpair failed and we were unable to recover it. 00:36:49.296 [2024-11-17 09:36:54.086300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.296 [2024-11-17 09:36:54.086334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.296 qpair failed and we were unable to recover it. 00:36:49.296 [2024-11-17 09:36:54.086477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.296 [2024-11-17 09:36:54.086512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.296 qpair failed and we were unable to recover it. 00:36:49.296 [2024-11-17 09:36:54.086671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.296 [2024-11-17 09:36:54.086704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.296 qpair failed and we were unable to recover it. 
00:36:49.296 [2024-11-17 09:36:54.086836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.296 [2024-11-17 09:36:54.086869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.296 qpair failed and we were unable to recover it. 00:36:49.296 [2024-11-17 09:36:54.087001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.297 [2024-11-17 09:36:54.087034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.297 qpair failed and we were unable to recover it. 00:36:49.297 [2024-11-17 09:36:54.087168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.297 [2024-11-17 09:36:54.087201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.297 qpair failed and we were unable to recover it. 00:36:49.297 [2024-11-17 09:36:54.087327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.297 [2024-11-17 09:36:54.087360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.297 qpair failed and we were unable to recover it. 00:36:49.297 [2024-11-17 09:36:54.087504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.297 [2024-11-17 09:36:54.087537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.297 qpair failed and we were unable to recover it. 00:36:49.297 [2024-11-17 09:36:54.087670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.297 [2024-11-17 09:36:54.087703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.297 qpair failed and we were unable to recover it. 00:36:49.297 [2024-11-17 09:36:54.087843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.297 [2024-11-17 09:36:54.087876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.297 qpair failed and we were unable to recover it. 00:36:49.297 [2024-11-17 09:36:54.088013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.297 [2024-11-17 09:36:54.088049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.297 qpair failed and we were unable to recover it. 00:36:49.297 [2024-11-17 09:36:54.088210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.297 [2024-11-17 09:36:54.088244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.297 qpair failed and we were unable to recover it. 00:36:49.297 [2024-11-17 09:36:54.088386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.297 [2024-11-17 09:36:54.088421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.297 qpair failed and we were unable to recover it. 
00:36:49.297 [2024-11-17 09:36:54.088564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.297 [2024-11-17 09:36:54.088598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.297 qpair failed and we were unable to recover it. 00:36:49.297 [2024-11-17 09:36:54.088758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.297 [2024-11-17 09:36:54.088792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.297 qpair failed and we were unable to recover it. 00:36:49.297 [2024-11-17 09:36:54.088952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.297 [2024-11-17 09:36:54.088985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.297 qpair failed and we were unable to recover it. 00:36:49.297 [2024-11-17 09:36:54.089148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.297 [2024-11-17 09:36:54.089182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.297 qpair failed and we were unable to recover it. 00:36:49.297 [2024-11-17 09:36:54.089346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.297 [2024-11-17 09:36:54.089388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.297 qpair failed and we were unable to recover it. 00:36:49.297 [2024-11-17 09:36:54.089497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.297 [2024-11-17 09:36:54.089530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.297 qpair failed and we were unable to recover it. 00:36:49.297 [2024-11-17 09:36:54.089667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.297 [2024-11-17 09:36:54.089702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.297 qpair failed and we were unable to recover it. 00:36:49.297 [2024-11-17 09:36:54.089851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.297 [2024-11-17 09:36:54.089884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.297 qpair failed and we were unable to recover it. 00:36:49.297 [2024-11-17 09:36:54.089997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.297 [2024-11-17 09:36:54.090031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.297 qpair failed and we were unable to recover it. 00:36:49.297 [2024-11-17 09:36:54.090199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.297 [2024-11-17 09:36:54.090234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.297 qpair failed and we were unable to recover it. 
00:36:49.297 [2024-11-17 09:36:54.090375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.297 [2024-11-17 09:36:54.090408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.297 qpair failed and we were unable to recover it. 00:36:49.297 [2024-11-17 09:36:54.090540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.297 [2024-11-17 09:36:54.090574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.297 qpair failed and we were unable to recover it. 00:36:49.297 [2024-11-17 09:36:54.090739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.297 [2024-11-17 09:36:54.090773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.297 qpair failed and we were unable to recover it. 00:36:49.297 [2024-11-17 09:36:54.090898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.297 [2024-11-17 09:36:54.090931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.297 qpair failed and we were unable to recover it. 00:36:49.297 [2024-11-17 09:36:54.091031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.297 [2024-11-17 09:36:54.091065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.297 qpair failed and we were unable to recover it. 00:36:49.297 [2024-11-17 09:36:54.091201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.297 [2024-11-17 09:36:54.091236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.297 qpair failed and we were unable to recover it. 00:36:49.297 [2024-11-17 09:36:54.091397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.297 [2024-11-17 09:36:54.091431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.297 qpair failed and we were unable to recover it. 00:36:49.297 [2024-11-17 09:36:54.091568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.297 [2024-11-17 09:36:54.091602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.297 qpair failed and we were unable to recover it. 00:36:49.297 [2024-11-17 09:36:54.091759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.297 [2024-11-17 09:36:54.091794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.297 qpair failed and we were unable to recover it. 00:36:49.297 [2024-11-17 09:36:54.091929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.297 [2024-11-17 09:36:54.091964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.297 qpair failed and we were unable to recover it. 
00:36:49.297 [2024-11-17 09:36:54.092122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.297 [2024-11-17 09:36:54.092156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.297 qpair failed and we were unable to recover it. 00:36:49.297 [2024-11-17 09:36:54.092262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.297 [2024-11-17 09:36:54.092296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.297 qpair failed and we were unable to recover it. 00:36:49.297 [2024-11-17 09:36:54.092401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.298 [2024-11-17 09:36:54.092439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.298 qpair failed and we were unable to recover it. 00:36:49.298 [2024-11-17 09:36:54.092595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.298 [2024-11-17 09:36:54.092628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.298 qpair failed and we were unable to recover it. 00:36:49.298 [2024-11-17 09:36:54.092760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.298 [2024-11-17 09:36:54.092793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.298 qpair failed and we were unable to recover it. 00:36:49.298 [2024-11-17 09:36:54.092927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.298 [2024-11-17 09:36:54.092962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.298 qpair failed and we were unable to recover it. 00:36:49.298 [2024-11-17 09:36:54.093068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.298 [2024-11-17 09:36:54.093101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.298 qpair failed and we were unable to recover it. 00:36:49.298 [2024-11-17 09:36:54.093268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.298 [2024-11-17 09:36:54.093303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.298 qpair failed and we were unable to recover it. 00:36:49.298 [2024-11-17 09:36:54.093469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.298 [2024-11-17 09:36:54.093504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.298 qpair failed and we were unable to recover it. 00:36:49.298 [2024-11-17 09:36:54.093661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.298 [2024-11-17 09:36:54.093695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.298 qpair failed and we were unable to recover it. 
00:36:49.298 [2024-11-17 09:36:54.093851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.298 [2024-11-17 09:36:54.093885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.298 qpair failed and we were unable to recover it. 00:36:49.298 [2024-11-17 09:36:54.094024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.298 [2024-11-17 09:36:54.094059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.298 qpair failed and we were unable to recover it. 00:36:49.298 [2024-11-17 09:36:54.094199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.298 [2024-11-17 09:36:54.094234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.298 qpair failed and we were unable to recover it. 00:36:49.298 [2024-11-17 09:36:54.094400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.298 [2024-11-17 09:36:54.094435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.298 qpair failed and we were unable to recover it. 00:36:49.298 [2024-11-17 09:36:54.094577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.298 [2024-11-17 09:36:54.094610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.298 qpair failed and we were unable to recover it. 00:36:49.298 [2024-11-17 09:36:54.094772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.298 [2024-11-17 09:36:54.094805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.298 qpair failed and we were unable to recover it. 00:36:49.298 [2024-11-17 09:36:54.094921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.298 [2024-11-17 09:36:54.094954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.298 qpair failed and we were unable to recover it. 00:36:49.298 [2024-11-17 09:36:54.095085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.298 [2024-11-17 09:36:54.095118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.298 qpair failed and we were unable to recover it. 00:36:49.298 [2024-11-17 09:36:54.095247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.298 [2024-11-17 09:36:54.095280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.298 qpair failed and we were unable to recover it. 00:36:49.298 [2024-11-17 09:36:54.095440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.298 [2024-11-17 09:36:54.095475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.298 qpair failed and we were unable to recover it. 
00:36:49.298 [2024-11-17 09:36:54.095636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.298 [2024-11-17 09:36:54.095670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.298 qpair failed and we were unable to recover it. 00:36:49.298 [2024-11-17 09:36:54.095836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.298 [2024-11-17 09:36:54.095871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.298 qpair failed and we were unable to recover it. 00:36:49.298 [2024-11-17 09:36:54.096004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.298 [2024-11-17 09:36:54.096038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.298 qpair failed and we were unable to recover it. 00:36:49.298 [2024-11-17 09:36:54.096148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.298 [2024-11-17 09:36:54.096182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.298 qpair failed and we were unable to recover it. 00:36:49.298 [2024-11-17 09:36:54.096302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.298 [2024-11-17 09:36:54.096341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.298 qpair failed and we were unable to recover it. 00:36:49.298 [2024-11-17 09:36:54.096532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.298 [2024-11-17 09:36:54.096566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.298 qpair failed and we were unable to recover it. 00:36:49.298 [2024-11-17 09:36:54.096690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.298 [2024-11-17 09:36:54.096723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.298 qpair failed and we were unable to recover it. 00:36:49.298 [2024-11-17 09:36:54.096848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.298 [2024-11-17 09:36:54.096881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.298 qpair failed and we were unable to recover it. 00:36:49.298 [2024-11-17 09:36:54.097017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.298 [2024-11-17 09:36:54.097050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.298 qpair failed and we were unable to recover it. 00:36:49.298 [2024-11-17 09:36:54.097166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.298 [2024-11-17 09:36:54.097199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.298 qpair failed and we were unable to recover it. 
00:36:49.298 [2024-11-17 09:36:54.097326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.298 [2024-11-17 09:36:54.097359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.298 qpair failed and we were unable to recover it. 00:36:49.298 [2024-11-17 09:36:54.097503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.298 [2024-11-17 09:36:54.097536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.298 qpair failed and we were unable to recover it. 00:36:49.298 [2024-11-17 09:36:54.097694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.298 [2024-11-17 09:36:54.097727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.298 qpair failed and we were unable to recover it. 00:36:49.298 [2024-11-17 09:36:54.097860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.299 [2024-11-17 09:36:54.097894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.299 qpair failed and we were unable to recover it. 00:36:49.299 [2024-11-17 09:36:54.098065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.299 [2024-11-17 09:36:54.098100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.299 qpair failed and we were unable to recover it. 00:36:49.299 [2024-11-17 09:36:54.098232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.299 [2024-11-17 09:36:54.098266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.299 qpair failed and we were unable to recover it. 00:36:49.299 [2024-11-17 09:36:54.098411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.299 [2024-11-17 09:36:54.098445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.299 qpair failed and we were unable to recover it. 00:36:49.299 [2024-11-17 09:36:54.098581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.299 [2024-11-17 09:36:54.098615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.299 qpair failed and we were unable to recover it. 00:36:49.299 [2024-11-17 09:36:54.098773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.299 [2024-11-17 09:36:54.098806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.299 qpair failed and we were unable to recover it. 00:36:49.299 [2024-11-17 09:36:54.098932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.299 [2024-11-17 09:36:54.098966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.299 qpair failed and we were unable to recover it. 
00:36:49.299 [2024-11-17 09:36:54.099064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.299 [2024-11-17 09:36:54.099097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.299 qpair failed and we were unable to recover it. 00:36:49.299 [2024-11-17 09:36:54.099200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.299 [2024-11-17 09:36:54.099233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.299 qpair failed and we were unable to recover it. 00:36:49.299 [2024-11-17 09:36:54.099376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.299 [2024-11-17 09:36:54.099414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.299 qpair failed and we were unable to recover it. 00:36:49.299 [2024-11-17 09:36:54.099546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.299 [2024-11-17 09:36:54.099580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.299 qpair failed and we were unable to recover it. 00:36:49.299 [2024-11-17 09:36:54.099736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.299 [2024-11-17 09:36:54.099785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.299 qpair failed and we were unable to recover it. 00:36:49.299 [2024-11-17 09:36:54.099930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.299 [2024-11-17 09:36:54.099966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.299 qpair failed and we were unable to recover it. 00:36:49.299 [2024-11-17 09:36:54.100100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.299 [2024-11-17 09:36:54.100136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.299 qpair failed and we were unable to recover it. 00:36:49.299 [2024-11-17 09:36:54.100271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.299 [2024-11-17 09:36:54.100305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.299 qpair failed and we were unable to recover it. 00:36:49.299 [2024-11-17 09:36:54.100452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.299 [2024-11-17 09:36:54.100486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.299 qpair failed and we were unable to recover it. 00:36:49.299 [2024-11-17 09:36:54.100613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.299 [2024-11-17 09:36:54.100647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.299 qpair failed and we were unable to recover it. 
00:36:49.299 [2024-11-17 09:36:54.100793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.299 [2024-11-17 09:36:54.100828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.299 qpair failed and we were unable to recover it. 00:36:49.299 [2024-11-17 09:36:54.100953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.299 [2024-11-17 09:36:54.100986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.299 qpair failed and we were unable to recover it. 00:36:49.299 [2024-11-17 09:36:54.101088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.299 [2024-11-17 09:36:54.101121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.299 qpair failed and we were unable to recover it. 00:36:49.299 [2024-11-17 09:36:54.101257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.299 [2024-11-17 09:36:54.101290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.299 qpair failed and we were unable to recover it. 00:36:49.299 [2024-11-17 09:36:54.101387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.299 [2024-11-17 09:36:54.101421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.299 qpair failed and we were unable to recover it. 00:36:49.299 [2024-11-17 09:36:54.101551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.299 [2024-11-17 09:36:54.101585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.299 qpair failed and we were unable to recover it. 00:36:49.299 [2024-11-17 09:36:54.101699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.299 [2024-11-17 09:36:54.101733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.299 qpair failed and we were unable to recover it. 00:36:49.299 [2024-11-17 09:36:54.101895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.299 [2024-11-17 09:36:54.101928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.299 qpair failed and we were unable to recover it. 00:36:49.299 [2024-11-17 09:36:54.102075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.299 [2024-11-17 09:36:54.102108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.299 qpair failed and we were unable to recover it. 00:36:49.299 [2024-11-17 09:36:54.102248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.299 [2024-11-17 09:36:54.102284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.299 qpair failed and we were unable to recover it. 
00:36:49.299 [2024-11-17 09:36:54.102426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.299 [2024-11-17 09:36:54.102461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.299 qpair failed and we were unable to recover it. 00:36:49.299 [2024-11-17 09:36:54.102616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.299 [2024-11-17 09:36:54.102664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.299 qpair failed and we were unable to recover it. 00:36:49.299 [2024-11-17 09:36:54.102777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.299 [2024-11-17 09:36:54.102811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.299 qpair failed and we were unable to recover it. 00:36:49.299 [2024-11-17 09:36:54.103003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.299 [2024-11-17 09:36:54.103037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.299 qpair failed and we were unable to recover it. 00:36:49.299 [2024-11-17 09:36:54.103172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.299 [2024-11-17 09:36:54.103206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.299 qpair failed and we were unable to recover it. 00:36:49.299 [2024-11-17 09:36:54.103323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.300 [2024-11-17 09:36:54.103359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.300 qpair failed and we were unable to recover it. 00:36:49.300 [2024-11-17 09:36:54.103535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.300 [2024-11-17 09:36:54.103569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.300 qpair failed and we were unable to recover it. 00:36:49.300 [2024-11-17 09:36:54.103730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.300 [2024-11-17 09:36:54.103763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.300 qpair failed and we were unable to recover it. 00:36:49.300 [2024-11-17 09:36:54.103925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.300 [2024-11-17 09:36:54.103958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.300 qpair failed and we were unable to recover it. 00:36:49.300 [2024-11-17 09:36:54.104098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.300 [2024-11-17 09:36:54.104132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.300 qpair failed and we were unable to recover it. 
00:36:49.300 [2024-11-17 09:36:54.104245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.300 [2024-11-17 09:36:54.104279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.300 qpair failed and we were unable to recover it. 00:36:49.300 [2024-11-17 09:36:54.104393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.300 [2024-11-17 09:36:54.104452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.300 qpair failed and we were unable to recover it. 00:36:49.300 [2024-11-17 09:36:54.104601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.300 [2024-11-17 09:36:54.104638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.300 qpair failed and we were unable to recover it. 00:36:49.300 [2024-11-17 09:36:54.104783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.300 [2024-11-17 09:36:54.104818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.300 qpair failed and we were unable to recover it. 00:36:49.300 [2024-11-17 09:36:54.104948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.300 [2024-11-17 09:36:54.104981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.300 qpair failed and we were unable to recover it. 00:36:49.300 [2024-11-17 09:36:54.105139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.300 [2024-11-17 09:36:54.105173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.300 qpair failed and we were unable to recover it. 00:36:49.300 [2024-11-17 09:36:54.105279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.300 [2024-11-17 09:36:54.105314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.300 qpair failed and we were unable to recover it. 00:36:49.300 [2024-11-17 09:36:54.105476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.300 [2024-11-17 09:36:54.105524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.300 qpair failed and we were unable to recover it. 00:36:49.300 [2024-11-17 09:36:54.105685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.300 [2024-11-17 09:36:54.105721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.300 qpair failed and we were unable to recover it. 00:36:49.300 [2024-11-17 09:36:54.105845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.300 [2024-11-17 09:36:54.105879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.300 qpair failed and we were unable to recover it. 
00:36:49.300 [2024-11-17 09:36:54.106012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.300 [2024-11-17 09:36:54.106045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.300 qpair failed and we were unable to recover it. 00:36:49.300 [2024-11-17 09:36:54.106183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.300 [2024-11-17 09:36:54.106218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.300 qpair failed and we were unable to recover it. 00:36:49.300 [2024-11-17 09:36:54.106350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.300 [2024-11-17 09:36:54.106398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.300 qpair failed and we were unable to recover it. 00:36:49.300 [2024-11-17 09:36:54.106539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.300 [2024-11-17 09:36:54.106587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.300 qpair failed and we were unable to recover it. 00:36:49.300 [2024-11-17 09:36:54.106754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.300 [2024-11-17 09:36:54.106790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.300 qpair failed and we were unable to recover it. 00:36:49.300 [2024-11-17 09:36:54.106915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.300 [2024-11-17 09:36:54.106949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.300 qpair failed and we were unable to recover it. 00:36:49.300 [2024-11-17 09:36:54.107052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.300 [2024-11-17 09:36:54.107085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.300 qpair failed and we were unable to recover it. 00:36:49.300 [2024-11-17 09:36:54.107267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.300 [2024-11-17 09:36:54.107304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.300 qpair failed and we were unable to recover it. 00:36:49.300 [2024-11-17 09:36:54.107452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.300 [2024-11-17 09:36:54.107487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.300 qpair failed and we were unable to recover it. 00:36:49.300 [2024-11-17 09:36:54.107641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.300 [2024-11-17 09:36:54.107674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.300 qpair failed and we were unable to recover it. 
00:36:49.300 [2024-11-17 09:36:54.107808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.300 [2024-11-17 09:36:54.107841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.300 qpair failed and we were unable to recover it. 00:36:49.300 [2024-11-17 09:36:54.107969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.300 [2024-11-17 09:36:54.108002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.300 qpair failed and we were unable to recover it. 00:36:49.300 [2024-11-17 09:36:54.108163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.300 [2024-11-17 09:36:54.108210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.300 qpair failed and we were unable to recover it. 00:36:49.300 [2024-11-17 09:36:54.108327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.300 [2024-11-17 09:36:54.108381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.300 qpair failed and we were unable to recover it. 00:36:49.300 [2024-11-17 09:36:54.108520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.300 [2024-11-17 09:36:54.108555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.300 qpair failed and we were unable to recover it. 00:36:49.300 [2024-11-17 09:36:54.108692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.300 [2024-11-17 09:36:54.108727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.300 qpair failed and we were unable to recover it. 00:36:49.300 [2024-11-17 09:36:54.108892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.300 [2024-11-17 09:36:54.108926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.300 qpair failed and we were unable to recover it. 00:36:49.300 [2024-11-17 09:36:54.109057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.300 [2024-11-17 09:36:54.109091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.300 qpair failed and we were unable to recover it. 00:36:49.300 [2024-11-17 09:36:54.109236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.300 [2024-11-17 09:36:54.109271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.300 qpair failed and we were unable to recover it. 00:36:49.300 [2024-11-17 09:36:54.109385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.300 [2024-11-17 09:36:54.109419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.301 qpair failed and we were unable to recover it. 
00:36:49.301 [2024-11-17 09:36:54.109564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.301 [2024-11-17 09:36:54.109600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.301 qpair failed and we were unable to recover it. 00:36:49.301 [2024-11-17 09:36:54.109737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.301 [2024-11-17 09:36:54.109771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.301 qpair failed and we were unable to recover it. 00:36:49.301 [2024-11-17 09:36:54.109905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.301 [2024-11-17 09:36:54.109940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.301 qpair failed and we were unable to recover it. 00:36:49.301 [2024-11-17 09:36:54.110082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.301 [2024-11-17 09:36:54.110116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.301 qpair failed and we were unable to recover it. 00:36:49.301 [2024-11-17 09:36:54.110246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.301 [2024-11-17 09:36:54.110280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.301 qpair failed and we were unable to recover it. 00:36:49.301 [2024-11-17 09:36:54.110415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.301 [2024-11-17 09:36:54.110449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.301 qpair failed and we were unable to recover it. 00:36:49.301 [2024-11-17 09:36:54.110609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.301 [2024-11-17 09:36:54.110643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.301 qpair failed and we were unable to recover it. 00:36:49.301 [2024-11-17 09:36:54.110805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.301 [2024-11-17 09:36:54.110839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.301 qpair failed and we were unable to recover it. 00:36:49.301 [2024-11-17 09:36:54.111000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.301 [2024-11-17 09:36:54.111033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.301 qpair failed and we were unable to recover it. 00:36:49.301 [2024-11-17 09:36:54.111145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.301 [2024-11-17 09:36:54.111179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.301 qpair failed and we were unable to recover it. 
00:36:49.301 [2024-11-17 09:36:54.111350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.301 [2024-11-17 09:36:54.111390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.301 qpair failed and we were unable to recover it. 00:36:49.301 [2024-11-17 09:36:54.111498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.301 [2024-11-17 09:36:54.111532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.301 qpair failed and we were unable to recover it. 00:36:49.301 [2024-11-17 09:36:54.111692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.301 [2024-11-17 09:36:54.111728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.301 qpair failed and we were unable to recover it. 00:36:49.301 [2024-11-17 09:36:54.111861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.301 [2024-11-17 09:36:54.111894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.301 qpair failed and we were unable to recover it. 00:36:49.301 [2024-11-17 09:36:54.112026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.301 [2024-11-17 09:36:54.112060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.301 qpair failed and we were unable to recover it. 00:36:49.301 [2024-11-17 09:36:54.112196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.301 [2024-11-17 09:36:54.112230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.301 qpair failed and we were unable to recover it. 00:36:49.301 [2024-11-17 09:36:54.112380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.301 [2024-11-17 09:36:54.112414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.301 qpair failed and we were unable to recover it. 00:36:49.301 [2024-11-17 09:36:54.112551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.301 [2024-11-17 09:36:54.112585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.301 qpair failed and we were unable to recover it. 00:36:49.301 [2024-11-17 09:36:54.112716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.301 [2024-11-17 09:36:54.112750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.301 qpair failed and we were unable to recover it. 00:36:49.301 [2024-11-17 09:36:54.112885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.301 [2024-11-17 09:36:54.112920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.301 qpair failed and we were unable to recover it. 
00:36:49.301 [2024-11-17 09:36:54.113049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.301 [2024-11-17 09:36:54.113083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.301 qpair failed and we were unable to recover it. 00:36:49.301 [2024-11-17 09:36:54.113211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.301 [2024-11-17 09:36:54.113244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.301 qpair failed and we were unable to recover it. 00:36:49.301 [2024-11-17 09:36:54.113355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.301 [2024-11-17 09:36:54.113403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.301 qpair failed and we were unable to recover it. 00:36:49.301 [2024-11-17 09:36:54.113536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.301 [2024-11-17 09:36:54.113570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.301 qpair failed and we were unable to recover it. 00:36:49.301 [2024-11-17 09:36:54.113694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.301 [2024-11-17 09:36:54.113743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.301 qpair failed and we were unable to recover it. 00:36:49.301 [2024-11-17 09:36:54.113889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.301 [2024-11-17 09:36:54.113925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.301 qpair failed and we were unable to recover it. 00:36:49.301 [2024-11-17 09:36:54.114062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.302 [2024-11-17 09:36:54.114096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.302 qpair failed and we were unable to recover it. 00:36:49.302 [2024-11-17 09:36:54.114233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.302 [2024-11-17 09:36:54.114266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.302 qpair failed and we were unable to recover it. 00:36:49.302 [2024-11-17 09:36:54.114405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.302 [2024-11-17 09:36:54.114439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.302 qpair failed and we were unable to recover it. 00:36:49.302 [2024-11-17 09:36:54.114578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.302 [2024-11-17 09:36:54.114612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.302 qpair failed and we were unable to recover it. 
00:36:49.302 [2024-11-17 09:36:54.114747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.302 [2024-11-17 09:36:54.114782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.302 qpair failed and we were unable to recover it. 00:36:49.302 [2024-11-17 09:36:54.114916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.302 [2024-11-17 09:36:54.114950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.302 qpair failed and we were unable to recover it. 00:36:49.302 [2024-11-17 09:36:54.115087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.302 [2024-11-17 09:36:54.115121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.302 qpair failed and we were unable to recover it. 00:36:49.302 [2024-11-17 09:36:54.115256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.302 [2024-11-17 09:36:54.115290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.302 qpair failed and we were unable to recover it. 00:36:49.302 [2024-11-17 09:36:54.115451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.302 [2024-11-17 09:36:54.115485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.302 qpair failed and we were unable to recover it. 00:36:49.302 [2024-11-17 09:36:54.115646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.302 [2024-11-17 09:36:54.115679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.302 qpair failed and we were unable to recover it. 00:36:49.302 [2024-11-17 09:36:54.115823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.302 [2024-11-17 09:36:54.115857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.302 qpair failed and we were unable to recover it. 00:36:49.302 [2024-11-17 09:36:54.115957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.302 [2024-11-17 09:36:54.115990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.302 qpair failed and we were unable to recover it. 00:36:49.302 [2024-11-17 09:36:54.116156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.302 [2024-11-17 09:36:54.116189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.302 qpair failed and we were unable to recover it. 00:36:49.302 [2024-11-17 09:36:54.116327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.302 [2024-11-17 09:36:54.116361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.302 qpair failed and we were unable to recover it. 
00:36:49.302 [2024-11-17 09:36:54.116552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.302 [2024-11-17 09:36:54.116600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.302 qpair failed and we were unable to recover it. 00:36:49.302 [2024-11-17 09:36:54.116747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.302 [2024-11-17 09:36:54.116791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.302 qpair failed and we were unable to recover it. 00:36:49.302 [2024-11-17 09:36:54.116932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.302 [2024-11-17 09:36:54.116968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.302 qpair failed and we were unable to recover it. 00:36:49.302 [2024-11-17 09:36:54.117106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.302 [2024-11-17 09:36:54.117140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.302 qpair failed and we were unable to recover it. 00:36:49.302 [2024-11-17 09:36:54.117326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.302 [2024-11-17 09:36:54.117364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.302 qpair failed and we were unable to recover it. 00:36:49.302 [2024-11-17 09:36:54.117553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.302 [2024-11-17 09:36:54.117586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.302 qpair failed and we were unable to recover it. 00:36:49.302 [2024-11-17 09:36:54.117721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.302 [2024-11-17 09:36:54.117756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.302 qpair failed and we were unable to recover it. 00:36:49.302 [2024-11-17 09:36:54.117925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.302 [2024-11-17 09:36:54.117958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.302 qpair failed and we were unable to recover it. 00:36:49.302 [2024-11-17 09:36:54.118086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.302 [2024-11-17 09:36:54.118120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.302 qpair failed and we were unable to recover it. 00:36:49.302 [2024-11-17 09:36:54.118251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.302 [2024-11-17 09:36:54.118285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.302 qpair failed and we were unable to recover it. 
00:36:49.302 [2024-11-17 09:36:54.118411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.302 [2024-11-17 09:36:54.118445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.302 qpair failed and we were unable to recover it. 00:36:49.302 [2024-11-17 09:36:54.118599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.302 [2024-11-17 09:36:54.118648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.302 qpair failed and we were unable to recover it. 00:36:49.302 [2024-11-17 09:36:54.118819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.302 [2024-11-17 09:36:54.118855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.302 qpair failed and we were unable to recover it. 00:36:49.302 [2024-11-17 09:36:54.118968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.302 [2024-11-17 09:36:54.119004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.302 qpair failed and we were unable to recover it. 00:36:49.302 [2024-11-17 09:36:54.119177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.302 [2024-11-17 09:36:54.119211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.302 qpair failed and we were unable to recover it. 00:36:49.302 [2024-11-17 09:36:54.119363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.302 [2024-11-17 09:36:54.119419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.302 qpair failed and we were unable to recover it. 00:36:49.302 [2024-11-17 09:36:54.119563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.302 [2024-11-17 09:36:54.119599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.302 qpair failed and we were unable to recover it. 00:36:49.302 [2024-11-17 09:36:54.119757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.302 [2024-11-17 09:36:54.119793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.302 qpair failed and we were unable to recover it. 00:36:49.302 [2024-11-17 09:36:54.119953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.302 [2024-11-17 09:36:54.119987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.302 qpair failed and we were unable to recover it. 00:36:49.302 [2024-11-17 09:36:54.120132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.303 [2024-11-17 09:36:54.120165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.303 qpair failed and we were unable to recover it. 
00:36:49.303 [2024-11-17 09:36:54.120294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.303 [2024-11-17 09:36:54.120342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.303 qpair failed and we were unable to recover it. 00:36:49.303 [2024-11-17 09:36:54.120487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.303 [2024-11-17 09:36:54.120535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.303 qpair failed and we were unable to recover it. 00:36:49.303 [2024-11-17 09:36:54.120681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.303 [2024-11-17 09:36:54.120724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.303 qpair failed and we were unable to recover it. 00:36:49.303 [2024-11-17 09:36:54.120867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.303 [2024-11-17 09:36:54.120902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.303 qpair failed and we were unable to recover it. 00:36:49.303 [2024-11-17 09:36:54.121038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.303 [2024-11-17 09:36:54.121073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.303 qpair failed and we were unable to recover it. 00:36:49.303 [2024-11-17 09:36:54.121212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.303 [2024-11-17 09:36:54.121246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.303 qpair failed and we were unable to recover it. 00:36:49.303 [2024-11-17 09:36:54.121381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.303 [2024-11-17 09:36:54.121415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.303 qpair failed and we were unable to recover it. 00:36:49.303 [2024-11-17 09:36:54.121554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.303 [2024-11-17 09:36:54.121588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.303 qpair failed and we were unable to recover it. 00:36:49.303 [2024-11-17 09:36:54.121746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.303 [2024-11-17 09:36:54.121780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.303 qpair failed and we were unable to recover it. 00:36:49.303 [2024-11-17 09:36:54.121916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.303 [2024-11-17 09:36:54.121951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.303 qpair failed and we were unable to recover it. 
00:36:49.303 [2024-11-17 09:36:54.122117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.303 [2024-11-17 09:36:54.122150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.303 qpair failed and we were unable to recover it. 00:36:49.303 [2024-11-17 09:36:54.122275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.303 [2024-11-17 09:36:54.122314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.303 qpair failed and we were unable to recover it. 00:36:49.303 [2024-11-17 09:36:54.122499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.303 [2024-11-17 09:36:54.122547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.303 qpair failed and we were unable to recover it. 00:36:49.303 [2024-11-17 09:36:54.122700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.303 [2024-11-17 09:36:54.122736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.303 qpair failed and we were unable to recover it. 00:36:49.303 [2024-11-17 09:36:54.122840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.303 [2024-11-17 09:36:54.122874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.303 qpair failed and we were unable to recover it. 00:36:49.303 [2024-11-17 09:36:54.123003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.303 [2024-11-17 09:36:54.123037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.303 qpair failed and we were unable to recover it. 00:36:49.303 [2024-11-17 09:36:54.123173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.303 [2024-11-17 09:36:54.123206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.303 qpair failed and we were unable to recover it. 00:36:49.303 [2024-11-17 09:36:54.123308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.303 [2024-11-17 09:36:54.123341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.303 qpair failed and we were unable to recover it. 00:36:49.303 [2024-11-17 09:36:54.123485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.303 [2024-11-17 09:36:54.123521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.303 qpair failed and we were unable to recover it. 00:36:49.303 [2024-11-17 09:36:54.123650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.303 [2024-11-17 09:36:54.123684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.303 qpair failed and we were unable to recover it. 
00:36:49.303 [2024-11-17 09:36:54.123825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.303 [2024-11-17 09:36:54.123859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.303 qpair failed and we were unable to recover it. 00:36:49.303 [2024-11-17 09:36:54.123998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.303 [2024-11-17 09:36:54.124033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.303 qpair failed and we were unable to recover it. 00:36:49.303 [2024-11-17 09:36:54.124191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.303 [2024-11-17 09:36:54.124224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.303 qpair failed and we were unable to recover it. 00:36:49.303 [2024-11-17 09:36:54.124327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.303 [2024-11-17 09:36:54.124360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.303 qpair failed and we were unable to recover it. 00:36:49.303 [2024-11-17 09:36:54.124464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.303 [2024-11-17 09:36:54.124497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.303 qpair failed and we were unable to recover it. 00:36:49.303 [2024-11-17 09:36:54.124639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.303 [2024-11-17 09:36:54.124687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.303 qpair failed and we were unable to recover it. 00:36:49.303 [2024-11-17 09:36:54.124831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.303 [2024-11-17 09:36:54.124867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.303 qpair failed and we were unable to recover it. 00:36:49.303 [2024-11-17 09:36:54.125001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.303 [2024-11-17 09:36:54.125036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.303 qpair failed and we were unable to recover it. 00:36:49.303 [2024-11-17 09:36:54.125131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.303 [2024-11-17 09:36:54.125166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.303 qpair failed and we were unable to recover it. 00:36:49.303 [2024-11-17 09:36:54.125318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.303 [2024-11-17 09:36:54.125365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.303 qpair failed and we were unable to recover it. 
00:36:49.303 [2024-11-17 09:36:54.125500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.303 [2024-11-17 09:36:54.125536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.303 qpair failed and we were unable to recover it. 00:36:49.303 [2024-11-17 09:36:54.125673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.303 [2024-11-17 09:36:54.125707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.303 qpair failed and we were unable to recover it. 00:36:49.303 [2024-11-17 09:36:54.125844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.303 [2024-11-17 09:36:54.125878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.303 qpair failed and we were unable to recover it. 00:36:49.303 [2024-11-17 09:36:54.126023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.303 [2024-11-17 09:36:54.126056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.303 qpair failed and we were unable to recover it. 00:36:49.303 [2024-11-17 09:36:54.126187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.304 [2024-11-17 09:36:54.126220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.304 qpair failed and we were unable to recover it. 00:36:49.304 [2024-11-17 09:36:54.126357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.304 [2024-11-17 09:36:54.126399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.304 qpair failed and we were unable to recover it. 00:36:49.304 [2024-11-17 09:36:54.126537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.304 [2024-11-17 09:36:54.126571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.304 qpair failed and we were unable to recover it. 00:36:49.304 [2024-11-17 09:36:54.126711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.304 [2024-11-17 09:36:54.126746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.304 qpair failed and we were unable to recover it. 00:36:49.304 [2024-11-17 09:36:54.126849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.304 [2024-11-17 09:36:54.126884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.304 qpair failed and we were unable to recover it. 00:36:49.304 [2024-11-17 09:36:54.127058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.304 [2024-11-17 09:36:54.127095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.304 qpair failed and we were unable to recover it. 
00:36:49.304 [2024-11-17 09:36:54.127207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.304 [2024-11-17 09:36:54.127247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.304 qpair failed and we were unable to recover it. 00:36:49.304 [2024-11-17 09:36:54.127445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.304 [2024-11-17 09:36:54.127482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.304 qpair failed and we were unable to recover it. 00:36:49.304 [2024-11-17 09:36:54.127624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.304 [2024-11-17 09:36:54.127673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.304 qpair failed and we were unable to recover it. 00:36:49.304 [2024-11-17 09:36:54.127812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.304 [2024-11-17 09:36:54.127846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.304 qpair failed and we were unable to recover it. 00:36:49.304 [2024-11-17 09:36:54.127949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.304 [2024-11-17 09:36:54.127984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.304 qpair failed and we were unable to recover it. 00:36:49.304 [2024-11-17 09:36:54.128169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.304 [2024-11-17 09:36:54.128206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.304 qpair failed and we were unable to recover it. 00:36:49.304 [2024-11-17 09:36:54.128356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.304 [2024-11-17 09:36:54.128416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.304 qpair failed and we were unable to recover it. 00:36:49.304 [2024-11-17 09:36:54.128554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.304 [2024-11-17 09:36:54.128590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.304 qpair failed and we were unable to recover it. 00:36:49.304 [2024-11-17 09:36:54.128725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.304 [2024-11-17 09:36:54.128758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.304 qpair failed and we were unable to recover it. 00:36:49.304 [2024-11-17 09:36:54.128861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.304 [2024-11-17 09:36:54.128912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.304 qpair failed and we were unable to recover it. 
00:36:49.304 [2024-11-17 09:36:54.129151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.304 [2024-11-17 09:36:54.129189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.304 qpair failed and we were unable to recover it. 00:36:49.304 [2024-11-17 09:36:54.129380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.304 [2024-11-17 09:36:54.129415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.304 qpair failed and we were unable to recover it. 00:36:49.304 [2024-11-17 09:36:54.129566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.304 [2024-11-17 09:36:54.129614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.304 qpair failed and we were unable to recover it. 00:36:49.304 [2024-11-17 09:36:54.129815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.304 [2024-11-17 09:36:54.129868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.304 qpair failed and we were unable to recover it. 00:36:49.304 [2024-11-17 09:36:54.130017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.304 [2024-11-17 09:36:54.130056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.304 qpair failed and we were unable to recover it. 00:36:49.304 [2024-11-17 09:36:54.130217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.304 [2024-11-17 09:36:54.130255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.304 qpair failed and we were unable to recover it. 00:36:49.304 [2024-11-17 09:36:54.130421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.304 [2024-11-17 09:36:54.130455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.304 qpair failed and we were unable to recover it. 00:36:49.304 [2024-11-17 09:36:54.130580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.304 [2024-11-17 09:36:54.130618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.304 qpair failed and we were unable to recover it. 00:36:49.304 [2024-11-17 09:36:54.130798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.304 [2024-11-17 09:36:54.130835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.304 qpair failed and we were unable to recover it. 00:36:49.304 [2024-11-17 09:36:54.130970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.304 [2024-11-17 09:36:54.131007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.304 qpair failed and we were unable to recover it. 
00:36:49.304 [2024-11-17 09:36:54.131122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.304 [2024-11-17 09:36:54.131159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.304 qpair failed and we were unable to recover it. 00:36:49.304 [2024-11-17 09:36:54.131309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.304 [2024-11-17 09:36:54.131346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.304 qpair failed and we were unable to recover it. 00:36:49.304 [2024-11-17 09:36:54.131531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.304 [2024-11-17 09:36:54.131566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.304 qpair failed and we were unable to recover it. 00:36:49.304 [2024-11-17 09:36:54.131730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.304 [2024-11-17 09:36:54.131763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.304 qpair failed and we were unable to recover it. 00:36:49.304 [2024-11-17 09:36:54.131895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.304 [2024-11-17 09:36:54.131928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.304 qpair failed and we were unable to recover it. 00:36:49.304 [2024-11-17 09:36:54.132063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.304 [2024-11-17 09:36:54.132096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.304 qpair failed and we were unable to recover it. 00:36:49.304 [2024-11-17 09:36:54.132210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.304 [2024-11-17 09:36:54.132247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.304 qpair failed and we were unable to recover it. 00:36:49.304 [2024-11-17 09:36:54.132360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.304 [2024-11-17 09:36:54.132420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.304 qpair failed and we were unable to recover it. 00:36:49.304 [2024-11-17 09:36:54.132523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.304 [2024-11-17 09:36:54.132556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.304 qpair failed and we were unable to recover it. 00:36:49.304 [2024-11-17 09:36:54.132702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.304 [2024-11-17 09:36:54.132746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.304 qpair failed and we were unable to recover it. 
00:36:49.304 [2024-11-17 09:36:54.132925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.304 [2024-11-17 09:36:54.132983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.304 qpair failed and we were unable to recover it. 00:36:49.304 [2024-11-17 09:36:54.133142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.305 [2024-11-17 09:36:54.133190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.305 qpair failed and we were unable to recover it. 00:36:49.305 [2024-11-17 09:36:54.133340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.305 [2024-11-17 09:36:54.133383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.305 qpair failed and we were unable to recover it. 00:36:49.305 [2024-11-17 09:36:54.133523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.305 [2024-11-17 09:36:54.133556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.305 qpair failed and we were unable to recover it. 00:36:49.305 [2024-11-17 09:36:54.133694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.305 [2024-11-17 09:36:54.133727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.305 qpair failed and we were unable to recover it. 00:36:49.305 [2024-11-17 09:36:54.133890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.305 [2024-11-17 09:36:54.133923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.305 qpair failed and we were unable to recover it. 00:36:49.305 [2024-11-17 09:36:54.134074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.305 [2024-11-17 09:36:54.134139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.305 qpair failed and we were unable to recover it. 00:36:49.305 [2024-11-17 09:36:54.134319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.305 [2024-11-17 09:36:54.134356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.305 qpair failed and we were unable to recover it. 00:36:49.305 [2024-11-17 09:36:54.134511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.305 [2024-11-17 09:36:54.134549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.305 qpair failed and we were unable to recover it. 00:36:49.305 [2024-11-17 09:36:54.134714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.305 [2024-11-17 09:36:54.134748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.305 qpair failed and we were unable to recover it. 
00:36:49.305 [2024-11-17 09:36:54.134956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.305 [2024-11-17 09:36:54.135020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.305 qpair failed and we were unable to recover it. 00:36:49.305 [2024-11-17 09:36:54.135281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.305 [2024-11-17 09:36:54.135335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.305 qpair failed and we were unable to recover it. 00:36:49.305 [2024-11-17 09:36:54.135504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.305 [2024-11-17 09:36:54.135543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.305 qpair failed and we were unable to recover it. 00:36:49.305 [2024-11-17 09:36:54.135652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.305 [2024-11-17 09:36:54.135686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.305 qpair failed and we were unable to recover it. 00:36:49.305 [2024-11-17 09:36:54.135819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.305 [2024-11-17 09:36:54.135852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.305 qpair failed and we were unable to recover it. 00:36:49.305 [2024-11-17 09:36:54.135988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.305 [2024-11-17 09:36:54.136023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.305 qpair failed and we were unable to recover it. 00:36:49.305 [2024-11-17 09:36:54.136149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.305 [2024-11-17 09:36:54.136186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.305 qpair failed and we were unable to recover it. 00:36:49.305 [2024-11-17 09:36:54.136373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.305 [2024-11-17 09:36:54.136434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.305 qpair failed and we were unable to recover it. 00:36:49.305 [2024-11-17 09:36:54.136594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.305 [2024-11-17 09:36:54.136641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.305 qpair failed and we were unable to recover it. 00:36:49.305 [2024-11-17 09:36:54.136814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.305 [2024-11-17 09:36:54.136855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.305 qpair failed and we were unable to recover it. 
00:36:49.305 [2024-11-17 09:36:54.137104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.305 [2024-11-17 09:36:54.137144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.305 qpair failed and we were unable to recover it. 00:36:49.305 [2024-11-17 09:36:54.137291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.305 [2024-11-17 09:36:54.137330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.305 qpair failed and we were unable to recover it. 00:36:49.305 [2024-11-17 09:36:54.137494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.305 [2024-11-17 09:36:54.137529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.305 qpair failed and we were unable to recover it. 00:36:49.305 [2024-11-17 09:36:54.137643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.305 [2024-11-17 09:36:54.137678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.305 qpair failed and we were unable to recover it. 00:36:49.305 [2024-11-17 09:36:54.137803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.305 [2024-11-17 09:36:54.137877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.305 qpair failed and we were unable to recover it. 00:36:49.305 [2024-11-17 09:36:54.138089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.305 [2024-11-17 09:36:54.138146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.305 qpair failed and we were unable to recover it. 00:36:49.305 [2024-11-17 09:36:54.138308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.305 [2024-11-17 09:36:54.138346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.305 qpair failed and we were unable to recover it. 00:36:49.305 [2024-11-17 09:36:54.138501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.305 [2024-11-17 09:36:54.138549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.305 qpair failed and we were unable to recover it. 00:36:49.305 [2024-11-17 09:36:54.138686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.305 [2024-11-17 09:36:54.138727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.305 qpair failed and we were unable to recover it. 00:36:49.305 [2024-11-17 09:36:54.138939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.305 [2024-11-17 09:36:54.139005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.305 qpair failed and we were unable to recover it. 
00:36:49.305 [2024-11-17 09:36:54.139127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.305 [2024-11-17 09:36:54.139165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.305 qpair failed and we were unable to recover it. 00:36:49.305 [2024-11-17 09:36:54.139302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.305 [2024-11-17 09:36:54.139336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.305 qpair failed and we were unable to recover it. 00:36:49.305 [2024-11-17 09:36:54.139508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.305 [2024-11-17 09:36:54.139555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.305 qpair failed and we were unable to recover it. 00:36:49.305 [2024-11-17 09:36:54.139702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.305 [2024-11-17 09:36:54.139737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.305 qpair failed and we were unable to recover it. 00:36:49.305 [2024-11-17 09:36:54.139869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.306 [2024-11-17 09:36:54.139906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.306 qpair failed and we were unable to recover it. 00:36:49.306 [2024-11-17 09:36:54.140079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.306 [2024-11-17 09:36:54.140117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.306 qpair failed and we were unable to recover it. 00:36:49.306 [2024-11-17 09:36:54.140219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.306 [2024-11-17 09:36:54.140256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.306 qpair failed and we were unable to recover it. 00:36:49.306 [2024-11-17 09:36:54.140415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.306 [2024-11-17 09:36:54.140450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.306 qpair failed and we were unable to recover it. 00:36:49.306 [2024-11-17 09:36:54.140565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.306 [2024-11-17 09:36:54.140600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.306 qpair failed and we were unable to recover it. 00:36:49.306 [2024-11-17 09:36:54.140746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.306 [2024-11-17 09:36:54.140782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.306 qpair failed and we were unable to recover it. 
00:36:49.306 [2024-11-17 09:36:54.140936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.306 [2024-11-17 09:36:54.140975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.306 qpair failed and we were unable to recover it. 00:36:49.306 [2024-11-17 09:36:54.141124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.306 [2024-11-17 09:36:54.141163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.306 qpair failed and we were unable to recover it. 00:36:49.306 [2024-11-17 09:36:54.141315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.306 [2024-11-17 09:36:54.141354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.306 qpair failed and we were unable to recover it. 00:36:49.306 [2024-11-17 09:36:54.141546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.306 [2024-11-17 09:36:54.141580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.306 qpair failed and we were unable to recover it. 00:36:49.306 [2024-11-17 09:36:54.141704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.306 [2024-11-17 09:36:54.141743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.306 qpair failed and we were unable to recover it. 00:36:49.306 [2024-11-17 09:36:54.141922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.306 [2024-11-17 09:36:54.141960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.306 qpair failed and we were unable to recover it. 00:36:49.306 [2024-11-17 09:36:54.142137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.306 [2024-11-17 09:36:54.142175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.306 qpair failed and we were unable to recover it. 00:36:49.306 [2024-11-17 09:36:54.142297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.306 [2024-11-17 09:36:54.142336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.306 qpair failed and we were unable to recover it. 00:36:49.306 [2024-11-17 09:36:54.142497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.306 [2024-11-17 09:36:54.142531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.306 qpair failed and we were unable to recover it. 00:36:49.306 [2024-11-17 09:36:54.142647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.306 [2024-11-17 09:36:54.142680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.306 qpair failed and we were unable to recover it. 
00:36:49.306 [2024-11-17 09:36:54.142817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.306 [2024-11-17 09:36:54.142870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.306 qpair failed and we were unable to recover it. 00:36:49.306 [2024-11-17 09:36:54.143432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.306 [2024-11-17 09:36:54.143471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.306 qpair failed and we were unable to recover it. 00:36:49.306 [2024-11-17 09:36:54.143618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.306 [2024-11-17 09:36:54.143658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.306 qpair failed and we were unable to recover it. 00:36:49.306 [2024-11-17 09:36:54.143775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.306 [2024-11-17 09:36:54.143808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.306 qpair failed and we were unable to recover it. 00:36:49.306 [2024-11-17 09:36:54.143955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.306 [2024-11-17 09:36:54.144004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.306 qpair failed and we were unable to recover it. 00:36:49.306 [2024-11-17 09:36:54.144173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.306 [2024-11-17 09:36:54.144213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.306 qpair failed and we were unable to recover it. 00:36:49.306 [2024-11-17 09:36:54.144402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.306 [2024-11-17 09:36:54.144437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.306 qpair failed and we were unable to recover it. 00:36:49.306 [2024-11-17 09:36:54.144543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.306 [2024-11-17 09:36:54.144577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.306 qpair failed and we were unable to recover it. 00:36:49.306 [2024-11-17 09:36:54.144709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.306 [2024-11-17 09:36:54.144742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.306 qpair failed and we were unable to recover it. 00:36:49.306 [2024-11-17 09:36:54.144875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.306 [2024-11-17 09:36:54.144909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.306 qpair failed and we were unable to recover it. 
00:36:49.306 [2024-11-17 09:36:54.145008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.306 [2024-11-17 09:36:54.145041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.306 qpair failed and we were unable to recover it. 00:36:49.306 [2024-11-17 09:36:54.145201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.306 [2024-11-17 09:36:54.145239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.306 qpair failed and we were unable to recover it. 00:36:49.306 [2024-11-17 09:36:54.145414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.306 [2024-11-17 09:36:54.145449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.306 qpair failed and we were unable to recover it. 00:36:49.306 [2024-11-17 09:36:54.145617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.306 [2024-11-17 09:36:54.145652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.306 qpair failed and we were unable to recover it. 00:36:49.307 [2024-11-17 09:36:54.145791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.307 [2024-11-17 09:36:54.145826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.307 qpair failed and we were unable to recover it. 00:36:49.307 [2024-11-17 09:36:54.145976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.307 [2024-11-17 09:36:54.146012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.307 qpair failed and we were unable to recover it. 00:36:49.307 [2024-11-17 09:36:54.146196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.307 [2024-11-17 09:36:54.146234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.307 qpair failed and we were unable to recover it. 00:36:49.307 [2024-11-17 09:36:54.146383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.307 [2024-11-17 09:36:54.146437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.307 qpair failed and we were unable to recover it. 00:36:49.307 [2024-11-17 09:36:54.146567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.307 [2024-11-17 09:36:54.146600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.307 qpair failed and we were unable to recover it. 00:36:49.307 [2024-11-17 09:36:54.146711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.307 [2024-11-17 09:36:54.146745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.307 qpair failed and we were unable to recover it. 
00:36:49.307 [2024-11-17 09:36:54.146885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.307 [2024-11-17 09:36:54.146922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.307 qpair failed and we were unable to recover it. 00:36:49.307 [2024-11-17 09:36:54.147058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.307 [2024-11-17 09:36:54.147095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.307 qpair failed and we were unable to recover it. 00:36:49.307 [2024-11-17 09:36:54.147239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.307 [2024-11-17 09:36:54.147276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.307 qpair failed and we were unable to recover it. 00:36:49.307 [2024-11-17 09:36:54.147435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.307 [2024-11-17 09:36:54.147469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.307 qpair failed and we were unable to recover it. 00:36:49.307 [2024-11-17 09:36:54.147611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.307 [2024-11-17 09:36:54.147644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.307 qpair failed and we were unable to recover it. 00:36:49.307 [2024-11-17 09:36:54.147767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.307 [2024-11-17 09:36:54.147818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.307 qpair failed and we were unable to recover it. 00:36:49.307 [2024-11-17 09:36:54.148024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.307 [2024-11-17 09:36:54.148061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.307 qpair failed and we were unable to recover it. 00:36:49.307 [2024-11-17 09:36:54.148207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.307 [2024-11-17 09:36:54.148244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.307 qpair failed and we were unable to recover it. 00:36:49.307 [2024-11-17 09:36:54.148397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.307 [2024-11-17 09:36:54.148447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.307 qpair failed and we were unable to recover it. 00:36:49.307 [2024-11-17 09:36:54.148576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.307 [2024-11-17 09:36:54.148620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.307 qpair failed and we were unable to recover it. 
00:36:49.307 [2024-11-17 09:36:54.148803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.307 [2024-11-17 09:36:54.148862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.307 qpair failed and we were unable to recover it. 00:36:49.307 [2024-11-17 09:36:54.149055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.307 [2024-11-17 09:36:54.149118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.307 qpair failed and we were unable to recover it. 00:36:49.307 [2024-11-17 09:36:54.149295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.307 [2024-11-17 09:36:54.149334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.307 qpair failed and we were unable to recover it. 00:36:49.307 [2024-11-17 09:36:54.149518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.307 [2024-11-17 09:36:54.149568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.307 qpair failed and we were unable to recover it. 00:36:49.307 [2024-11-17 09:36:54.149714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.307 [2024-11-17 09:36:54.149751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.307 qpair failed and we were unable to recover it. 00:36:49.307 [2024-11-17 09:36:54.149898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.307 [2024-11-17 09:36:54.149933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.307 qpair failed and we were unable to recover it. 00:36:49.307 [2024-11-17 09:36:54.150069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.307 [2024-11-17 09:36:54.150103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.307 qpair failed and we were unable to recover it. 00:36:49.307 [2024-11-17 09:36:54.150231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.307 [2024-11-17 09:36:54.150282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.307 qpair failed and we were unable to recover it. 00:36:49.307 [2024-11-17 09:36:54.150402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.307 [2024-11-17 09:36:54.150437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.307 qpair failed and we were unable to recover it. 00:36:49.307 [2024-11-17 09:36:54.150576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.307 [2024-11-17 09:36:54.150610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.307 qpair failed and we were unable to recover it. 
00:36:49.307 [2024-11-17 09:36:54.150716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.307 [2024-11-17 09:36:54.150751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.307 qpair failed and we were unable to recover it. 00:36:49.307 [2024-11-17 09:36:54.150884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.307 [2024-11-17 09:36:54.150918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.307 qpair failed and we were unable to recover it. 00:36:49.307 [2024-11-17 09:36:54.151056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.307 [2024-11-17 09:36:54.151095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.307 qpair failed and we were unable to recover it. 00:36:49.307 [2024-11-17 09:36:54.151276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.307 [2024-11-17 09:36:54.151309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.307 qpair failed and we were unable to recover it. 00:36:49.307 [2024-11-17 09:36:54.151451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.307 [2024-11-17 09:36:54.151500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.307 qpair failed and we were unable to recover it. 00:36:49.307 [2024-11-17 09:36:54.151668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.307 [2024-11-17 09:36:54.151704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.307 qpair failed and we were unable to recover it. 00:36:49.307 [2024-11-17 09:36:54.151863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.307 [2024-11-17 09:36:54.151900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.308 qpair failed and we were unable to recover it. 00:36:49.308 [2024-11-17 09:36:54.152020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.308 [2024-11-17 09:36:54.152057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.308 qpair failed and we were unable to recover it. 00:36:49.308 [2024-11-17 09:36:54.152223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.308 [2024-11-17 09:36:54.152300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.308 qpair failed and we were unable to recover it. 00:36:49.308 [2024-11-17 09:36:54.152479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.308 [2024-11-17 09:36:54.152516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.308 qpair failed and we were unable to recover it. 
00:36:49.308 [2024-11-17 09:36:54.152619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.308 [2024-11-17 09:36:54.152654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.308 qpair failed and we were unable to recover it. 00:36:49.308 [2024-11-17 09:36:54.152815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.308 [2024-11-17 09:36:54.152849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.308 qpair failed and we were unable to recover it. 00:36:49.308 [2024-11-17 09:36:54.153045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.308 [2024-11-17 09:36:54.153079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.308 qpair failed and we were unable to recover it. 00:36:49.308 [2024-11-17 09:36:54.153232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.308 [2024-11-17 09:36:54.153269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.308 qpair failed and we were unable to recover it. 00:36:49.308 [2024-11-17 09:36:54.153432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.308 [2024-11-17 09:36:54.153468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.308 qpair failed and we were unable to recover it. 00:36:49.308 [2024-11-17 09:36:54.153598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.308 [2024-11-17 09:36:54.153631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.308 qpair failed and we were unable to recover it. 00:36:49.308 [2024-11-17 09:36:54.153766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.308 [2024-11-17 09:36:54.153800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.308 qpair failed and we were unable to recover it. 00:36:49.308 [2024-11-17 09:36:54.153933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.308 [2024-11-17 09:36:54.153985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.308 qpair failed and we were unable to recover it. 00:36:49.308 [2024-11-17 09:36:54.154129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.308 [2024-11-17 09:36:54.154167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.308 qpair failed and we were unable to recover it. 00:36:49.308 [2024-11-17 09:36:54.154316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.308 [2024-11-17 09:36:54.154357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.308 qpair failed and we were unable to recover it. 
00:36:49.308 [2024-11-17 09:36:54.154561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.308 [2024-11-17 09:36:54.154595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.308 qpair failed and we were unable to recover it. 00:36:49.308 [2024-11-17 09:36:54.154755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.308 [2024-11-17 09:36:54.154789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.308 qpair failed and we were unable to recover it. 00:36:49.308 [2024-11-17 09:36:54.154960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.308 [2024-11-17 09:36:54.154998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.308 qpair failed and we were unable to recover it. 00:36:49.308 [2024-11-17 09:36:54.155127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.308 [2024-11-17 09:36:54.155161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.308 qpair failed and we were unable to recover it. 00:36:49.308 [2024-11-17 09:36:54.155357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.308 [2024-11-17 09:36:54.155396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.308 qpair failed and we were unable to recover it. 00:36:49.308 [2024-11-17 09:36:54.155509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.308 [2024-11-17 09:36:54.155544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.308 qpair failed and we were unable to recover it. 00:36:49.308 [2024-11-17 09:36:54.155724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.308 [2024-11-17 09:36:54.155772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.308 qpair failed and we were unable to recover it. 00:36:49.308 [2024-11-17 09:36:54.155903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.308 [2024-11-17 09:36:54.155942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.308 qpair failed and we were unable to recover it. 00:36:49.308 [2024-11-17 09:36:54.156090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.308 [2024-11-17 09:36:54.156129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.308 qpair failed and we were unable to recover it. 00:36:49.308 [2024-11-17 09:36:54.156282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.308 [2024-11-17 09:36:54.156320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.308 qpair failed and we were unable to recover it. 
00:36:49.308 [2024-11-17 09:36:54.156478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.308 [2024-11-17 09:36:54.156512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.308 qpair failed and we were unable to recover it. 00:36:49.308 [2024-11-17 09:36:54.156623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.308 [2024-11-17 09:36:54.156659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.308 qpair failed and we were unable to recover it. 00:36:49.308 [2024-11-17 09:36:54.156804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.308 [2024-11-17 09:36:54.156838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.308 qpair failed and we were unable to recover it. 00:36:49.308 [2024-11-17 09:36:54.156976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.308 [2024-11-17 09:36:54.157067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.308 qpair failed and we were unable to recover it. 00:36:49.308 [2024-11-17 09:36:54.157212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.308 [2024-11-17 09:36:54.157250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.308 qpair failed and we were unable to recover it. 00:36:49.308 [2024-11-17 09:36:54.157439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.308 [2024-11-17 09:36:54.157488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.308 qpair failed and we were unable to recover it. 00:36:49.308 [2024-11-17 09:36:54.157633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.308 [2024-11-17 09:36:54.157668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.308 qpair failed and we were unable to recover it. 00:36:49.308 [2024-11-17 09:36:54.157794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.309 [2024-11-17 09:36:54.157828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.309 qpair failed and we were unable to recover it. 00:36:49.309 [2024-11-17 09:36:54.157927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.309 [2024-11-17 09:36:54.157961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.309 qpair failed and we were unable to recover it. 00:36:49.309 [2024-11-17 09:36:54.158114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.309 [2024-11-17 09:36:54.158151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.309 qpair failed and we were unable to recover it. 
00:36:49.309 [2024-11-17 09:36:54.158304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.309 [2024-11-17 09:36:54.158343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.309 qpair failed and we were unable to recover it. 00:36:49.309 [2024-11-17 09:36:54.158514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.309 [2024-11-17 09:36:54.158549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.309 qpair failed and we were unable to recover it. 00:36:49.309 [2024-11-17 09:36:54.158653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.309 [2024-11-17 09:36:54.158686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.309 qpair failed and we were unable to recover it. 00:36:49.309 [2024-11-17 09:36:54.158834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.309 [2024-11-17 09:36:54.158868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.309 qpair failed and we were unable to recover it. 00:36:49.309 [2024-11-17 09:36:54.159029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.309 [2024-11-17 09:36:54.159062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.309 qpair failed and we were unable to recover it. 00:36:49.309 [2024-11-17 09:36:54.159182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.309 [2024-11-17 09:36:54.159219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.309 qpair failed and we were unable to recover it. 00:36:49.309 [2024-11-17 09:36:54.159394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.309 [2024-11-17 09:36:54.159429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.309 qpair failed and we were unable to recover it. 00:36:49.309 [2024-11-17 09:36:54.159555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.309 [2024-11-17 09:36:54.159603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.309 qpair failed and we were unable to recover it. 00:36:49.309 [2024-11-17 09:36:54.159770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.309 [2024-11-17 09:36:54.159806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.309 qpair failed and we were unable to recover it. 00:36:49.309 [2024-11-17 09:36:54.159942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.309 [2024-11-17 09:36:54.159976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.309 qpair failed and we were unable to recover it. 
00:36:49.309 [2024-11-17 09:36:54.160078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.309 [2024-11-17 09:36:54.160112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.309 qpair failed and we were unable to recover it. 00:36:49.309 [2024-11-17 09:36:54.160274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.309 [2024-11-17 09:36:54.160312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.309 qpair failed and we were unable to recover it. 00:36:49.309 [2024-11-17 09:36:54.160458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.309 [2024-11-17 09:36:54.160494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.309 qpair failed and we were unable to recover it. 00:36:49.309 [2024-11-17 09:36:54.160617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.309 [2024-11-17 09:36:54.160652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.309 qpair failed and we were unable to recover it. 00:36:49.309 [2024-11-17 09:36:54.160791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.309 [2024-11-17 09:36:54.160824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.309 qpair failed and we were unable to recover it. 00:36:49.309 [2024-11-17 09:36:54.160929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.309 [2024-11-17 09:36:54.160962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.309 qpair failed and we were unable to recover it. 00:36:49.309 [2024-11-17 09:36:54.161134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.309 [2024-11-17 09:36:54.161167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.309 qpair failed and we were unable to recover it. 00:36:49.309 [2024-11-17 09:36:54.161294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.309 [2024-11-17 09:36:54.161327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.309 qpair failed and we were unable to recover it. 00:36:49.309 [2024-11-17 09:36:54.161468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.309 [2024-11-17 09:36:54.161504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.309 qpair failed and we were unable to recover it. 00:36:49.309 [2024-11-17 09:36:54.161621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.309 [2024-11-17 09:36:54.161657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.309 qpair failed and we were unable to recover it. 
00:36:49.309 [2024-11-17 09:36:54.161798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.309 [2024-11-17 09:36:54.161833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.309 qpair failed and we were unable to recover it. 00:36:49.309 [2024-11-17 09:36:54.161991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.309 [2024-11-17 09:36:54.162025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.309 qpair failed and we were unable to recover it. 00:36:49.309 [2024-11-17 09:36:54.162161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.309 [2024-11-17 09:36:54.162194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.309 qpair failed and we were unable to recover it. 00:36:49.309 [2024-11-17 09:36:54.162293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.309 [2024-11-17 09:36:54.162327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.309 qpair failed and we were unable to recover it. 00:36:49.309 [2024-11-17 09:36:54.162464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.309 [2024-11-17 09:36:54.162498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.309 qpair failed and we were unable to recover it. 00:36:49.309 [2024-11-17 09:36:54.162609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.309 [2024-11-17 09:36:54.162644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.309 qpair failed and we were unable to recover it. 00:36:49.309 [2024-11-17 09:36:54.162781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.309 [2024-11-17 09:36:54.162814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.309 qpair failed and we were unable to recover it. 00:36:49.309 [2024-11-17 09:36:54.162915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.309 [2024-11-17 09:36:54.162949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.309 qpair failed and we were unable to recover it. 00:36:49.309 [2024-11-17 09:36:54.163093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.310 [2024-11-17 09:36:54.163127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.310 qpair failed and we were unable to recover it. 00:36:49.310 [2024-11-17 09:36:54.163281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.310 [2024-11-17 09:36:54.163326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.310 qpair failed and we were unable to recover it. 
00:36:49.310 [2024-11-17 09:36:54.163506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.310 [2024-11-17 09:36:54.163553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.310 qpair failed and we were unable to recover it. 00:36:49.310 [2024-11-17 09:36:54.163738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.310 [2024-11-17 09:36:54.163774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.310 qpair failed and we were unable to recover it. 00:36:49.310 [2024-11-17 09:36:54.163881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.310 [2024-11-17 09:36:54.163916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.310 qpair failed and we were unable to recover it. 00:36:49.310 [2024-11-17 09:36:54.164047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.310 [2024-11-17 09:36:54.164081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.310 qpair failed and we were unable to recover it. 00:36:49.310 [2024-11-17 09:36:54.164214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.310 [2024-11-17 09:36:54.164249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.310 qpair failed and we were unable to recover it. 00:36:49.310 [2024-11-17 09:36:54.164388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.310 [2024-11-17 09:36:54.164427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.310 qpair failed and we were unable to recover it. 00:36:49.310 [2024-11-17 09:36:54.164532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.310 [2024-11-17 09:36:54.164570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.310 qpair failed and we were unable to recover it. 00:36:49.310 [2024-11-17 09:36:54.164732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.310 [2024-11-17 09:36:54.164765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.310 qpair failed and we were unable to recover it. 00:36:49.310 [2024-11-17 09:36:54.164868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.310 [2024-11-17 09:36:54.164902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.310 qpair failed and we were unable to recover it. 00:36:49.310 [2024-11-17 09:36:54.165064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.310 [2024-11-17 09:36:54.165100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.310 qpair failed and we were unable to recover it. 
00:36:49.310 [2024-11-17 09:36:54.165211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.310 [2024-11-17 09:36:54.165246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.310 qpair failed and we were unable to recover it. 00:36:49.310 [2024-11-17 09:36:54.165384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.310 [2024-11-17 09:36:54.165418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.310 qpair failed and we were unable to recover it. 00:36:49.310 [2024-11-17 09:36:54.165552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.310 [2024-11-17 09:36:54.165586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.310 qpair failed and we were unable to recover it. 00:36:49.310 [2024-11-17 09:36:54.165723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.310 [2024-11-17 09:36:54.165756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.310 qpair failed and we were unable to recover it. 00:36:49.310 [2024-11-17 09:36:54.165892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.310 [2024-11-17 09:36:54.165928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.310 qpair failed and we were unable to recover it. 00:36:49.310 [2024-11-17 09:36:54.166065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.310 [2024-11-17 09:36:54.166100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.310 qpair failed and we were unable to recover it. 00:36:49.310 [2024-11-17 09:36:54.166237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.310 [2024-11-17 09:36:54.166272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.310 qpair failed and we were unable to recover it. 00:36:49.310 [2024-11-17 09:36:54.166409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.310 [2024-11-17 09:36:54.166443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.310 qpair failed and we were unable to recover it. 00:36:49.310 [2024-11-17 09:36:54.166556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.310 [2024-11-17 09:36:54.166589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.310 qpair failed and we were unable to recover it. 00:36:49.310 [2024-11-17 09:36:54.166693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.310 [2024-11-17 09:36:54.166727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.310 qpair failed and we were unable to recover it. 
00:36:49.310 [2024-11-17 09:36:54.166858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.310 [2024-11-17 09:36:54.166892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.310 qpair failed and we were unable to recover it. 00:36:49.310 [2024-11-17 09:36:54.166994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.310 [2024-11-17 09:36:54.167029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.310 qpair failed and we were unable to recover it. 00:36:49.310 [2024-11-17 09:36:54.167168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.310 [2024-11-17 09:36:54.167203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.310 qpair failed and we were unable to recover it. 00:36:49.310 [2024-11-17 09:36:54.167364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.310 [2024-11-17 09:36:54.167404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.310 qpair failed and we were unable to recover it. 00:36:49.310 [2024-11-17 09:36:54.167513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.310 [2024-11-17 09:36:54.167547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.310 qpair failed and we were unable to recover it. 00:36:49.310 [2024-11-17 09:36:54.167734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.310 [2024-11-17 09:36:54.167768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.310 qpair failed and we were unable to recover it. 00:36:49.310 [2024-11-17 09:36:54.167907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.310 [2024-11-17 09:36:54.167951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.310 qpair failed and we were unable to recover it. 00:36:49.310 [2024-11-17 09:36:54.168058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.310 [2024-11-17 09:36:54.168091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.310 qpair failed and we were unable to recover it. 00:36:49.310 [2024-11-17 09:36:54.168250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.310 [2024-11-17 09:36:54.168288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.310 qpair failed and we were unable to recover it. 00:36:49.310 [2024-11-17 09:36:54.168454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.310 [2024-11-17 09:36:54.168488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.310 qpair failed and we were unable to recover it. 
[... the same three-line sequence (connect() failed, errno = 111; sock connection error of tqpair=0x61500021ff00 / 0x6150001f2f00 / 0x6150001ffe80 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats continuously from 09:36:54.168 through 09:36:54.203 ...]
00:36:49.316 [2024-11-17 09:36:54.203381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.317 [2024-11-17 09:36:54.203415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:36:49.317 qpair failed and we were unable to recover it.
00:36:49.317 [2024-11-17 09:36:54.203517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.317 [2024-11-17 09:36:54.203549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.317 qpair failed and we were unable to recover it. 00:36:49.317 [2024-11-17 09:36:54.203679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.317 [2024-11-17 09:36:54.203712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.317 qpair failed and we were unable to recover it. 00:36:49.317 [2024-11-17 09:36:54.203846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.317 [2024-11-17 09:36:54.203880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.317 qpair failed and we were unable to recover it. 00:36:49.317 [2024-11-17 09:36:54.203983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.317 [2024-11-17 09:36:54.204017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.317 qpair failed and we were unable to recover it. 00:36:49.317 [2024-11-17 09:36:54.204178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.317 [2024-11-17 09:36:54.204211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.317 qpair failed and we were unable to recover it. 00:36:49.317 [2024-11-17 09:36:54.204337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.317 [2024-11-17 09:36:54.204393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.317 qpair failed and we were unable to recover it. 00:36:49.317 [2024-11-17 09:36:54.204538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.317 [2024-11-17 09:36:54.204574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.317 qpair failed and we were unable to recover it. 00:36:49.317 [2024-11-17 09:36:54.204743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.317 [2024-11-17 09:36:54.204777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.317 qpair failed and we were unable to recover it. 00:36:49.317 [2024-11-17 09:36:54.204915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.317 [2024-11-17 09:36:54.204949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.317 qpair failed and we were unable to recover it. 00:36:49.317 [2024-11-17 09:36:54.205085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.317 [2024-11-17 09:36:54.205125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.317 qpair failed and we were unable to recover it. 
00:36:49.317 [2024-11-17 09:36:54.205224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.317 [2024-11-17 09:36:54.205258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.317 qpair failed and we were unable to recover it. 00:36:49.317 [2024-11-17 09:36:54.205392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.317 [2024-11-17 09:36:54.205427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.317 qpair failed and we were unable to recover it. 00:36:49.317 [2024-11-17 09:36:54.205584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.317 [2024-11-17 09:36:54.205617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.317 qpair failed and we were unable to recover it. 00:36:49.317 [2024-11-17 09:36:54.205751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.317 [2024-11-17 09:36:54.205784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.317 qpair failed and we were unable to recover it. 00:36:49.317 [2024-11-17 09:36:54.205883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.317 [2024-11-17 09:36:54.205917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.317 qpair failed and we were unable to recover it. 00:36:49.317 [2024-11-17 09:36:54.206051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.317 [2024-11-17 09:36:54.206085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.317 qpair failed and we were unable to recover it. 00:36:49.317 [2024-11-17 09:36:54.206242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.317 [2024-11-17 09:36:54.206290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.317 qpair failed and we were unable to recover it. 00:36:49.317 [2024-11-17 09:36:54.206411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.317 [2024-11-17 09:36:54.206449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.317 qpair failed and we were unable to recover it. 00:36:49.317 [2024-11-17 09:36:54.206589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.317 [2024-11-17 09:36:54.206624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.317 qpair failed and we were unable to recover it. 00:36:49.317 [2024-11-17 09:36:54.206787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.317 [2024-11-17 09:36:54.206822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.317 qpair failed and we were unable to recover it. 
00:36:49.317 [2024-11-17 09:36:54.206953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.317 [2024-11-17 09:36:54.206987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.317 qpair failed and we were unable to recover it. 00:36:49.317 [2024-11-17 09:36:54.207119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.317 [2024-11-17 09:36:54.207152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.317 qpair failed and we were unable to recover it. 00:36:49.317 [2024-11-17 09:36:54.207310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.317 [2024-11-17 09:36:54.207344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.317 qpair failed and we were unable to recover it. 00:36:49.317 [2024-11-17 09:36:54.207506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.317 [2024-11-17 09:36:54.207547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.317 qpair failed and we were unable to recover it. 00:36:49.317 [2024-11-17 09:36:54.207739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.317 [2024-11-17 09:36:54.207780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.317 qpair failed and we were unable to recover it. 00:36:49.317 [2024-11-17 09:36:54.208005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.317 [2024-11-17 09:36:54.208057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.317 qpair failed and we were unable to recover it. 00:36:49.317 [2024-11-17 09:36:54.208273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.317 [2024-11-17 09:36:54.208308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.317 qpair failed and we were unable to recover it. 00:36:49.317 [2024-11-17 09:36:54.208499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.317 [2024-11-17 09:36:54.208535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.317 qpair failed and we were unable to recover it. 00:36:49.317 [2024-11-17 09:36:54.208688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.317 [2024-11-17 09:36:54.208736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.317 qpair failed and we were unable to recover it. 00:36:49.317 [2024-11-17 09:36:54.208859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.317 [2024-11-17 09:36:54.208895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.317 qpair failed and we were unable to recover it. 
00:36:49.317 [2024-11-17 09:36:54.209009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.317 [2024-11-17 09:36:54.209043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.317 qpair failed and we were unable to recover it. 00:36:49.317 [2024-11-17 09:36:54.209200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.317 [2024-11-17 09:36:54.209237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.317 qpair failed and we were unable to recover it. 00:36:49.317 [2024-11-17 09:36:54.209442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.317 [2024-11-17 09:36:54.209491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.317 qpair failed and we were unable to recover it. 00:36:49.317 [2024-11-17 09:36:54.209633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.317 [2024-11-17 09:36:54.209669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.317 qpair failed and we were unable to recover it. 00:36:49.317 [2024-11-17 09:36:54.209825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.317 [2024-11-17 09:36:54.209863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.317 qpair failed and we were unable to recover it. 00:36:49.317 [2024-11-17 09:36:54.210036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.317 [2024-11-17 09:36:54.210074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.317 qpair failed and we were unable to recover it. 00:36:49.317 [2024-11-17 09:36:54.210233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.317 [2024-11-17 09:36:54.210278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.317 qpair failed and we were unable to recover it. 00:36:49.317 [2024-11-17 09:36:54.210443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.317 [2024-11-17 09:36:54.210480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.318 qpair failed and we were unable to recover it. 00:36:49.318 [2024-11-17 09:36:54.210622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.318 [2024-11-17 09:36:54.210656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.318 qpair failed and we were unable to recover it. 00:36:49.318 [2024-11-17 09:36:54.210856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.318 [2024-11-17 09:36:54.210932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.318 qpair failed and we were unable to recover it. 
00:36:49.318 [2024-11-17 09:36:54.211089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.318 [2024-11-17 09:36:54.211149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.318 qpair failed and we were unable to recover it. 00:36:49.318 [2024-11-17 09:36:54.211304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.318 [2024-11-17 09:36:54.211338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.318 qpair failed and we were unable to recover it. 00:36:49.318 [2024-11-17 09:36:54.211528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.318 [2024-11-17 09:36:54.211576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.318 qpair failed and we were unable to recover it. 00:36:49.318 [2024-11-17 09:36:54.211741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.318 [2024-11-17 09:36:54.211789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.318 qpair failed and we were unable to recover it. 00:36:49.318 [2024-11-17 09:36:54.211951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.318 [2024-11-17 09:36:54.211991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.318 qpair failed and we were unable to recover it. 00:36:49.318 [2024-11-17 09:36:54.212162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.318 [2024-11-17 09:36:54.212223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.318 qpair failed and we were unable to recover it. 00:36:49.318 [2024-11-17 09:36:54.212341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.318 [2024-11-17 09:36:54.212389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.318 qpair failed and we were unable to recover it. 00:36:49.318 [2024-11-17 09:36:54.212544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.318 [2024-11-17 09:36:54.212578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.318 qpair failed and we were unable to recover it. 00:36:49.318 [2024-11-17 09:36:54.212730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.318 [2024-11-17 09:36:54.212769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.318 qpair failed and we were unable to recover it. 00:36:49.318 [2024-11-17 09:36:54.212968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.318 [2024-11-17 09:36:54.213011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.318 qpair failed and we were unable to recover it. 
00:36:49.318 [2024-11-17 09:36:54.213197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.318 [2024-11-17 09:36:54.213234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.318 qpair failed and we were unable to recover it. 00:36:49.318 [2024-11-17 09:36:54.213424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.318 [2024-11-17 09:36:54.213458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.318 qpair failed and we were unable to recover it. 00:36:49.318 [2024-11-17 09:36:54.213572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.318 [2024-11-17 09:36:54.213606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.318 qpair failed and we were unable to recover it. 00:36:49.318 [2024-11-17 09:36:54.213742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.318 [2024-11-17 09:36:54.213790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.318 qpair failed and we were unable to recover it. 00:36:49.318 [2024-11-17 09:36:54.213932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.318 [2024-11-17 09:36:54.213969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.318 qpair failed and we were unable to recover it. 00:36:49.318 [2024-11-17 09:36:54.214114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.318 [2024-11-17 09:36:54.214151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.318 qpair failed and we were unable to recover it. 00:36:49.318 [2024-11-17 09:36:54.214319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.318 [2024-11-17 09:36:54.214377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.318 qpair failed and we were unable to recover it. 00:36:49.318 [2024-11-17 09:36:54.214530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.318 [2024-11-17 09:36:54.214567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.318 qpair failed and we were unable to recover it. 00:36:49.318 [2024-11-17 09:36:54.214722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.318 [2024-11-17 09:36:54.214773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.318 qpair failed and we were unable to recover it. 00:36:49.318 [2024-11-17 09:36:54.214928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.318 [2024-11-17 09:36:54.214986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.318 qpair failed and we were unable to recover it. 
00:36:49.318 [2024-11-17 09:36:54.215127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.318 [2024-11-17 09:36:54.215192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.318 qpair failed and we were unable to recover it. 00:36:49.318 [2024-11-17 09:36:54.215335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.318 [2024-11-17 09:36:54.215377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.318 qpair failed and we were unable to recover it. 00:36:49.318 [2024-11-17 09:36:54.215514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.318 [2024-11-17 09:36:54.215549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.318 qpair failed and we were unable to recover it. 00:36:49.318 [2024-11-17 09:36:54.215666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.318 [2024-11-17 09:36:54.215699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.318 qpair failed and we were unable to recover it. 00:36:49.318 [2024-11-17 09:36:54.215871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.318 [2024-11-17 09:36:54.215910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.318 qpair failed and we were unable to recover it. 00:36:49.318 [2024-11-17 09:36:54.216044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.318 [2024-11-17 09:36:54.216078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.318 qpair failed and we were unable to recover it. 00:36:49.318 [2024-11-17 09:36:54.216219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.318 [2024-11-17 09:36:54.216255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.318 qpair failed and we were unable to recover it. 00:36:49.318 [2024-11-17 09:36:54.216468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.318 [2024-11-17 09:36:54.216523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.318 qpair failed and we were unable to recover it. 00:36:49.318 [2024-11-17 09:36:54.216673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.318 [2024-11-17 09:36:54.216718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.318 qpair failed and we were unable to recover it. 00:36:49.318 [2024-11-17 09:36:54.216854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.318 [2024-11-17 09:36:54.216888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.318 qpair failed and we were unable to recover it. 
00:36:49.318 [2024-11-17 09:36:54.217026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.318 [2024-11-17 09:36:54.217060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.318 qpair failed and we were unable to recover it. 00:36:49.318 [2024-11-17 09:36:54.217201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.318 [2024-11-17 09:36:54.217237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.318 qpair failed and we were unable to recover it. 00:36:49.318 [2024-11-17 09:36:54.217389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.318 [2024-11-17 09:36:54.217437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.318 qpair failed and we were unable to recover it. 00:36:49.318 [2024-11-17 09:36:54.217552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.318 [2024-11-17 09:36:54.217588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.318 qpair failed and we were unable to recover it. 00:36:49.318 [2024-11-17 09:36:54.217724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.318 [2024-11-17 09:36:54.217762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.318 qpair failed and we were unable to recover it. 00:36:49.318 [2024-11-17 09:36:54.217913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.318 [2024-11-17 09:36:54.217950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.318 qpair failed and we were unable to recover it. 00:36:49.318 [2024-11-17 09:36:54.218189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.318 [2024-11-17 09:36:54.218238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.318 qpair failed and we were unable to recover it. 00:36:49.319 [2024-11-17 09:36:54.218392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.319 [2024-11-17 09:36:54.218428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.319 qpair failed and we were unable to recover it. 00:36:49.319 [2024-11-17 09:36:54.218650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.319 [2024-11-17 09:36:54.218701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.319 qpair failed and we were unable to recover it. 00:36:49.319 [2024-11-17 09:36:54.218803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.319 [2024-11-17 09:36:54.218837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.319 qpair failed and we were unable to recover it. 
00:36:49.319 [2024-11-17 09:36:54.219039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.319 [2024-11-17 09:36:54.219092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.319 qpair failed and we were unable to recover it. 00:36:49.319 [2024-11-17 09:36:54.219257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.319 [2024-11-17 09:36:54.219291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.319 qpair failed and we were unable to recover it. 00:36:49.319 [2024-11-17 09:36:54.219427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.319 [2024-11-17 09:36:54.219462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.319 qpair failed and we were unable to recover it. 00:36:49.319 [2024-11-17 09:36:54.219596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.319 [2024-11-17 09:36:54.219629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.319 qpair failed and we were unable to recover it. 00:36:49.319 [2024-11-17 09:36:54.219792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.319 [2024-11-17 09:36:54.219825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.319 qpair failed and we were unable to recover it. 00:36:49.319 [2024-11-17 09:36:54.219966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.319 [2024-11-17 09:36:54.220000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.319 qpair failed and we were unable to recover it. 00:36:49.319 [2024-11-17 09:36:54.220164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.319 [2024-11-17 09:36:54.220197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.319 qpair failed and we were unable to recover it. 00:36:49.319 [2024-11-17 09:36:54.220339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.319 [2024-11-17 09:36:54.220397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.319 qpair failed and we were unable to recover it. 00:36:49.319 [2024-11-17 09:36:54.220558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.319 [2024-11-17 09:36:54.220612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.319 qpair failed and we were unable to recover it. 00:36:49.319 [2024-11-17 09:36:54.220775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.319 [2024-11-17 09:36:54.220821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.319 qpair failed and we were unable to recover it. 
00:36:49.319 [2024-11-17 09:36:54.220966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.319 [2024-11-17 09:36:54.221004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.319 qpair failed and we were unable to recover it. 00:36:49.319 [2024-11-17 09:36:54.221141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.319 [2024-11-17 09:36:54.221178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.319 qpair failed and we were unable to recover it. 00:36:49.319 [2024-11-17 09:36:54.221329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.319 [2024-11-17 09:36:54.221379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.319 qpair failed and we were unable to recover it. 00:36:49.319 [2024-11-17 09:36:54.221540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.319 [2024-11-17 09:36:54.221577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.319 qpair failed and we were unable to recover it. 00:36:49.319 [2024-11-17 09:36:54.221689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.319 [2024-11-17 09:36:54.221723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.319 qpair failed and we were unable to recover it. 00:36:49.319 [2024-11-17 09:36:54.221887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.319 [2024-11-17 09:36:54.221938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.319 qpair failed and we were unable to recover it. 00:36:49.319 [2024-11-17 09:36:54.222065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.319 [2024-11-17 09:36:54.222102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.319 qpair failed and we were unable to recover it. 00:36:49.319 [2024-11-17 09:36:54.222265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.319 [2024-11-17 09:36:54.222312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.319 qpair failed and we were unable to recover it. 00:36:49.319 [2024-11-17 09:36:54.222462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.319 [2024-11-17 09:36:54.222498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.319 qpair failed and we were unable to recover it. 00:36:49.319 [2024-11-17 09:36:54.222652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.319 [2024-11-17 09:36:54.222690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.319 qpair failed and we were unable to recover it. 
00:36:49.319 [2024-11-17 09:36:54.222854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.319 [2024-11-17 09:36:54.222891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.319 qpair failed and we were unable to recover it. 00:36:49.319 [2024-11-17 09:36:54.223011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.319 [2024-11-17 09:36:54.223049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.319 qpair failed and we were unable to recover it. 00:36:49.319 [2024-11-17 09:36:54.223201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.319 [2024-11-17 09:36:54.223240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.319 qpair failed and we were unable to recover it. 00:36:49.319 [2024-11-17 09:36:54.223496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.319 [2024-11-17 09:36:54.223545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.319 qpair failed and we were unable to recover it. 00:36:49.319 [2024-11-17 09:36:54.223710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.319 [2024-11-17 09:36:54.223763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.319 qpair failed and we were unable to recover it. 00:36:49.319 [2024-11-17 09:36:54.223904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.319 [2024-11-17 09:36:54.223945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.319 qpair failed and we were unable to recover it. 00:36:49.319 [2024-11-17 09:36:54.224132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.319 [2024-11-17 09:36:54.224187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.319 qpair failed and we were unable to recover it. 00:36:49.319 [2024-11-17 09:36:54.224327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.319 [2024-11-17 09:36:54.224361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.319 qpair failed and we were unable to recover it. 00:36:49.319 [2024-11-17 09:36:54.224541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.319 [2024-11-17 09:36:54.224577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.319 qpair failed and we were unable to recover it. 00:36:49.319 [2024-11-17 09:36:54.224683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.319 [2024-11-17 09:36:54.224716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.319 qpair failed and we were unable to recover it. 
00:36:49.319 [2024-11-17 09:36:54.224816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.319 [2024-11-17 09:36:54.224849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.319 qpair failed and we were unable to recover it. 00:36:49.319 [2024-11-17 09:36:54.225006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.319 [2024-11-17 09:36:54.225039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.319 qpair failed and we were unable to recover it. 00:36:49.319 [2024-11-17 09:36:54.225210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.319 [2024-11-17 09:36:54.225258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.319 qpair failed and we were unable to recover it. 00:36:49.319 [2024-11-17 09:36:54.225409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.319 [2024-11-17 09:36:54.225446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.319 qpair failed and we were unable to recover it. 00:36:49.319 [2024-11-17 09:36:54.225608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.319 [2024-11-17 09:36:54.225665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.319 qpair failed and we were unable to recover it. 00:36:49.319 [2024-11-17 09:36:54.225826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.319 [2024-11-17 09:36:54.225878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.319 qpair failed and we were unable to recover it. 00:36:49.319 [2024-11-17 09:36:54.226048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.319 [2024-11-17 09:36:54.226091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.319 qpair failed and we were unable to recover it. 00:36:49.319 [2024-11-17 09:36:54.226251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.319 [2024-11-17 09:36:54.226290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.319 qpair failed and we were unable to recover it. 00:36:49.320 [2024-11-17 09:36:54.226454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.320 [2024-11-17 09:36:54.226490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.320 qpair failed and we were unable to recover it. 00:36:49.320 [2024-11-17 09:36:54.226652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.320 [2024-11-17 09:36:54.226689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.320 qpair failed and we were unable to recover it. 
00:36:49.320 [2024-11-17 09:36:54.226853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.320 [2024-11-17 09:36:54.226890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.320 qpair failed and we were unable to recover it. 00:36:49.320 [2024-11-17 09:36:54.227063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.320 [2024-11-17 09:36:54.227113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.320 qpair failed and we were unable to recover it. 00:36:49.320 [2024-11-17 09:36:54.227249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.320 [2024-11-17 09:36:54.227282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.320 qpair failed and we were unable to recover it. 00:36:49.320 [2024-11-17 09:36:54.227467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.320 [2024-11-17 09:36:54.227515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.320 qpair failed and we were unable to recover it. 00:36:49.320 [2024-11-17 09:36:54.227674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.320 [2024-11-17 09:36:54.227713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.320 qpair failed and we were unable to recover it. 00:36:49.320 [2024-11-17 09:36:54.227843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.320 [2024-11-17 09:36:54.227898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.320 qpair failed and we were unable to recover it. 00:36:49.320 [2024-11-17 09:36:54.228037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.320 [2024-11-17 09:36:54.228071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.320 qpair failed and we were unable to recover it. 00:36:49.320 [2024-11-17 09:36:54.228239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.320 [2024-11-17 09:36:54.228273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.320 qpair failed and we were unable to recover it. 00:36:49.320 [2024-11-17 09:36:54.228429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.320 [2024-11-17 09:36:54.228463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.320 qpair failed and we were unable to recover it. 00:36:49.320 [2024-11-17 09:36:54.228603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.320 [2024-11-17 09:36:54.228645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.320 qpair failed and we were unable to recover it. 
00:36:49.320 [2024-11-17 09:36:54.228801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.320 [2024-11-17 09:36:54.228839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.320 qpair failed and we were unable to recover it. 00:36:49.320 [2024-11-17 09:36:54.229037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.320 [2024-11-17 09:36:54.229075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.320 qpair failed and we were unable to recover it. 00:36:49.320 [2024-11-17 09:36:54.229195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.320 [2024-11-17 09:36:54.229232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.320 qpair failed and we were unable to recover it. 00:36:49.320 [2024-11-17 09:36:54.229406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.320 [2024-11-17 09:36:54.229439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.320 qpair failed and we were unable to recover it. 00:36:49.320 [2024-11-17 09:36:54.229571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.320 [2024-11-17 09:36:54.229604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.320 qpair failed and we were unable to recover it. 00:36:49.320 [2024-11-17 09:36:54.229757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.320 [2024-11-17 09:36:54.229794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.320 qpair failed and we were unable to recover it. 00:36:49.320 [2024-11-17 09:36:54.229940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.320 [2024-11-17 09:36:54.229977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.320 qpair failed and we were unable to recover it. 00:36:49.320 [2024-11-17 09:36:54.230193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.320 [2024-11-17 09:36:54.230258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.320 qpair failed and we were unable to recover it. 00:36:49.320 [2024-11-17 09:36:54.230400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.320 [2024-11-17 09:36:54.230436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.320 qpair failed and we were unable to recover it. 00:36:49.320 [2024-11-17 09:36:54.230601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.320 [2024-11-17 09:36:54.230634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.320 qpair failed and we were unable to recover it. 
00:36:49.320 [2024-11-17 09:36:54.230768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.320 [2024-11-17 09:36:54.230806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.320 qpair failed and we were unable to recover it. 00:36:49.320 [2024-11-17 09:36:54.230970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.320 [2024-11-17 09:36:54.231008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.320 qpair failed and we were unable to recover it. 00:36:49.320 [2024-11-17 09:36:54.231144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.320 [2024-11-17 09:36:54.231178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.320 qpair failed and we were unable to recover it. 00:36:49.320 [2024-11-17 09:36:54.231290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.320 [2024-11-17 09:36:54.231324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.320 qpair failed and we were unable to recover it. 00:36:49.320 [2024-11-17 09:36:54.231458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.320 [2024-11-17 09:36:54.231493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.320 qpair failed and we were unable to recover it. 00:36:49.320 [2024-11-17 09:36:54.231600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.320 [2024-11-17 09:36:54.231648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.320 qpair failed and we were unable to recover it. 00:36:49.320 [2024-11-17 09:36:54.231766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.320 [2024-11-17 09:36:54.231804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.320 qpair failed and we were unable to recover it. 00:36:49.320 [2024-11-17 09:36:54.231984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.320 [2024-11-17 09:36:54.232021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.320 qpair failed and we were unable to recover it. 00:36:49.320 [2024-11-17 09:36:54.232161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.320 [2024-11-17 09:36:54.232199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.320 qpair failed and we were unable to recover it. 00:36:49.320 [2024-11-17 09:36:54.232349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.320 [2024-11-17 09:36:54.232412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.320 qpair failed and we were unable to recover it. 
00:36:49.320 [2024-11-17 09:36:54.232567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.320 [2024-11-17 09:36:54.232618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.320 qpair failed and we were unable to recover it. 00:36:49.320 [2024-11-17 09:36:54.232791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.320 [2024-11-17 09:36:54.232828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.320 qpair failed and we were unable to recover it. 00:36:49.320 [2024-11-17 09:36:54.232985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.320 [2024-11-17 09:36:54.233037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.320 qpair failed and we were unable to recover it. 00:36:49.320 [2024-11-17 09:36:54.233183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.320 [2024-11-17 09:36:54.233220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.320 qpair failed and we were unable to recover it. 00:36:49.320 [2024-11-17 09:36:54.233359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.320 [2024-11-17 09:36:54.233406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.320 qpair failed and we were unable to recover it. 00:36:49.320 [2024-11-17 09:36:54.233566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.320 [2024-11-17 09:36:54.233614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.320 qpair failed and we were unable to recover it. 00:36:49.320 [2024-11-17 09:36:54.233766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.320 [2024-11-17 09:36:54.233835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.320 qpair failed and we were unable to recover it. 00:36:49.320 [2024-11-17 09:36:54.234032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.320 [2024-11-17 09:36:54.234070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.320 qpair failed and we were unable to recover it. 00:36:49.320 [2024-11-17 09:36:54.234214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.320 [2024-11-17 09:36:54.234264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.320 qpair failed and we were unable to recover it. 00:36:49.320 [2024-11-17 09:36:54.234399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.321 [2024-11-17 09:36:54.234432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.321 qpair failed and we were unable to recover it. 
00:36:49.321 [2024-11-17 09:36:54.234559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.321 [2024-11-17 09:36:54.234592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.321 qpair failed and we were unable to recover it. 00:36:49.321 [2024-11-17 09:36:54.234690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.321 [2024-11-17 09:36:54.234723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.321 qpair failed and we were unable to recover it. 00:36:49.321 [2024-11-17 09:36:54.234870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.321 [2024-11-17 09:36:54.234908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.321 qpair failed and we were unable to recover it. 00:36:49.321 [2024-11-17 09:36:54.235054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.321 [2024-11-17 09:36:54.235092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.321 qpair failed and we were unable to recover it. 00:36:49.321 [2024-11-17 09:36:54.235237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.321 [2024-11-17 09:36:54.235274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.321 qpair failed and we were unable to recover it. 00:36:49.321 [2024-11-17 09:36:54.235432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.321 [2024-11-17 09:36:54.235465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.321 qpair failed and we were unable to recover it. 00:36:49.321 [2024-11-17 09:36:54.235605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.321 [2024-11-17 09:36:54.235638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.321 qpair failed and we were unable to recover it. 00:36:49.321 [2024-11-17 09:36:54.235790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.321 [2024-11-17 09:36:54.235827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.321 qpair failed and we were unable to recover it. 00:36:49.321 [2024-11-17 09:36:54.235943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.321 [2024-11-17 09:36:54.235980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.321 qpair failed and we were unable to recover it. 00:36:49.321 [2024-11-17 09:36:54.236131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.321 [2024-11-17 09:36:54.236174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.321 qpair failed and we were unable to recover it. 
00:36:49.321 [2024-11-17 09:36:54.236314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.321 [2024-11-17 09:36:54.236351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.321 qpair failed and we were unable to recover it. 00:36:49.321 [2024-11-17 09:36:54.236507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.321 [2024-11-17 09:36:54.236540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.321 qpair failed and we were unable to recover it. 00:36:49.321 [2024-11-17 09:36:54.236700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.321 [2024-11-17 09:36:54.236733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.321 qpair failed and we were unable to recover it. 00:36:49.321 [2024-11-17 09:36:54.236863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.321 [2024-11-17 09:36:54.236896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.321 qpair failed and we were unable to recover it. 00:36:49.321 [2024-11-17 09:36:54.237028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.321 [2024-11-17 09:36:54.237061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.321 qpair failed and we were unable to recover it. 00:36:49.321 [2024-11-17 09:36:54.237223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.321 [2024-11-17 09:36:54.237259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.321 qpair failed and we were unable to recover it. 00:36:49.321 [2024-11-17 09:36:54.237422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.321 [2024-11-17 09:36:54.237456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.321 qpair failed and we were unable to recover it. 00:36:49.321 [2024-11-17 09:36:54.237637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.321 [2024-11-17 09:36:54.237685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.321 qpair failed and we were unable to recover it. 00:36:49.321 [2024-11-17 09:36:54.237817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.321 [2024-11-17 09:36:54.237857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.321 qpair failed and we were unable to recover it. 00:36:49.321 [2024-11-17 09:36:54.237995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.321 [2024-11-17 09:36:54.238032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.321 qpair failed and we were unable to recover it. 
00:36:49.321 [2024-11-17 09:36:54.238180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.321 [2024-11-17 09:36:54.238219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.321 qpair failed and we were unable to recover it. 00:36:49.321 [2024-11-17 09:36:54.238402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.321 [2024-11-17 09:36:54.238437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.321 qpair failed and we were unable to recover it. 00:36:49.321 [2024-11-17 09:36:54.238616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.321 [2024-11-17 09:36:54.238665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.321 qpair failed and we were unable to recover it. 00:36:49.321 [2024-11-17 09:36:54.238817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.321 [2024-11-17 09:36:54.238851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.321 qpair failed and we were unable to recover it. 00:36:49.321 [2024-11-17 09:36:54.239008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.321 [2024-11-17 09:36:54.239057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.321 qpair failed and we were unable to recover it. 00:36:49.321 [2024-11-17 09:36:54.239215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.321 [2024-11-17 09:36:54.239267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.322 qpair failed and we were unable to recover it. 00:36:49.322 [2024-11-17 09:36:54.239400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.322 [2024-11-17 09:36:54.239434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.322 qpair failed and we were unable to recover it. 00:36:49.322 [2024-11-17 09:36:54.239595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.322 [2024-11-17 09:36:54.239628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.322 qpair failed and we were unable to recover it. 00:36:49.322 [2024-11-17 09:36:54.239778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.322 [2024-11-17 09:36:54.239814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.322 qpair failed and we were unable to recover it. 00:36:49.322 [2024-11-17 09:36:54.240012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.322 [2024-11-17 09:36:54.240069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.322 qpair failed and we were unable to recover it. 
00:36:49.322 [2024-11-17 09:36:54.240215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.322 [2024-11-17 09:36:54.240252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.322 qpair failed and we were unable to recover it. 00:36:49.322 [2024-11-17 09:36:54.240425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.322 [2024-11-17 09:36:54.240461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.322 qpair failed and we were unable to recover it. 00:36:49.322 [2024-11-17 09:36:54.240571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.322 [2024-11-17 09:36:54.240604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.322 qpair failed and we were unable to recover it. 00:36:49.322 [2024-11-17 09:36:54.240755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.322 [2024-11-17 09:36:54.240792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.322 qpair failed and we were unable to recover it. 00:36:49.322 [2024-11-17 09:36:54.241035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.322 [2024-11-17 09:36:54.241072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.322 qpair failed and we were unable to recover it. 00:36:49.322 [2024-11-17 09:36:54.241240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.322 [2024-11-17 09:36:54.241277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.322 qpair failed and we were unable to recover it. 00:36:49.322 [2024-11-17 09:36:54.241433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.322 [2024-11-17 09:36:54.241478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.322 qpair failed and we were unable to recover it. 00:36:49.322 [2024-11-17 09:36:54.241681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.322 [2024-11-17 09:36:54.241719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.322 qpair failed and we were unable to recover it. 00:36:49.322 [2024-11-17 09:36:54.241836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.322 [2024-11-17 09:36:54.241874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.322 qpair failed and we were unable to recover it. 00:36:49.322 [2024-11-17 09:36:54.242085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.322 [2024-11-17 09:36:54.242143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.322 qpair failed and we were unable to recover it. 
00:36:49.322 [2024-11-17 09:36:54.242273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.322 [2024-11-17 09:36:54.242324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.322 qpair failed and we were unable to recover it. 00:36:49.322 [2024-11-17 09:36:54.242461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.322 [2024-11-17 09:36:54.242494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.322 qpair failed and we were unable to recover it. 00:36:49.322 [2024-11-17 09:36:54.242632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.322 [2024-11-17 09:36:54.242665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.322 qpair failed and we were unable to recover it. 00:36:49.322 [2024-11-17 09:36:54.242833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.322 [2024-11-17 09:36:54.242889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.322 qpair failed and we were unable to recover it. 00:36:49.322 [2024-11-17 09:36:54.243008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.322 [2024-11-17 09:36:54.243045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.322 qpair failed and we were unable to recover it. 00:36:49.322 [2024-11-17 09:36:54.243196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.322 [2024-11-17 09:36:54.243235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.322 qpair failed and we were unable to recover it. 00:36:49.322 [2024-11-17 09:36:54.243382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.322 [2024-11-17 09:36:54.243436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.322 qpair failed and we were unable to recover it. 00:36:49.322 [2024-11-17 09:36:54.243583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.322 [2024-11-17 09:36:54.243630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.322 qpair failed and we were unable to recover it. 00:36:49.322 [2024-11-17 09:36:54.243778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.322 [2024-11-17 09:36:54.243813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.322 qpair failed and we were unable to recover it. 00:36:49.322 [2024-11-17 09:36:54.243964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.322 [2024-11-17 09:36:54.244007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.322 qpair failed and we were unable to recover it. 
00:36:49.322 [2024-11-17 09:36:54.244117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.322 [2024-11-17 09:36:54.244166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.322 qpair failed and we were unable to recover it. 00:36:49.322 [2024-11-17 09:36:54.244323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.322 [2024-11-17 09:36:54.244360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.322 qpair failed and we were unable to recover it. 00:36:49.322 [2024-11-17 09:36:54.244503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.322 [2024-11-17 09:36:54.244537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.322 qpair failed and we were unable to recover it. 00:36:49.322 [2024-11-17 09:36:54.244643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.322 [2024-11-17 09:36:54.244677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.322 qpair failed and we were unable to recover it. 00:36:49.322 [2024-11-17 09:36:54.244823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.322 [2024-11-17 09:36:54.244856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.322 qpair failed and we were unable to recover it. 00:36:49.322 [2024-11-17 09:36:54.244999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.322 [2024-11-17 09:36:54.245035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.322 qpair failed and we were unable to recover it. 00:36:49.322 [2024-11-17 09:36:54.245181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.322 [2024-11-17 09:36:54.245218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.322 qpair failed and we were unable to recover it. 00:36:49.322 [2024-11-17 09:36:54.245361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.322 [2024-11-17 09:36:54.245409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.322 qpair failed and we were unable to recover it. 00:36:49.322 [2024-11-17 09:36:54.245563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.322 [2024-11-17 09:36:54.245596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.322 qpair failed and we were unable to recover it. 00:36:49.322 [2024-11-17 09:36:54.245759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.322 [2024-11-17 09:36:54.245793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.322 qpair failed and we were unable to recover it. 
00:36:49.322 [2024-11-17 09:36:54.245972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.322 [2024-11-17 09:36:54.246009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.322 qpair failed and we were unable to recover it. 00:36:49.322 [2024-11-17 09:36:54.246157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.322 [2024-11-17 09:36:54.246196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.322 qpair failed and we were unable to recover it. 00:36:49.322 [2024-11-17 09:36:54.246345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.322 [2024-11-17 09:36:54.246393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.322 qpair failed and we were unable to recover it. 00:36:49.322 [2024-11-17 09:36:54.246546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.322 [2024-11-17 09:36:54.246580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.322 qpair failed and we were unable to recover it. 00:36:49.322 [2024-11-17 09:36:54.246733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.322 [2024-11-17 09:36:54.246770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.322 qpair failed and we were unable to recover it. 00:36:49.322 [2024-11-17 09:36:54.246918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.322 [2024-11-17 09:36:54.246956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.322 qpair failed and we were unable to recover it. 00:36:49.322 [2024-11-17 09:36:54.247097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.322 [2024-11-17 09:36:54.247134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.322 qpair failed and we were unable to recover it. 00:36:49.322 [2024-11-17 09:36:54.247307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.322 [2024-11-17 09:36:54.247345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.322 qpair failed and we were unable to recover it. 00:36:49.322 [2024-11-17 09:36:54.247507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.323 [2024-11-17 09:36:54.247540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.323 qpair failed and we were unable to recover it. 00:36:49.323 [2024-11-17 09:36:54.247698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.323 [2024-11-17 09:36:54.247731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.323 qpair failed and we were unable to recover it. 
00:36:49.323 [2024-11-17 09:36:54.247849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.323 [2024-11-17 09:36:54.247885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.323 qpair failed and we were unable to recover it. 00:36:49.323 [2024-11-17 09:36:54.248031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.323 [2024-11-17 09:36:54.248067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.323 qpair failed and we were unable to recover it. 00:36:49.323 [2024-11-17 09:36:54.248173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.323 [2024-11-17 09:36:54.248210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.323 qpair failed and we were unable to recover it. 00:36:49.323 [2024-11-17 09:36:54.248333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.323 [2024-11-17 09:36:54.248374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.323 qpair failed and we were unable to recover it. 00:36:49.323 [2024-11-17 09:36:54.248529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.323 [2024-11-17 09:36:54.248578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.323 qpair failed and we were unable to recover it. 00:36:49.323 [2024-11-17 09:36:54.248748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.323 [2024-11-17 09:36:54.248784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.323 qpair failed and we were unable to recover it. 00:36:49.323 [2024-11-17 09:36:54.248957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.323 [2024-11-17 09:36:54.249021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.323 qpair failed and we were unable to recover it. 00:36:49.323 [2024-11-17 09:36:54.249189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.323 [2024-11-17 09:36:54.249227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.323 qpair failed and we were unable to recover it. 00:36:49.323 [2024-11-17 09:36:54.249375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.323 [2024-11-17 09:36:54.249445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.323 qpair failed and we were unable to recover it. 00:36:49.323 [2024-11-17 09:36:54.249591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.323 [2024-11-17 09:36:54.249629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.323 qpair failed and we were unable to recover it. 
00:36:49.323 [2024-11-17 09:36:54.249768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.323 [2024-11-17 09:36:54.249813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.323 qpair failed and we were unable to recover it. 00:36:49.323 [2024-11-17 09:36:54.250086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.323 [2024-11-17 09:36:54.250145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.323 qpair failed and we were unable to recover it. 00:36:49.323 [2024-11-17 09:36:54.250262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.323 [2024-11-17 09:36:54.250298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.323 qpair failed and we were unable to recover it. 00:36:49.323 [2024-11-17 09:36:54.250474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.323 [2024-11-17 09:36:54.250508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.323 qpair failed and we were unable to recover it. 00:36:49.323 [2024-11-17 09:36:54.250642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.323 [2024-11-17 09:36:54.250674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.323 qpair failed and we were unable to recover it. 00:36:49.323 [2024-11-17 09:36:54.250813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.323 [2024-11-17 09:36:54.250845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.323 qpair failed and we were unable to recover it. 00:36:49.323 [2024-11-17 09:36:54.250993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.323 [2024-11-17 09:36:54.251069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.323 qpair failed and we were unable to recover it. 00:36:49.323 [2024-11-17 09:36:54.251182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.323 [2024-11-17 09:36:54.251218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.323 qpair failed and we were unable to recover it. 00:36:49.323 [2024-11-17 09:36:54.251345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.323 [2024-11-17 09:36:54.251390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.323 qpair failed and we were unable to recover it. 00:36:49.323 [2024-11-17 09:36:54.251517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.323 [2024-11-17 09:36:54.251550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.323 qpair failed and we were unable to recover it. 
00:36:49.323 [2024-11-17 09:36:54.251702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.323 [2024-11-17 09:36:54.251736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.323 qpair failed and we were unable to recover it. 00:36:49.323 [2024-11-17 09:36:54.251847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.323 [2024-11-17 09:36:54.251880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.323 qpair failed and we were unable to recover it. 00:36:49.323 [2024-11-17 09:36:54.251995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.323 [2024-11-17 09:36:54.252031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.323 qpair failed and we were unable to recover it. 00:36:49.323 [2024-11-17 09:36:54.252194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.323 [2024-11-17 09:36:54.252248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.323 qpair failed and we were unable to recover it. 00:36:49.323 [2024-11-17 09:36:54.252443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.323 [2024-11-17 09:36:54.252492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.323 qpair failed and we were unable to recover it. 00:36:49.323 [2024-11-17 09:36:54.252649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.323 [2024-11-17 09:36:54.252685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.323 qpair failed and we were unable to recover it. 00:36:49.323 [2024-11-17 09:36:54.252825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.323 [2024-11-17 09:36:54.252880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.323 qpair failed and we were unable to recover it. 00:36:49.323 [2024-11-17 09:36:54.253052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.323 [2024-11-17 09:36:54.253102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.323 qpair failed and we were unable to recover it. 00:36:49.323 [2024-11-17 09:36:54.253258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.323 [2024-11-17 09:36:54.253295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.323 qpair failed and we were unable to recover it. 00:36:49.323 [2024-11-17 09:36:54.253471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.323 [2024-11-17 09:36:54.253506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.323 qpair failed and we were unable to recover it. 
00:36:49.323 [2024-11-17 09:36:54.253651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.323 [2024-11-17 09:36:54.253689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.323 qpair failed and we were unable to recover it. 00:36:49.323 [2024-11-17 09:36:54.253868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.323 [2024-11-17 09:36:54.253904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.323 qpair failed and we were unable to recover it. 00:36:49.323 [2024-11-17 09:36:54.254033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.323 [2024-11-17 09:36:54.254080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.323 qpair failed and we were unable to recover it. 00:36:49.323 [2024-11-17 09:36:54.254216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.323 [2024-11-17 09:36:54.254250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.323 qpair failed and we were unable to recover it. 00:36:49.323 [2024-11-17 09:36:54.254378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.323 [2024-11-17 09:36:54.254410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.323 qpair failed and we were unable to recover it. 00:36:49.323 [2024-11-17 09:36:54.254562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.323 [2024-11-17 09:36:54.254594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.323 qpair failed and we were unable to recover it. 00:36:49.605 [2024-11-17 09:36:54.254703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.605 [2024-11-17 09:36:54.254736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.605 qpair failed and we were unable to recover it. 00:36:49.605 [2024-11-17 09:36:54.254885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.605 [2024-11-17 09:36:54.254922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.605 qpair failed and we were unable to recover it. 00:36:49.605 [2024-11-17 09:36:54.255035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.605 [2024-11-17 09:36:54.255070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.605 qpair failed and we were unable to recover it. 00:36:49.605 [2024-11-17 09:36:54.255215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.605 [2024-11-17 09:36:54.255252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.605 qpair failed and we were unable to recover it. 
00:36:49.605 [2024-11-17 09:36:54.255388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.605 [2024-11-17 09:36:54.255421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.605 qpair failed and we were unable to recover it. 00:36:49.605 [2024-11-17 09:36:54.255584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.605 [2024-11-17 09:36:54.255617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.605 qpair failed and we were unable to recover it. 00:36:49.605 [2024-11-17 09:36:54.255747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.605 [2024-11-17 09:36:54.255780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.605 qpair failed and we were unable to recover it. 00:36:49.605 [2024-11-17 09:36:54.255913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.605 [2024-11-17 09:36:54.255950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.605 qpair failed and we were unable to recover it. 00:36:49.605 [2024-11-17 09:36:54.256088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.605 [2024-11-17 09:36:54.256124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.605 qpair failed and we were unable to recover it. 00:36:49.605 [2024-11-17 09:36:54.256236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.605 [2024-11-17 09:36:54.256273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.605 qpair failed and we were unable to recover it. 00:36:49.605 [2024-11-17 09:36:54.256456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.605 [2024-11-17 09:36:54.256494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.605 qpair failed and we were unable to recover it. 00:36:49.605 [2024-11-17 09:36:54.256599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.605 [2024-11-17 09:36:54.256632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.605 qpair failed and we were unable to recover it. 00:36:49.605 [2024-11-17 09:36:54.256767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.605 [2024-11-17 09:36:54.256799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.605 qpair failed and we were unable to recover it. 00:36:49.605 [2024-11-17 09:36:54.256959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.605 [2024-11-17 09:36:54.256996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.605 qpair failed and we were unable to recover it. 
00:36:49.605 [2024-11-17 09:36:54.257117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.605 [2024-11-17 09:36:54.257153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.605 qpair failed and we were unable to recover it. 00:36:49.605 [2024-11-17 09:36:54.257272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.605 [2024-11-17 09:36:54.257307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.605 qpair failed and we were unable to recover it. 00:36:49.605 [2024-11-17 09:36:54.257467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.605 [2024-11-17 09:36:54.257501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.605 qpair failed and we were unable to recover it. 00:36:49.605 [2024-11-17 09:36:54.257638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.605 [2024-11-17 09:36:54.257670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.605 qpair failed and we were unable to recover it. 00:36:49.605 [2024-11-17 09:36:54.257773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.605 [2024-11-17 09:36:54.257824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.605 qpair failed and we were unable to recover it. 00:36:49.605 [2024-11-17 09:36:54.257952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.605 [2024-11-17 09:36:54.257990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.605 qpair failed and we were unable to recover it. 00:36:49.605 [2024-11-17 09:36:54.258110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.605 [2024-11-17 09:36:54.258145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.605 qpair failed and we were unable to recover it. 00:36:49.605 [2024-11-17 09:36:54.258317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.605 [2024-11-17 09:36:54.258352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.605 qpair failed and we were unable to recover it. 00:36:49.605 [2024-11-17 09:36:54.258483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.605 [2024-11-17 09:36:54.258516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.605 qpair failed and we were unable to recover it. 00:36:49.605 [2024-11-17 09:36:54.258638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.605 [2024-11-17 09:36:54.258687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.605 qpair failed and we were unable to recover it. 
00:36:49.605 [2024-11-17 09:36:54.258811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.605 [2024-11-17 09:36:54.258846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.605 qpair failed and we were unable to recover it. 00:36:49.605 [2024-11-17 09:36:54.258972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.605 [2024-11-17 09:36:54.259010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.605 qpair failed and we were unable to recover it. 00:36:49.605 [2024-11-17 09:36:54.259187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.605 [2024-11-17 09:36:54.259224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.605 qpair failed and we were unable to recover it. 00:36:49.605 [2024-11-17 09:36:54.259386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.605 [2024-11-17 09:36:54.259420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.605 qpair failed and we were unable to recover it. 00:36:49.605 [2024-11-17 09:36:54.259560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.606 [2024-11-17 09:36:54.259593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.606 qpair failed and we were unable to recover it. 00:36:49.606 [2024-11-17 09:36:54.259723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.606 [2024-11-17 09:36:54.259757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.606 qpair failed and we were unable to recover it. 00:36:49.606 [2024-11-17 09:36:54.259920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.606 [2024-11-17 09:36:54.259953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.606 qpair failed and we were unable to recover it. 00:36:49.606 [2024-11-17 09:36:54.260135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.606 [2024-11-17 09:36:54.260190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.606 qpair failed and we were unable to recover it. 00:36:49.606 [2024-11-17 09:36:54.260302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.606 [2024-11-17 09:36:54.260338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.606 qpair failed and we were unable to recover it. 00:36:49.606 [2024-11-17 09:36:54.260501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.606 [2024-11-17 09:36:54.260538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.606 qpair failed and we were unable to recover it. 
00:36:49.606 [2024-11-17 09:36:54.260707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.606 [2024-11-17 09:36:54.260740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.606 qpair failed and we were unable to recover it. 00:36:49.606 [2024-11-17 09:36:54.260874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.606 [2024-11-17 09:36:54.260907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.606 qpair failed and we were unable to recover it. 00:36:49.606 [2024-11-17 09:36:54.261027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.606 [2024-11-17 09:36:54.261063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.606 qpair failed and we were unable to recover it. 00:36:49.606 [2024-11-17 09:36:54.261193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.606 [2024-11-17 09:36:54.261226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.606 qpair failed and we were unable to recover it. 00:36:49.606 [2024-11-17 09:36:54.261404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.606 [2024-11-17 09:36:54.261452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.606 qpair failed and we were unable to recover it. 00:36:49.606 [2024-11-17 09:36:54.261561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.606 [2024-11-17 09:36:54.261597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.606 qpair failed and we were unable to recover it. 00:36:49.606 [2024-11-17 09:36:54.261767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.606 [2024-11-17 09:36:54.261805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.606 qpair failed and we were unable to recover it. 00:36:49.606 [2024-11-17 09:36:54.261987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.606 [2024-11-17 09:36:54.262025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.606 qpair failed and we were unable to recover it. 00:36:49.606 [2024-11-17 09:36:54.262141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.606 [2024-11-17 09:36:54.262180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.606 qpair failed and we were unable to recover it. 00:36:49.606 [2024-11-17 09:36:54.262333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.606 [2024-11-17 09:36:54.262373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.606 qpair failed and we were unable to recover it. 
00:36:49.606 [2024-11-17 09:36:54.262510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.606 [2024-11-17 09:36:54.262545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.606 qpair failed and we were unable to recover it. 00:36:49.606 [2024-11-17 09:36:54.262698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.606 [2024-11-17 09:36:54.262746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.606 qpair failed and we were unable to recover it. 00:36:49.606 [2024-11-17 09:36:54.262880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.606 [2024-11-17 09:36:54.262921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.606 qpair failed and we were unable to recover it. 00:36:49.606 [2024-11-17 09:36:54.263107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.606 [2024-11-17 09:36:54.263170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.606 qpair failed and we were unable to recover it. 00:36:49.606 [2024-11-17 09:36:54.263315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.606 [2024-11-17 09:36:54.263376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.606 qpair failed and we were unable to recover it. 00:36:49.606 [2024-11-17 09:36:54.263484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.606 [2024-11-17 09:36:54.263518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.606 qpair failed and we were unable to recover it. 00:36:49.606 [2024-11-17 09:36:54.263649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.606 [2024-11-17 09:36:54.263689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.606 qpair failed and we were unable to recover it. 00:36:49.606 [2024-11-17 09:36:54.263838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.606 [2024-11-17 09:36:54.263872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.606 qpair failed and we were unable to recover it. 00:36:49.606 [2024-11-17 09:36:54.264002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.606 [2024-11-17 09:36:54.264036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.606 qpair failed and we were unable to recover it. 00:36:49.606 [2024-11-17 09:36:54.264166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.606 [2024-11-17 09:36:54.264199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.606 qpair failed and we were unable to recover it. 
00:36:49.606 [2024-11-17 09:36:54.264341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.606 [2024-11-17 09:36:54.264385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.606 qpair failed and we were unable to recover it. 00:36:49.606 [2024-11-17 09:36:54.264523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.606 [2024-11-17 09:36:54.264557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.606 qpair failed and we were unable to recover it. 00:36:49.606 [2024-11-17 09:36:54.264665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.606 [2024-11-17 09:36:54.264698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.606 qpair failed and we were unable to recover it. 00:36:49.606 [2024-11-17 09:36:54.264832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.606 [2024-11-17 09:36:54.264865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.606 qpair failed and we were unable to recover it. 00:36:49.606 [2024-11-17 09:36:54.265005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.606 [2024-11-17 09:36:54.265038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.606 qpair failed and we were unable to recover it. 00:36:49.606 [2024-11-17 09:36:54.265149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.606 [2024-11-17 09:36:54.265186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.606 qpair failed and we were unable to recover it. 00:36:49.606 [2024-11-17 09:36:54.265347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.606 [2024-11-17 09:36:54.265404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.606 qpair failed and we were unable to recover it. 00:36:49.606 [2024-11-17 09:36:54.265544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.606 [2024-11-17 09:36:54.265581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.606 qpair failed and we were unable to recover it. 00:36:49.606 [2024-11-17 09:36:54.265740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.606 [2024-11-17 09:36:54.265775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.606 qpair failed and we were unable to recover it. 00:36:49.606 [2024-11-17 09:36:54.265941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.606 [2024-11-17 09:36:54.265975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.606 qpair failed and we were unable to recover it. 
00:36:49.606 [2024-11-17 09:36:54.266117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.607 [2024-11-17 09:36:54.266151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.607 qpair failed and we were unable to recover it. 00:36:49.607 [2024-11-17 09:36:54.266291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.607 [2024-11-17 09:36:54.266325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.607 qpair failed and we were unable to recover it. 00:36:49.607 [2024-11-17 09:36:54.266506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.607 [2024-11-17 09:36:54.266540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.607 qpair failed and we were unable to recover it. 00:36:49.607 [2024-11-17 09:36:54.266657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.607 [2024-11-17 09:36:54.266690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.607 qpair failed and we were unable to recover it. 00:36:49.607 [2024-11-17 09:36:54.266822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.607 [2024-11-17 09:36:54.266856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.607 qpair failed and we were unable to recover it. 00:36:49.607 [2024-11-17 09:36:54.266992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.607 [2024-11-17 09:36:54.267026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.607 qpair failed and we were unable to recover it. 00:36:49.607 [2024-11-17 09:36:54.267158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.607 [2024-11-17 09:36:54.267190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.607 qpair failed and we were unable to recover it. 00:36:49.607 [2024-11-17 09:36:54.267323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.607 [2024-11-17 09:36:54.267359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.607 qpair failed and we were unable to recover it. 00:36:49.607 [2024-11-17 09:36:54.267497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.607 [2024-11-17 09:36:54.267545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.607 qpair failed and we were unable to recover it. 00:36:49.607 [2024-11-17 09:36:54.267695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.607 [2024-11-17 09:36:54.267731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.607 qpair failed and we were unable to recover it. 
00:36:49.607 [2024-11-17 09:36:54.267866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.607 [2024-11-17 09:36:54.267901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.607 qpair failed and we were unable to recover it. 00:36:49.607 [2024-11-17 09:36:54.268034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.607 [2024-11-17 09:36:54.268069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.607 qpair failed and we were unable to recover it. 00:36:49.607 [2024-11-17 09:36:54.268236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.607 [2024-11-17 09:36:54.268270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.607 qpair failed and we were unable to recover it. 00:36:49.607 [2024-11-17 09:36:54.268440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.607 [2024-11-17 09:36:54.268475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.607 qpair failed and we were unable to recover it. 00:36:49.607 [2024-11-17 09:36:54.268638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.607 [2024-11-17 09:36:54.268681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.607 qpair failed and we were unable to recover it. 00:36:49.607 [2024-11-17 09:36:54.268817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.607 [2024-11-17 09:36:54.268850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.607 qpair failed and we were unable to recover it. 00:36:49.607 [2024-11-17 09:36:54.269011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.607 [2024-11-17 09:36:54.269044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.607 qpair failed and we were unable to recover it. 00:36:49.607 [2024-11-17 09:36:54.269183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.607 [2024-11-17 09:36:54.269215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.607 qpair failed and we were unable to recover it. 00:36:49.607 [2024-11-17 09:36:54.269360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.607 [2024-11-17 09:36:54.269422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.607 qpair failed and we were unable to recover it. 00:36:49.607 [2024-11-17 09:36:54.269527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.607 [2024-11-17 09:36:54.269559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.607 qpair failed and we were unable to recover it. 
00:36:49.607 [2024-11-17 09:36:54.269699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.607 [2024-11-17 09:36:54.269731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.607 qpair failed and we were unable to recover it. 00:36:49.607 [2024-11-17 09:36:54.269887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.607 [2024-11-17 09:36:54.269920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.607 qpair failed and we were unable to recover it. 00:36:49.607 [2024-11-17 09:36:54.270054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.607 [2024-11-17 09:36:54.270087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.607 qpair failed and we were unable to recover it. 00:36:49.607 [2024-11-17 09:36:54.270192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.607 [2024-11-17 09:36:54.270224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.607 qpair failed and we were unable to recover it. 00:36:49.607 [2024-11-17 09:36:54.270362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.607 [2024-11-17 09:36:54.270402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.607 qpair failed and we were unable to recover it. 00:36:49.607 [2024-11-17 09:36:54.270501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.607 [2024-11-17 09:36:54.270533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.607 qpair failed and we were unable to recover it. 00:36:49.607 [2024-11-17 09:36:54.270690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.607 [2024-11-17 09:36:54.270731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.607 qpair failed and we were unable to recover it. 00:36:49.607 [2024-11-17 09:36:54.270830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.607 [2024-11-17 09:36:54.270864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.607 qpair failed and we were unable to recover it. 00:36:49.607 [2024-11-17 09:36:54.270976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.607 [2024-11-17 09:36:54.271011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.607 qpair failed and we were unable to recover it. 00:36:49.607 [2024-11-17 09:36:54.271144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.607 [2024-11-17 09:36:54.271178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.607 qpair failed and we were unable to recover it. 
00:36:49.607 [2024-11-17 09:36:54.271342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.607 [2024-11-17 09:36:54.271383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.607 qpair failed and we were unable to recover it. 00:36:49.607 [2024-11-17 09:36:54.271537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.607 [2024-11-17 09:36:54.271584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.607 qpair failed and we were unable to recover it. 00:36:49.607 [2024-11-17 09:36:54.271742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.607 [2024-11-17 09:36:54.271778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.607 qpair failed and we were unable to recover it. 00:36:49.607 [2024-11-17 09:36:54.271891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.607 [2024-11-17 09:36:54.271925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.607 qpair failed and we were unable to recover it. 00:36:49.607 [2024-11-17 09:36:54.272065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.607 [2024-11-17 09:36:54.272099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.607 qpair failed and we were unable to recover it. 00:36:49.607 [2024-11-17 09:36:54.272234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.607 [2024-11-17 09:36:54.272270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.607 qpair failed and we were unable to recover it. 00:36:49.607 [2024-11-17 09:36:54.272394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.607 [2024-11-17 09:36:54.272429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.607 qpair failed and we were unable to recover it. 00:36:49.607 [2024-11-17 09:36:54.272570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.608 [2024-11-17 09:36:54.272605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.608 qpair failed and we were unable to recover it. 00:36:49.608 [2024-11-17 09:36:54.272750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.608 [2024-11-17 09:36:54.272782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.608 qpair failed and we were unable to recover it. 00:36:49.608 [2024-11-17 09:36:54.272950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.608 [2024-11-17 09:36:54.272984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.608 qpair failed and we were unable to recover it. 
00:36:49.608 [2024-11-17 09:36:54.273125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.608 [2024-11-17 09:36:54.273160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.608 qpair failed and we were unable to recover it. 00:36:49.608 [2024-11-17 09:36:54.273295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.608 [2024-11-17 09:36:54.273329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.608 qpair failed and we were unable to recover it. 00:36:49.608 [2024-11-17 09:36:54.273523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.608 [2024-11-17 09:36:54.273570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.608 qpair failed and we were unable to recover it. 00:36:49.608 [2024-11-17 09:36:54.273820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.608 [2024-11-17 09:36:54.273881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.608 qpair failed and we were unable to recover it. 00:36:49.608 [2024-11-17 09:36:54.274156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.608 [2024-11-17 09:36:54.274224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.608 qpair failed and we were unable to recover it. 00:36:49.608 [2024-11-17 09:36:54.274395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.608 [2024-11-17 09:36:54.274435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.608 qpair failed and we were unable to recover it. 00:36:49.608 [2024-11-17 09:36:54.274560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.608 [2024-11-17 09:36:54.274614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.608 qpair failed and we were unable to recover it. 00:36:49.608 [2024-11-17 09:36:54.274772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.608 [2024-11-17 09:36:54.274811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.608 qpair failed and we were unable to recover it. 00:36:49.608 [2024-11-17 09:36:54.275016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.608 [2024-11-17 09:36:54.275054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.608 qpair failed and we were unable to recover it. 00:36:49.608 [2024-11-17 09:36:54.275194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.608 [2024-11-17 09:36:54.275232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.608 qpair failed and we were unable to recover it. 
00:36:49.608 [2024-11-17 09:36:54.275449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.608 [2024-11-17 09:36:54.275498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.608 qpair failed and we were unable to recover it. 00:36:49.608 [2024-11-17 09:36:54.275626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.608 [2024-11-17 09:36:54.275681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.608 qpair failed and we were unable to recover it. 00:36:49.608 [2024-11-17 09:36:54.275845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.608 [2024-11-17 09:36:54.275885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.608 qpair failed and we were unable to recover it. 00:36:49.608 [2024-11-17 09:36:54.276045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.608 [2024-11-17 09:36:54.276084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.608 qpair failed and we were unable to recover it. 00:36:49.608 [2024-11-17 09:36:54.276224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.608 [2024-11-17 09:36:54.276262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.608 qpair failed and we were unable to recover it. 00:36:49.608 [2024-11-17 09:36:54.276428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.608 [2024-11-17 09:36:54.276462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.608 qpair failed and we were unable to recover it. 00:36:49.608 [2024-11-17 09:36:54.276590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.608 [2024-11-17 09:36:54.276630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.608 qpair failed and we were unable to recover it. 00:36:49.608 [2024-11-17 09:36:54.276784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.608 [2024-11-17 09:36:54.276822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.608 qpair failed and we were unable to recover it. 00:36:49.608 [2024-11-17 09:36:54.276980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.608 [2024-11-17 09:36:54.277019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.608 qpair failed and we were unable to recover it. 00:36:49.608 [2024-11-17 09:36:54.277166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.608 [2024-11-17 09:36:54.277204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.608 qpair failed and we were unable to recover it. 
00:36:49.608 [2024-11-17 09:36:54.277361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.608 [2024-11-17 09:36:54.277425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.608 qpair failed and we were unable to recover it. 00:36:49.608 [2024-11-17 09:36:54.277561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.608 [2024-11-17 09:36:54.277596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.608 qpair failed and we were unable to recover it. 00:36:49.608 [2024-11-17 09:36:54.277711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.608 [2024-11-17 09:36:54.277764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.608 qpair failed and we were unable to recover it. 00:36:49.608 [2024-11-17 09:36:54.277911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.608 [2024-11-17 09:36:54.277948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.608 qpair failed and we were unable to recover it. 00:36:49.608 [2024-11-17 09:36:54.278128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.608 [2024-11-17 09:36:54.278166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.608 qpair failed and we were unable to recover it. 00:36:49.608 [2024-11-17 09:36:54.278283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.608 [2024-11-17 09:36:54.278321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.608 qpair failed and we were unable to recover it. 00:36:49.608 [2024-11-17 09:36:54.278503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.608 [2024-11-17 09:36:54.278557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.608 qpair failed and we were unable to recover it. 00:36:49.608 [2024-11-17 09:36:54.278751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.608 [2024-11-17 09:36:54.278790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.608 qpair failed and we were unable to recover it. 00:36:49.608 [2024-11-17 09:36:54.278942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.608 [2024-11-17 09:36:54.278980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.608 qpair failed and we were unable to recover it. 00:36:49.608 [2024-11-17 09:36:54.279101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.608 [2024-11-17 09:36:54.279138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.608 qpair failed and we were unable to recover it. 
00:36:49.608 [2024-11-17 09:36:54.279292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.608 [2024-11-17 09:36:54.279327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.608 qpair failed and we were unable to recover it. 00:36:49.608 [2024-11-17 09:36:54.279500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.608 [2024-11-17 09:36:54.279548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.608 qpair failed and we were unable to recover it. 00:36:49.608 [2024-11-17 09:36:54.279737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.608 [2024-11-17 09:36:54.279773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.608 qpair failed and we were unable to recover it. 00:36:49.608 [2024-11-17 09:36:54.279927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.608 [2024-11-17 09:36:54.279965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.608 qpair failed and we were unable to recover it. 00:36:49.608 [2024-11-17 09:36:54.280072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.608 [2024-11-17 09:36:54.280110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.608 qpair failed and we were unable to recover it. 00:36:49.609 [2024-11-17 09:36:54.280257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.609 [2024-11-17 09:36:54.280295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.609 qpair failed and we were unable to recover it. 00:36:49.609 [2024-11-17 09:36:54.280462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.609 [2024-11-17 09:36:54.280497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.609 qpair failed and we were unable to recover it. 00:36:49.609 [2024-11-17 09:36:54.280626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.609 [2024-11-17 09:36:54.280660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.609 qpair failed and we were unable to recover it. 00:36:49.609 [2024-11-17 09:36:54.280805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.609 [2024-11-17 09:36:54.280840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.609 qpair failed and we were unable to recover it. 00:36:49.609 [2024-11-17 09:36:54.280988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.609 [2024-11-17 09:36:54.281025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.609 qpair failed and we were unable to recover it. 
00:36:49.609 [2024-11-17 09:36:54.281159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.609 [2024-11-17 09:36:54.281193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.609 qpair failed and we were unable to recover it. 00:36:49.609 [2024-11-17 09:36:54.281327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.609 [2024-11-17 09:36:54.281365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.609 qpair failed and we were unable to recover it. 00:36:49.609 [2024-11-17 09:36:54.281544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.609 [2024-11-17 09:36:54.281581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.609 qpair failed and we were unable to recover it. 00:36:49.609 [2024-11-17 09:36:54.281743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.609 [2024-11-17 09:36:54.281797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.609 qpair failed and we were unable to recover it. 00:36:49.609 [2024-11-17 09:36:54.281958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.609 [2024-11-17 09:36:54.281998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.609 qpair failed and we were unable to recover it. 00:36:49.609 [2024-11-17 09:36:54.282115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.609 [2024-11-17 09:36:54.282153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.609 qpair failed and we were unable to recover it. 00:36:49.609 [2024-11-17 09:36:54.282263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.609 [2024-11-17 09:36:54.282300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.609 qpair failed and we were unable to recover it. 00:36:49.609 [2024-11-17 09:36:54.282427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.609 [2024-11-17 09:36:54.282460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.609 qpair failed and we were unable to recover it. 00:36:49.609 [2024-11-17 09:36:54.282595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.609 [2024-11-17 09:36:54.282628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.609 qpair failed and we were unable to recover it. 00:36:49.609 [2024-11-17 09:36:54.282793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.609 [2024-11-17 09:36:54.282827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.609 qpair failed and we were unable to recover it. 
00:36:49.609 [2024-11-17 09:36:54.282958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.609 [2024-11-17 09:36:54.283011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.609 qpair failed and we were unable to recover it. 00:36:49.609 [2024-11-17 09:36:54.283160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.609 [2024-11-17 09:36:54.283198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.609 qpair failed and we were unable to recover it. 00:36:49.609 [2024-11-17 09:36:54.283345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.609 [2024-11-17 09:36:54.283394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.609 qpair failed and we were unable to recover it. 00:36:49.609 [2024-11-17 09:36:54.283555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.609 [2024-11-17 09:36:54.283589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.609 qpair failed and we were unable to recover it. 00:36:49.609 [2024-11-17 09:36:54.283757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.609 [2024-11-17 09:36:54.283796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.609 qpair failed and we were unable to recover it. 00:36:49.609 [2024-11-17 09:36:54.283979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.609 [2024-11-17 09:36:54.284016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.609 qpair failed and we were unable to recover it. 00:36:49.609 [2024-11-17 09:36:54.284166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.609 [2024-11-17 09:36:54.284203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.609 qpair failed and we were unable to recover it. 00:36:49.609 [2024-11-17 09:36:54.284356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.609 [2024-11-17 09:36:54.284398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.609 qpair failed and we were unable to recover it. 00:36:49.609 [2024-11-17 09:36:54.284538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.609 [2024-11-17 09:36:54.284573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.609 qpair failed and we were unable to recover it. 00:36:49.609 [2024-11-17 09:36:54.284722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.609 [2024-11-17 09:36:54.284770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.609 qpair failed and we were unable to recover it. 
00:36:49.609 [2024-11-17 09:36:54.284927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.609 [2024-11-17 09:36:54.284966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.609 qpair failed and we were unable to recover it. 00:36:49.609 [2024-11-17 09:36:54.285089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.609 [2024-11-17 09:36:54.285127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.609 qpair failed and we were unable to recover it. 00:36:49.609 [2024-11-17 09:36:54.285283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.609 [2024-11-17 09:36:54.285316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.609 qpair failed and we were unable to recover it. 00:36:49.609 [2024-11-17 09:36:54.285493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.609 [2024-11-17 09:36:54.285528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.609 qpair failed and we were unable to recover it. 00:36:49.609 [2024-11-17 09:36:54.285711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.609 [2024-11-17 09:36:54.285747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.609 qpair failed and we were unable to recover it. 00:36:49.609 [2024-11-17 09:36:54.285876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.609 [2024-11-17 09:36:54.285910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.609 qpair failed and we were unable to recover it. 00:36:49.609 [2024-11-17 09:36:54.286051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.610 [2024-11-17 09:36:54.286117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.610 qpair failed and we were unable to recover it. 00:36:49.610 [2024-11-17 09:36:54.286267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.610 [2024-11-17 09:36:54.286305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.610 qpair failed and we were unable to recover it. 00:36:49.610 [2024-11-17 09:36:54.286496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.610 [2024-11-17 09:36:54.286544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.610 qpair failed and we were unable to recover it. 00:36:49.610 [2024-11-17 09:36:54.286693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.610 [2024-11-17 09:36:54.286729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.610 qpair failed and we were unable to recover it. 
00:36:49.610 [2024-11-17 09:36:54.286835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.610 [2024-11-17 09:36:54.286868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.610 qpair failed and we were unable to recover it. 00:36:49.610 [2024-11-17 09:36:54.287006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.610 [2024-11-17 09:36:54.287040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.610 qpair failed and we were unable to recover it. 00:36:49.610 [2024-11-17 09:36:54.287236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.610 [2024-11-17 09:36:54.287274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.610 qpair failed and we were unable to recover it. 00:36:49.610 [2024-11-17 09:36:54.287415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.610 [2024-11-17 09:36:54.287451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.610 qpair failed and we were unable to recover it. 00:36:49.610 [2024-11-17 09:36:54.287617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.610 [2024-11-17 09:36:54.287658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.610 qpair failed and we were unable to recover it. 00:36:49.610 [2024-11-17 09:36:54.287806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.610 [2024-11-17 09:36:54.287844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.610 qpair failed and we were unable to recover it. 00:36:49.610 [2024-11-17 09:36:54.288017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.610 [2024-11-17 09:36:54.288055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.610 qpair failed and we were unable to recover it. 00:36:49.610 [2024-11-17 09:36:54.288162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.610 [2024-11-17 09:36:54.288198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.610 qpair failed and we were unable to recover it. 00:36:49.610 [2024-11-17 09:36:54.288364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.610 [2024-11-17 09:36:54.288407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.610 qpair failed and we were unable to recover it. 00:36:49.610 [2024-11-17 09:36:54.288542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.610 [2024-11-17 09:36:54.288576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.610 qpair failed and we were unable to recover it. 
00:36:49.610 [2024-11-17 09:36:54.288690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.610 [2024-11-17 09:36:54.288724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.610 qpair failed and we were unable to recover it. 00:36:49.610 [2024-11-17 09:36:54.288876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.610 [2024-11-17 09:36:54.288913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.610 qpair failed and we were unable to recover it. 00:36:49.610 [2024-11-17 09:36:54.289060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.610 [2024-11-17 09:36:54.289099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.610 qpair failed and we were unable to recover it. 00:36:49.610 [2024-11-17 09:36:54.289227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.610 [2024-11-17 09:36:54.289266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.610 qpair failed and we were unable to recover it. 00:36:49.610 [2024-11-17 09:36:54.289455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.610 [2024-11-17 09:36:54.289489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.610 qpair failed and we were unable to recover it. 00:36:49.610 [2024-11-17 09:36:54.289627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.610 [2024-11-17 09:36:54.289662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.610 qpair failed and we were unable to recover it. 00:36:49.610 [2024-11-17 09:36:54.289831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.610 [2024-11-17 09:36:54.289863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.610 qpair failed and we were unable to recover it. 00:36:49.610 [2024-11-17 09:36:54.290114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.610 [2024-11-17 09:36:54.290178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.610 qpair failed and we were unable to recover it. 00:36:49.610 [2024-11-17 09:36:54.290327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.610 [2024-11-17 09:36:54.290364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.610 qpair failed and we were unable to recover it. 00:36:49.610 [2024-11-17 09:36:54.290540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.610 [2024-11-17 09:36:54.290574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.610 qpair failed and we were unable to recover it. 
00:36:49.610 [2024-11-17 09:36:54.290707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.610 [2024-11-17 09:36:54.290741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.610 qpair failed and we were unable to recover it. 00:36:49.610 [2024-11-17 09:36:54.290901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.610 [2024-11-17 09:36:54.290936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.610 qpair failed and we were unable to recover it. 00:36:49.610 [2024-11-17 09:36:54.291070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.610 [2024-11-17 09:36:54.291103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.610 qpair failed and we were unable to recover it. 00:36:49.610 [2024-11-17 09:36:54.291270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.610 [2024-11-17 09:36:54.291329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.610 qpair failed and we were unable to recover it. 00:36:49.610 [2024-11-17 09:36:54.291507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.610 [2024-11-17 09:36:54.291556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.610 qpair failed and we were unable to recover it. 00:36:49.610 [2024-11-17 09:36:54.291734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.610 [2024-11-17 09:36:54.291787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.610 qpair failed and we were unable to recover it. 00:36:49.610 [2024-11-17 09:36:54.291986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.610 [2024-11-17 09:36:54.292052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.610 qpair failed and we were unable to recover it. 00:36:49.610 [2024-11-17 09:36:54.292222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.610 [2024-11-17 09:36:54.292260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.610 qpair failed and we were unable to recover it. 00:36:49.610 [2024-11-17 09:36:54.292400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.610 [2024-11-17 09:36:54.292452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.610 qpair failed and we were unable to recover it. 00:36:49.610 [2024-11-17 09:36:54.292585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.610 [2024-11-17 09:36:54.292652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.610 qpair failed and we were unable to recover it. 
00:36:49.610 [2024-11-17 09:36:54.292813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.610 [2024-11-17 09:36:54.292850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.610 qpair failed and we were unable to recover it. 00:36:49.610 [2024-11-17 09:36:54.292990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.610 [2024-11-17 09:36:54.293043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.610 qpair failed and we were unable to recover it. 00:36:49.610 [2024-11-17 09:36:54.293159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.610 [2024-11-17 09:36:54.293197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.610 qpair failed and we were unable to recover it. 00:36:49.610 [2024-11-17 09:36:54.293351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.611 [2024-11-17 09:36:54.293395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.611 qpair failed and we were unable to recover it. 00:36:49.611 [2024-11-17 09:36:54.293520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.611 [2024-11-17 09:36:54.293570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.611 qpair failed and we were unable to recover it. 00:36:49.611 [2024-11-17 09:36:54.293718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.611 [2024-11-17 09:36:54.293752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.611 qpair failed and we were unable to recover it. 00:36:49.611 [2024-11-17 09:36:54.293910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.611 [2024-11-17 09:36:54.293954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.611 qpair failed and we were unable to recover it. 00:36:49.611 [2024-11-17 09:36:54.294096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.611 [2024-11-17 09:36:54.294133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.611 qpair failed and we were unable to recover it. 00:36:49.611 [2024-11-17 09:36:54.294297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.611 [2024-11-17 09:36:54.294332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.611 qpair failed and we were unable to recover it. 00:36:49.611 [2024-11-17 09:36:54.294495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.611 [2024-11-17 09:36:54.294545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.611 qpair failed and we were unable to recover it. 
00:36:49.611 [2024-11-17 09:36:54.294688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.611 [2024-11-17 09:36:54.294743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.611 qpair failed and we were unable to recover it. 00:36:49.611 [2024-11-17 09:36:54.294966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.611 [2024-11-17 09:36:54.295023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.611 qpair failed and we were unable to recover it. 00:36:49.611 [2024-11-17 09:36:54.295187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.611 [2024-11-17 09:36:54.295244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.611 qpair failed and we were unable to recover it. 00:36:49.611 [2024-11-17 09:36:54.295422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.611 [2024-11-17 09:36:54.295457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.611 qpair failed and we were unable to recover it. 00:36:49.611 [2024-11-17 09:36:54.295563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.611 [2024-11-17 09:36:54.295596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.611 qpair failed and we were unable to recover it. 00:36:49.611 [2024-11-17 09:36:54.295749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.611 [2024-11-17 09:36:54.295787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.611 qpair failed and we were unable to recover it. 00:36:49.611 [2024-11-17 09:36:54.295968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.611 [2024-11-17 09:36:54.296006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.611 qpair failed and we were unable to recover it. 00:36:49.611 [2024-11-17 09:36:54.296143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.611 [2024-11-17 09:36:54.296208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.611 qpair failed and we were unable to recover it. 00:36:49.611 [2024-11-17 09:36:54.296377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.611 [2024-11-17 09:36:54.296432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.611 qpair failed and we were unable to recover it. 00:36:49.611 [2024-11-17 09:36:54.296548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.611 [2024-11-17 09:36:54.296601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.611 qpair failed and we were unable to recover it. 
00:36:49.611 [2024-11-17 09:36:54.296807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.611 [2024-11-17 09:36:54.296861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.611 qpair failed and we were unable to recover it. 00:36:49.611 [2024-11-17 09:36:54.297110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.611 [2024-11-17 09:36:54.297177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.611 qpair failed and we were unable to recover it. 00:36:49.611 [2024-11-17 09:36:54.297311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.611 [2024-11-17 09:36:54.297344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.611 qpair failed and we were unable to recover it. 00:36:49.611 [2024-11-17 09:36:54.297522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.611 [2024-11-17 09:36:54.297557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.611 qpair failed and we were unable to recover it. 00:36:49.611 [2024-11-17 09:36:54.297700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.611 [2024-11-17 09:36:54.297735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.611 qpair failed and we were unable to recover it. 00:36:49.611 [2024-11-17 09:36:54.297900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.611 [2024-11-17 09:36:54.297949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.611 qpair failed and we were unable to recover it. 00:36:49.611 [2024-11-17 09:36:54.298157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.611 [2024-11-17 09:36:54.298197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.611 qpair failed and we were unable to recover it. 00:36:49.611 [2024-11-17 09:36:54.298341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.611 [2024-11-17 09:36:54.298386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.611 qpair failed and we were unable to recover it. 00:36:49.611 [2024-11-17 09:36:54.298565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.611 [2024-11-17 09:36:54.298615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.611 qpair failed and we were unable to recover it. 00:36:49.611 [2024-11-17 09:36:54.298781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.611 [2024-11-17 09:36:54.298820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.611 qpair failed and we were unable to recover it. 
00:36:49.611 [2024-11-17 09:36:54.298938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.611 [2024-11-17 09:36:54.298976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.611 qpair failed and we were unable to recover it. 00:36:49.611 [2024-11-17 09:36:54.299124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.611 [2024-11-17 09:36:54.299162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.611 qpair failed and we were unable to recover it. 00:36:49.611 [2024-11-17 09:36:54.299319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.611 [2024-11-17 09:36:54.299353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.611 qpair failed and we were unable to recover it. 00:36:49.611 [2024-11-17 09:36:54.299494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.611 [2024-11-17 09:36:54.299543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.611 qpair failed and we were unable to recover it. 00:36:49.611 [2024-11-17 09:36:54.299687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.611 [2024-11-17 09:36:54.299724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.611 qpair failed and we were unable to recover it. 00:36:49.611 [2024-11-17 09:36:54.299901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.611 [2024-11-17 09:36:54.299962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.611 qpair failed and we were unable to recover it. 00:36:49.611 [2024-11-17 09:36:54.300160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.611 [2024-11-17 09:36:54.300221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.611 qpair failed and we were unable to recover it. 00:36:49.611 [2024-11-17 09:36:54.300402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.611 [2024-11-17 09:36:54.300452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.611 qpair failed and we were unable to recover it. 00:36:49.611 [2024-11-17 09:36:54.300568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.611 [2024-11-17 09:36:54.300605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.611 qpair failed and we were unable to recover it. 00:36:49.611 [2024-11-17 09:36:54.300807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.611 [2024-11-17 09:36:54.300866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.611 qpair failed and we were unable to recover it. 
00:36:49.611 [2024-11-17 09:36:54.301070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.611 [2024-11-17 09:36:54.301126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.611 qpair failed and we were unable to recover it. 00:36:49.612 [2024-11-17 09:36:54.301269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.612 [2024-11-17 09:36:54.301306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.612 qpair failed and we were unable to recover it. 00:36:49.612 [2024-11-17 09:36:54.301488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.612 [2024-11-17 09:36:54.301537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.612 qpair failed and we were unable to recover it. 00:36:49.612 [2024-11-17 09:36:54.301666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.612 [2024-11-17 09:36:54.301716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.612 qpair failed and we were unable to recover it. 00:36:49.612 [2024-11-17 09:36:54.301947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.612 [2024-11-17 09:36:54.302001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.612 qpair failed and we were unable to recover it. 00:36:49.612 [2024-11-17 09:36:54.302234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.612 [2024-11-17 09:36:54.302271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.612 qpair failed and we were unable to recover it. 00:36:49.612 [2024-11-17 09:36:54.302385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.612 [2024-11-17 09:36:54.302426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.612 qpair failed and we were unable to recover it. 00:36:49.612 [2024-11-17 09:36:54.302581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.612 [2024-11-17 09:36:54.302632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.612 qpair failed and we were unable to recover it. 00:36:49.612 [2024-11-17 09:36:54.302778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.612 [2024-11-17 09:36:54.302813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.612 qpair failed and we were unable to recover it. 00:36:49.612 [2024-11-17 09:36:54.302944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.612 [2024-11-17 09:36:54.302997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.612 qpair failed and we were unable to recover it. 
00:36:49.612 [2024-11-17 09:36:54.303136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.612 [2024-11-17 09:36:54.303171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.612 qpair failed and we were unable to recover it. 00:36:49.612 [2024-11-17 09:36:54.303275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.612 [2024-11-17 09:36:54.303310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.612 qpair failed and we were unable to recover it. 00:36:49.612 [2024-11-17 09:36:54.303471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.612 [2024-11-17 09:36:54.303510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.612 qpair failed and we were unable to recover it. 00:36:49.612 [2024-11-17 09:36:54.303677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.612 [2024-11-17 09:36:54.303726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.612 qpair failed and we were unable to recover it. 00:36:49.612 [2024-11-17 09:36:54.303876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.612 [2024-11-17 09:36:54.303924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.612 qpair failed and we were unable to recover it. 00:36:49.612 [2024-11-17 09:36:54.304037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.612 [2024-11-17 09:36:54.304073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.612 qpair failed and we were unable to recover it. 00:36:49.612 [2024-11-17 09:36:54.304217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.612 [2024-11-17 09:36:54.304252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.612 qpair failed and we were unable to recover it. 00:36:49.612 [2024-11-17 09:36:54.304400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.612 [2024-11-17 09:36:54.304449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.612 qpair failed and we were unable to recover it. 00:36:49.612 [2024-11-17 09:36:54.304619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.612 [2024-11-17 09:36:54.304684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.612 qpair failed and we were unable to recover it. 00:36:49.612 [2024-11-17 09:36:54.304934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.612 [2024-11-17 09:36:54.304997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.612 qpair failed and we were unable to recover it. 
00:36:49.612 [2024-11-17 09:36:54.305138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.612 [2024-11-17 09:36:54.305172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.612 qpair failed and we were unable to recover it. 00:36:49.612 [2024-11-17 09:36:54.305336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.612 [2024-11-17 09:36:54.305386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.612 qpair failed and we were unable to recover it. 00:36:49.612 [2024-11-17 09:36:54.305590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.612 [2024-11-17 09:36:54.305653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.612 qpair failed and we were unable to recover it. 00:36:49.612 [2024-11-17 09:36:54.305828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.612 [2024-11-17 09:36:54.305928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.612 qpair failed and we were unable to recover it. 00:36:49.612 [2024-11-17 09:36:54.306082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.612 [2024-11-17 09:36:54.306154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.612 qpair failed and we were unable to recover it. 00:36:49.612 [2024-11-17 09:36:54.306336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.612 [2024-11-17 09:36:54.306384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.612 qpair failed and we were unable to recover it. 00:36:49.612 [2024-11-17 09:36:54.306515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.612 [2024-11-17 09:36:54.306549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.612 qpair failed and we were unable to recover it. 00:36:49.612 [2024-11-17 09:36:54.306681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.612 [2024-11-17 09:36:54.306736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.612 qpair failed and we were unable to recover it. 00:36:49.612 [2024-11-17 09:36:54.306883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.612 [2024-11-17 09:36:54.306938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.612 qpair failed and we were unable to recover it. 00:36:49.612 [2024-11-17 09:36:54.307070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.612 [2024-11-17 09:36:54.307124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.612 qpair failed and we were unable to recover it. 
00:36:49.612 [2024-11-17 09:36:54.307256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.612 [2024-11-17 09:36:54.307295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.612 qpair failed and we were unable to recover it. 00:36:49.612 [2024-11-17 09:36:54.307441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.612 [2024-11-17 09:36:54.307476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.612 qpair failed and we were unable to recover it. 00:36:49.612 [2024-11-17 09:36:54.307606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.612 [2024-11-17 09:36:54.307640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.612 qpair failed and we were unable to recover it. 00:36:49.612 [2024-11-17 09:36:54.307850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.612 [2024-11-17 09:36:54.307888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.612 qpair failed and we were unable to recover it. 00:36:49.612 [2024-11-17 09:36:54.308090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.612 [2024-11-17 09:36:54.308127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.612 qpair failed and we were unable to recover it. 00:36:49.612 [2024-11-17 09:36:54.308277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.612 [2024-11-17 09:36:54.308313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.612 qpair failed and we were unable to recover it. 00:36:49.612 [2024-11-17 09:36:54.308440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.612 [2024-11-17 09:36:54.308475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.612 qpair failed and we were unable to recover it. 00:36:49.612 [2024-11-17 09:36:54.308609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.612 [2024-11-17 09:36:54.308644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.612 qpair failed and we were unable to recover it. 00:36:49.612 [2024-11-17 09:36:54.308792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.612 [2024-11-17 09:36:54.308843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.613 qpair failed and we were unable to recover it. 00:36:49.613 [2024-11-17 09:36:54.308989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.613 [2024-11-17 09:36:54.309027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.613 qpair failed and we were unable to recover it. 
00:36:49.613 [2024-11-17 09:36:54.309216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.613 [2024-11-17 09:36:54.309282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.613 qpair failed and we were unable to recover it. 00:36:49.613 [2024-11-17 09:36:54.309422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.613 [2024-11-17 09:36:54.309466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.613 qpair failed and we were unable to recover it. 00:36:49.613 [2024-11-17 09:36:54.309597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.613 [2024-11-17 09:36:54.309647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.613 qpair failed and we were unable to recover it. 00:36:49.613 [2024-11-17 09:36:54.309815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.613 [2024-11-17 09:36:54.309850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.613 qpair failed and we were unable to recover it. 00:36:49.613 [2024-11-17 09:36:54.309977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.613 [2024-11-17 09:36:54.310015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.613 qpair failed and we were unable to recover it. 00:36:49.613 [2024-11-17 09:36:54.310139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.613 [2024-11-17 09:36:54.310191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.613 qpair failed and we were unable to recover it. 00:36:49.613 [2024-11-17 09:36:54.310336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.613 [2024-11-17 09:36:54.310391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.613 qpair failed and we were unable to recover it. 00:36:49.613 [2024-11-17 09:36:54.310543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.613 [2024-11-17 09:36:54.310592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.613 qpair failed and we were unable to recover it. 00:36:49.613 [2024-11-17 09:36:54.310827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.613 [2024-11-17 09:36:54.310868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.613 qpair failed and we were unable to recover it. 00:36:49.613 [2024-11-17 09:36:54.310998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.613 [2024-11-17 09:36:54.311037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.613 qpair failed and we were unable to recover it. 
00:36:49.613 [2024-11-17 09:36:54.311161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.613 [2024-11-17 09:36:54.311199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.613 qpair failed and we were unable to recover it. 00:36:49.613 [2024-11-17 09:36:54.311375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.613 [2024-11-17 09:36:54.311426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.613 qpair failed and we were unable to recover it. 00:36:49.613 [2024-11-17 09:36:54.311542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.613 [2024-11-17 09:36:54.311579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.613 qpair failed and we were unable to recover it. 00:36:49.613 [2024-11-17 09:36:54.311708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.613 [2024-11-17 09:36:54.311744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.613 qpair failed and we were unable to recover it. 00:36:49.613 [2024-11-17 09:36:54.311859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.613 [2024-11-17 09:36:54.311892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.613 qpair failed and we were unable to recover it. 00:36:49.613 [2024-11-17 09:36:54.312058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.613 [2024-11-17 09:36:54.312113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.613 qpair failed and we were unable to recover it. 00:36:49.613 [2024-11-17 09:36:54.312243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.613 [2024-11-17 09:36:54.312296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.613 qpair failed and we were unable to recover it. 00:36:49.613 [2024-11-17 09:36:54.312453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.613 [2024-11-17 09:36:54.312487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.613 qpair failed and we were unable to recover it. 00:36:49.613 [2024-11-17 09:36:54.312614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.613 [2024-11-17 09:36:54.312671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.613 qpair failed and we were unable to recover it. 00:36:49.613 [2024-11-17 09:36:54.312888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.613 [2024-11-17 09:36:54.312957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.613 qpair failed and we were unable to recover it. 
00:36:49.613 [2024-11-17 09:36:54.313171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.613 [2024-11-17 09:36:54.313229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.613 qpair failed and we were unable to recover it. 00:36:49.613 [2024-11-17 09:36:54.313381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.613 [2024-11-17 09:36:54.313417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.613 qpair failed and we were unable to recover it. 00:36:49.613 [2024-11-17 09:36:54.313528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.613 [2024-11-17 09:36:54.313564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.613 qpair failed and we were unable to recover it. 00:36:49.613 [2024-11-17 09:36:54.313754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.613 [2024-11-17 09:36:54.313827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.613 qpair failed and we were unable to recover it. 00:36:49.613 [2024-11-17 09:36:54.314001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.613 [2024-11-17 09:36:54.314060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.613 qpair failed and we were unable to recover it. 00:36:49.613 [2024-11-17 09:36:54.314233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.613 [2024-11-17 09:36:54.314271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.613 qpair failed and we were unable to recover it. 00:36:49.613 [2024-11-17 09:36:54.314409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.613 [2024-11-17 09:36:54.314445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.613 qpair failed and we were unable to recover it. 00:36:49.613 [2024-11-17 09:36:54.314574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.613 [2024-11-17 09:36:54.314612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.613 qpair failed and we were unable to recover it. 00:36:49.613 [2024-11-17 09:36:54.314771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.613 [2024-11-17 09:36:54.314808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.613 qpair failed and we were unable to recover it. 00:36:49.613 [2024-11-17 09:36:54.314945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.613 [2024-11-17 09:36:54.314982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.613 qpair failed and we were unable to recover it. 
00:36:49.613 [2024-11-17 09:36:54.315164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.613 [2024-11-17 09:36:54.315219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.613 qpair failed and we were unable to recover it. 00:36:49.613 [2024-11-17 09:36:54.315374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.613 [2024-11-17 09:36:54.315442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.613 qpair failed and we were unable to recover it. 00:36:49.613 [2024-11-17 09:36:54.315566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.613 [2024-11-17 09:36:54.315603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.613 qpair failed and we were unable to recover it. 00:36:49.613 [2024-11-17 09:36:54.315755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.613 [2024-11-17 09:36:54.315795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.613 qpair failed and we were unable to recover it. 00:36:49.613 [2024-11-17 09:36:54.315962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.613 [2024-11-17 09:36:54.316014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.613 qpair failed and we were unable to recover it. 00:36:49.613 [2024-11-17 09:36:54.316188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.613 [2024-11-17 09:36:54.316226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.613 qpair failed and we were unable to recover it. 00:36:49.613 [2024-11-17 09:36:54.316357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.614 [2024-11-17 09:36:54.316397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.614 qpair failed and we were unable to recover it. 00:36:49.614 [2024-11-17 09:36:54.316539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.614 [2024-11-17 09:36:54.316573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.614 qpair failed and we were unable to recover it. 00:36:49.614 [2024-11-17 09:36:54.316748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.614 [2024-11-17 09:36:54.316803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.614 qpair failed and we were unable to recover it. 00:36:49.614 [2024-11-17 09:36:54.317063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.614 [2024-11-17 09:36:54.317131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.614 qpair failed and we were unable to recover it. 
00:36:49.614 [2024-11-17 09:36:54.317310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.614 [2024-11-17 09:36:54.317345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.614 qpair failed and we were unable to recover it. 00:36:49.614 [2024-11-17 09:36:54.317472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.614 [2024-11-17 09:36:54.317507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.614 qpair failed and we were unable to recover it. 00:36:49.614 [2024-11-17 09:36:54.317639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.614 [2024-11-17 09:36:54.317681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.614 qpair failed and we were unable to recover it. 00:36:49.614 [2024-11-17 09:36:54.317814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.614 [2024-11-17 09:36:54.317849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.614 qpair failed and we were unable to recover it. 00:36:49.614 [2024-11-17 09:36:54.317998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.614 [2024-11-17 09:36:54.318052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.614 qpair failed and we were unable to recover it. 00:36:49.614 [2024-11-17 09:36:54.318232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.614 [2024-11-17 09:36:54.318285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.614 qpair failed and we were unable to recover it. 00:36:49.614 [2024-11-17 09:36:54.318426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.614 [2024-11-17 09:36:54.318467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.614 qpair failed and we were unable to recover it. 00:36:49.614 [2024-11-17 09:36:54.318582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.614 [2024-11-17 09:36:54.318616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.614 qpair failed and we were unable to recover it. 00:36:49.614 [2024-11-17 09:36:54.318782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.614 [2024-11-17 09:36:54.318816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.614 qpair failed and we were unable to recover it. 00:36:49.614 [2024-11-17 09:36:54.318970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.614 [2024-11-17 09:36:54.319040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.614 qpair failed and we were unable to recover it. 
00:36:49.614 [2024-11-17 09:36:54.319196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.614 [2024-11-17 09:36:54.319236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.614 qpair failed and we were unable to recover it. 00:36:49.614 [2024-11-17 09:36:54.319381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.614 [2024-11-17 09:36:54.319416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.614 qpair failed and we were unable to recover it. 00:36:49.614 [2024-11-17 09:36:54.319554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.614 [2024-11-17 09:36:54.319589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.614 qpair failed and we were unable to recover it. 00:36:49.614 [2024-11-17 09:36:54.319727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.614 [2024-11-17 09:36:54.319762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.614 qpair failed and we were unable to recover it. 00:36:49.614 [2024-11-17 09:36:54.319920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.614 [2024-11-17 09:36:54.319958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.614 qpair failed and we were unable to recover it. 00:36:49.614 [2024-11-17 09:36:54.320108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.614 [2024-11-17 09:36:54.320149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.614 qpair failed and we were unable to recover it. 00:36:49.614 [2024-11-17 09:36:54.320306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.614 [2024-11-17 09:36:54.320343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.614 qpair failed and we were unable to recover it. 00:36:49.614 [2024-11-17 09:36:54.320538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.614 [2024-11-17 09:36:54.320588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.614 qpair failed and we were unable to recover it. 00:36:49.614 [2024-11-17 09:36:54.320749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.614 [2024-11-17 09:36:54.320805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.614 qpair failed and we were unable to recover it. 00:36:49.614 [2024-11-17 09:36:54.320989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.614 [2024-11-17 09:36:54.321043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.614 qpair failed and we were unable to recover it. 
00:36:49.614 [2024-11-17 09:36:54.321183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.614 [2024-11-17 09:36:54.321218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.614 qpair failed and we were unable to recover it. 00:36:49.614 [2024-11-17 09:36:54.321331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.614 [2024-11-17 09:36:54.321385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.614 qpair failed and we were unable to recover it. 00:36:49.614 [2024-11-17 09:36:54.321542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.614 [2024-11-17 09:36:54.321591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.614 qpair failed and we were unable to recover it. 00:36:49.614 [2024-11-17 09:36:54.321753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.614 [2024-11-17 09:36:54.321810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.614 qpair failed and we were unable to recover it. 00:36:49.614 [2024-11-17 09:36:54.322025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.614 [2024-11-17 09:36:54.322065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.614 qpair failed and we were unable to recover it. 00:36:49.614 [2024-11-17 09:36:54.322209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.614 [2024-11-17 09:36:54.322243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.614 qpair failed and we were unable to recover it. 00:36:49.614 [2024-11-17 09:36:54.322362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.614 [2024-11-17 09:36:54.322404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.614 qpair failed and we were unable to recover it. 00:36:49.614 [2024-11-17 09:36:54.322552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.614 [2024-11-17 09:36:54.322587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.614 qpair failed and we were unable to recover it. 00:36:49.614 [2024-11-17 09:36:54.322708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.614 [2024-11-17 09:36:54.322762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.614 qpair failed and we were unable to recover it. 00:36:49.614 [2024-11-17 09:36:54.322899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.614 [2024-11-17 09:36:54.322936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.614 qpair failed and we were unable to recover it. 
00:36:49.615 [2024-11-17 09:36:54.323088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.615 [2024-11-17 09:36:54.323127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.615 qpair failed and we were unable to recover it. 00:36:49.615 [2024-11-17 09:36:54.323265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.615 [2024-11-17 09:36:54.323331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.615 qpair failed and we were unable to recover it. 00:36:49.615 [2024-11-17 09:36:54.323519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.615 [2024-11-17 09:36:54.323557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.615 qpair failed and we were unable to recover it. 00:36:49.615 [2024-11-17 09:36:54.323827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.615 [2024-11-17 09:36:54.323900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.615 qpair failed and we were unable to recover it. 00:36:49.615 [2024-11-17 09:36:54.324093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.615 [2024-11-17 09:36:54.324159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.615 qpair failed and we were unable to recover it. 00:36:49.615 [2024-11-17 09:36:54.324271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.615 [2024-11-17 09:36:54.324306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.615 qpair failed and we were unable to recover it. 00:36:49.615 [2024-11-17 09:36:54.324440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.615 [2024-11-17 09:36:54.324476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.615 qpair failed and we were unable to recover it. 00:36:49.615 [2024-11-17 09:36:54.324629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.615 [2024-11-17 09:36:54.324687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.615 qpair failed and we were unable to recover it. 00:36:49.615 [2024-11-17 09:36:54.324841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.615 [2024-11-17 09:36:54.324881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.615 qpair failed and we were unable to recover it. 00:36:49.615 [2024-11-17 09:36:54.324999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.615 [2024-11-17 09:36:54.325038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.615 qpair failed and we were unable to recover it. 
00:36:49.615 [2024-11-17 09:36:54.325211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.615 [2024-11-17 09:36:54.325249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.615 qpair failed and we were unable to recover it. 00:36:49.615 [2024-11-17 09:36:54.325434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.615 [2024-11-17 09:36:54.325484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.615 qpair failed and we were unable to recover it. 00:36:49.615 [2024-11-17 09:36:54.325672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.615 [2024-11-17 09:36:54.325711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.615 qpair failed and we were unable to recover it. 00:36:49.615 [2024-11-17 09:36:54.325925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.615 [2024-11-17 09:36:54.325978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.615 qpair failed and we were unable to recover it. 00:36:49.615 [2024-11-17 09:36:54.326126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.615 [2024-11-17 09:36:54.326178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.615 qpair failed and we were unable to recover it. 00:36:49.615 [2024-11-17 09:36:54.326313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.615 [2024-11-17 09:36:54.326356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.615 qpair failed and we were unable to recover it. 00:36:49.615 [2024-11-17 09:36:54.326512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.615 [2024-11-17 09:36:54.326571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.615 qpair failed and we were unable to recover it. 00:36:49.615 [2024-11-17 09:36:54.326733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.615 [2024-11-17 09:36:54.326786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.615 qpair failed and we were unable to recover it. 00:36:49.615 [2024-11-17 09:36:54.326935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.615 [2024-11-17 09:36:54.326974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.615 qpair failed and we were unable to recover it. 00:36:49.615 [2024-11-17 09:36:54.327144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.615 [2024-11-17 09:36:54.327182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.615 qpair failed and we were unable to recover it. 
00:36:49.615 [2024-11-17 09:36:54.327316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.615 [2024-11-17 09:36:54.327357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.615 qpair failed and we were unable to recover it. 00:36:49.615 [2024-11-17 09:36:54.327507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.615 [2024-11-17 09:36:54.327542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.615 qpair failed and we were unable to recover it. 00:36:49.615 [2024-11-17 09:36:54.327657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.615 [2024-11-17 09:36:54.327691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.615 qpair failed and we were unable to recover it. 00:36:49.615 [2024-11-17 09:36:54.327819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.615 [2024-11-17 09:36:54.327854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.615 qpair failed and we were unable to recover it. 00:36:49.615 [2024-11-17 09:36:54.327989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.615 [2024-11-17 09:36:54.328024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.615 qpair failed and we were unable to recover it. 00:36:49.615 [2024-11-17 09:36:54.328240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.615 [2024-11-17 09:36:54.328275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.615 qpair failed and we were unable to recover it. 00:36:49.615 [2024-11-17 09:36:54.328400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.615 [2024-11-17 09:36:54.328436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.615 qpair failed and we were unable to recover it. 00:36:49.615 [2024-11-17 09:36:54.328587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.615 [2024-11-17 09:36:54.328635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.615 qpair failed and we were unable to recover it. 00:36:49.615 [2024-11-17 09:36:54.328792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.615 [2024-11-17 09:36:54.328830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.615 qpair failed and we were unable to recover it. 00:36:49.615 [2024-11-17 09:36:54.328976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.615 [2024-11-17 09:36:54.329011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.615 qpair failed and we were unable to recover it. 
00:36:49.615 [2024-11-17 09:36:54.329187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.615 [2024-11-17 09:36:54.329223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.615 qpair failed and we were unable to recover it. 00:36:49.615 [2024-11-17 09:36:54.329375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.615 [2024-11-17 09:36:54.329411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.615 qpair failed and we were unable to recover it. 00:36:49.615 [2024-11-17 09:36:54.329559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.615 [2024-11-17 09:36:54.329609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.615 qpair failed and we were unable to recover it. 00:36:49.615 [2024-11-17 09:36:54.329765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.615 [2024-11-17 09:36:54.329821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.615 qpair failed and we were unable to recover it. 00:36:49.615 [2024-11-17 09:36:54.329981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.615 [2024-11-17 09:36:54.330033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.615 qpair failed and we were unable to recover it. 00:36:49.615 [2024-11-17 09:36:54.330159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.615 [2024-11-17 09:36:54.330208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.615 qpair failed and we were unable to recover it. 00:36:49.615 [2024-11-17 09:36:54.330361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.615 [2024-11-17 09:36:54.330406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.615 qpair failed and we were unable to recover it. 00:36:49.615 [2024-11-17 09:36:54.330571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.615 [2024-11-17 09:36:54.330619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.616 qpair failed and we were unable to recover it. 00:36:49.616 [2024-11-17 09:36:54.330800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.616 [2024-11-17 09:36:54.330835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.616 qpair failed and we were unable to recover it. 00:36:49.616 [2024-11-17 09:36:54.330987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.616 [2024-11-17 09:36:54.331039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.616 qpair failed and we were unable to recover it. 
00:36:49.616 [2024-11-17 09:36:54.331178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.616 [2024-11-17 09:36:54.331212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.616 qpair failed and we were unable to recover it. 00:36:49.616 [2024-11-17 09:36:54.331325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.616 [2024-11-17 09:36:54.331377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.616 qpair failed and we were unable to recover it. 00:36:49.616 [2024-11-17 09:36:54.331528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.616 [2024-11-17 09:36:54.331577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.616 qpair failed and we were unable to recover it. 00:36:49.616 [2024-11-17 09:36:54.331734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.616 [2024-11-17 09:36:54.331771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.616 qpair failed and we were unable to recover it. 00:36:49.616 [2024-11-17 09:36:54.331875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.616 [2024-11-17 09:36:54.331911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.616 qpair failed and we were unable to recover it. 00:36:49.616 [2024-11-17 09:36:54.332061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.616 [2024-11-17 09:36:54.332097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.616 qpair failed and we were unable to recover it. 00:36:49.616 [2024-11-17 09:36:54.332203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.616 [2024-11-17 09:36:54.332239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.616 qpair failed and we were unable to recover it. 00:36:49.616 [2024-11-17 09:36:54.332418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.616 [2024-11-17 09:36:54.332467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.616 qpair failed and we were unable to recover it. 00:36:49.616 [2024-11-17 09:36:54.332642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.616 [2024-11-17 09:36:54.332679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.616 qpair failed and we were unable to recover it. 00:36:49.616 [2024-11-17 09:36:54.332788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.616 [2024-11-17 09:36:54.332826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.616 qpair failed and we were unable to recover it. 
00:36:49.616 [2024-11-17 09:36:54.332936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.616 [2024-11-17 09:36:54.332971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.616 qpair failed and we were unable to recover it. 00:36:49.616 [2024-11-17 09:36:54.333080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.616 [2024-11-17 09:36:54.333116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.616 qpair failed and we were unable to recover it. 00:36:49.616 [2024-11-17 09:36:54.333272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.616 [2024-11-17 09:36:54.333307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.616 qpair failed and we were unable to recover it. 00:36:49.616 [2024-11-17 09:36:54.333436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.616 [2024-11-17 09:36:54.333473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.616 qpair failed and we were unable to recover it. 00:36:49.616 [2024-11-17 09:36:54.333581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.616 [2024-11-17 09:36:54.333616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.616 qpair failed and we were unable to recover it. 00:36:49.616 [2024-11-17 09:36:54.333745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.616 [2024-11-17 09:36:54.333781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.616 qpair failed and we were unable to recover it. 00:36:49.616 [2024-11-17 09:36:54.333915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.616 [2024-11-17 09:36:54.333955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.616 qpair failed and we were unable to recover it. 00:36:49.616 [2024-11-17 09:36:54.334094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.616 [2024-11-17 09:36:54.334129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.616 qpair failed and we were unable to recover it. 00:36:49.616 [2024-11-17 09:36:54.334266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.616 [2024-11-17 09:36:54.334300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.616 qpair failed and we were unable to recover it. 00:36:49.616 [2024-11-17 09:36:54.334435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.616 [2024-11-17 09:36:54.334470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.616 qpair failed and we were unable to recover it. 
00:36:49.616 [2024-11-17 09:36:54.334633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.616 [2024-11-17 09:36:54.334675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.616 qpair failed and we were unable to recover it. 00:36:49.616 [2024-11-17 09:36:54.334814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.616 [2024-11-17 09:36:54.334848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.616 qpair failed and we were unable to recover it. 00:36:49.616 [2024-11-17 09:36:54.334987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.616 [2024-11-17 09:36:54.335021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.616 qpair failed and we were unable to recover it. 00:36:49.616 [2024-11-17 09:36:54.335159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.616 [2024-11-17 09:36:54.335194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.616 qpair failed and we were unable to recover it. 00:36:49.616 [2024-11-17 09:36:54.335361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.616 [2024-11-17 09:36:54.335403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.616 qpair failed and we were unable to recover it. 00:36:49.616 [2024-11-17 09:36:54.335541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.616 [2024-11-17 09:36:54.335576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.616 qpair failed and we were unable to recover it. 00:36:49.616 [2024-11-17 09:36:54.335743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.616 [2024-11-17 09:36:54.335792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.616 qpair failed and we were unable to recover it. 00:36:49.616 [2024-11-17 09:36:54.335913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.616 [2024-11-17 09:36:54.335949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.616 qpair failed and we were unable to recover it. 00:36:49.616 [2024-11-17 09:36:54.336113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.616 [2024-11-17 09:36:54.336147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.616 qpair failed and we were unable to recover it. 00:36:49.616 [2024-11-17 09:36:54.336285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.616 [2024-11-17 09:36:54.336320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.616 qpair failed and we were unable to recover it. 
00:36:49.616 [2024-11-17 09:36:54.336470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.616 [2024-11-17 09:36:54.336505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.616 qpair failed and we were unable to recover it. 00:36:49.616 [2024-11-17 09:36:54.336619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.616 [2024-11-17 09:36:54.336666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.616 qpair failed and we were unable to recover it. 00:36:49.616 [2024-11-17 09:36:54.336802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.616 [2024-11-17 09:36:54.336837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.616 qpair failed and we were unable to recover it. 00:36:49.616 [2024-11-17 09:36:54.336974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.616 [2024-11-17 09:36:54.337008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.616 qpair failed and we were unable to recover it. 00:36:49.616 [2024-11-17 09:36:54.337115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.616 [2024-11-17 09:36:54.337150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.617 qpair failed and we were unable to recover it. 00:36:49.617 [2024-11-17 09:36:54.337294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.617 [2024-11-17 09:36:54.337331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.617 qpair failed and we were unable to recover it. 00:36:49.617 [2024-11-17 09:36:54.337481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.617 [2024-11-17 09:36:54.337530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.617 qpair failed and we were unable to recover it. 00:36:49.617 [2024-11-17 09:36:54.337657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.617 [2024-11-17 09:36:54.337693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.617 qpair failed and we were unable to recover it. 00:36:49.617 [2024-11-17 09:36:54.337838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.617 [2024-11-17 09:36:54.337873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.617 qpair failed and we were unable to recover it. 00:36:49.617 [2024-11-17 09:36:54.338041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.617 [2024-11-17 09:36:54.338075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.617 qpair failed and we were unable to recover it. 
00:36:49.617 [2024-11-17 09:36:54.338216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.617 [2024-11-17 09:36:54.338250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.617 qpair failed and we were unable to recover it. 00:36:49.617 [2024-11-17 09:36:54.338384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.617 [2024-11-17 09:36:54.338419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.617 qpair failed and we were unable to recover it. 00:36:49.617 [2024-11-17 09:36:54.338527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.617 [2024-11-17 09:36:54.338563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.617 qpair failed and we were unable to recover it. 00:36:49.617 [2024-11-17 09:36:54.338754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.617 [2024-11-17 09:36:54.338802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.617 qpair failed and we were unable to recover it. 00:36:49.617 [2024-11-17 09:36:54.338973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.617 [2024-11-17 09:36:54.339009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.617 qpair failed and we were unable to recover it. 00:36:49.617 [2024-11-17 09:36:54.339265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.617 [2024-11-17 09:36:54.339314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.617 qpair failed and we were unable to recover it. 00:36:49.617 [2024-11-17 09:36:54.339442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.617 [2024-11-17 09:36:54.339479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.617 qpair failed and we were unable to recover it. 00:36:49.617 [2024-11-17 09:36:54.339636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.617 [2024-11-17 09:36:54.339686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.617 qpair failed and we were unable to recover it. 00:36:49.617 [2024-11-17 09:36:54.339856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.617 [2024-11-17 09:36:54.339892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.617 qpair failed and we were unable to recover it. 00:36:49.617 [2024-11-17 09:36:54.340027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.617 [2024-11-17 09:36:54.340061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.617 qpair failed and we were unable to recover it. 
00:36:49.617 [2024-11-17 09:36:54.340175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.617 [2024-11-17 09:36:54.340208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.617 qpair failed and we were unable to recover it. 00:36:49.617 [2024-11-17 09:36:54.340330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.617 [2024-11-17 09:36:54.340395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.617 qpair failed and we were unable to recover it. 00:36:49.617 [2024-11-17 09:36:54.340522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.617 [2024-11-17 09:36:54.340560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.617 qpair failed and we were unable to recover it. 00:36:49.617 [2024-11-17 09:36:54.340702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.617 [2024-11-17 09:36:54.340738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.617 qpair failed and we were unable to recover it. 00:36:49.617 [2024-11-17 09:36:54.340862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.617 [2024-11-17 09:36:54.340896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.617 qpair failed and we were unable to recover it. 00:36:49.617 [2024-11-17 09:36:54.341038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.617 [2024-11-17 09:36:54.341084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.617 qpair failed and we were unable to recover it. 00:36:49.617 [2024-11-17 09:36:54.341197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.617 [2024-11-17 09:36:54.341238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.617 qpair failed and we were unable to recover it. 00:36:49.617 [2024-11-17 09:36:54.341355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.617 [2024-11-17 09:36:54.341401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.617 qpair failed and we were unable to recover it. 00:36:49.617 [2024-11-17 09:36:54.341535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.617 [2024-11-17 09:36:54.341585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.617 qpair failed and we were unable to recover it. 00:36:49.617 [2024-11-17 09:36:54.341742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.617 [2024-11-17 09:36:54.341780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.617 qpair failed and we were unable to recover it. 
00:36:49.617 [2024-11-17 09:36:54.341920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.617 [2024-11-17 09:36:54.341956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.617 qpair failed and we were unable to recover it. 00:36:49.617 [2024-11-17 09:36:54.342128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.617 [2024-11-17 09:36:54.342163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.617 qpair failed and we were unable to recover it. 00:36:49.617 [2024-11-17 09:36:54.342309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.617 [2024-11-17 09:36:54.342345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.617 qpair failed and we were unable to recover it. 00:36:49.617 [2024-11-17 09:36:54.342492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.617 [2024-11-17 09:36:54.342527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.617 qpair failed and we were unable to recover it. 00:36:49.617 [2024-11-17 09:36:54.342635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.617 [2024-11-17 09:36:54.342669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.617 qpair failed and we were unable to recover it. 00:36:49.617 [2024-11-17 09:36:54.342807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.617 [2024-11-17 09:36:54.342841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.617 qpair failed and we were unable to recover it. 00:36:49.617 [2024-11-17 09:36:54.342982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.617 [2024-11-17 09:36:54.343016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.617 qpair failed and we were unable to recover it. 00:36:49.617 [2024-11-17 09:36:54.343146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.617 [2024-11-17 09:36:54.343182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.617 qpair failed and we were unable to recover it. 00:36:49.617 [2024-11-17 09:36:54.343320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.617 [2024-11-17 09:36:54.343357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.617 qpair failed and we were unable to recover it. 00:36:49.617 [2024-11-17 09:36:54.343550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.617 [2024-11-17 09:36:54.343599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.617 qpair failed and we were unable to recover it. 
00:36:49.617 [2024-11-17 09:36:54.343733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.617 [2024-11-17 09:36:54.343782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.617 qpair failed and we were unable to recover it. 00:36:49.617 [2024-11-17 09:36:54.343905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.617 [2024-11-17 09:36:54.343942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.617 qpair failed and we were unable to recover it. 00:36:49.617 [2024-11-17 09:36:54.344051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.618 [2024-11-17 09:36:54.344085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.618 qpair failed and we were unable to recover it. 00:36:49.618 [2024-11-17 09:36:54.344224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.618 [2024-11-17 09:36:54.344258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.618 qpair failed and we were unable to recover it. 00:36:49.618 [2024-11-17 09:36:54.344386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.618 [2024-11-17 09:36:54.344422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.618 qpair failed and we were unable to recover it. 00:36:49.618 [2024-11-17 09:36:54.344609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.618 [2024-11-17 09:36:54.344659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.618 qpair failed and we were unable to recover it. 00:36:49.618 [2024-11-17 09:36:54.344811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.618 [2024-11-17 09:36:54.344849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.618 qpair failed and we were unable to recover it. 00:36:49.618 [2024-11-17 09:36:54.345014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.618 [2024-11-17 09:36:54.345049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.618 qpair failed and we were unable to recover it. 00:36:49.618 [2024-11-17 09:36:54.345147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.618 [2024-11-17 09:36:54.345181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.618 qpair failed and we were unable to recover it. 00:36:49.618 [2024-11-17 09:36:54.345341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.618 [2024-11-17 09:36:54.345381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.618 qpair failed and we were unable to recover it. 
00:36:49.618 [2024-11-17 09:36:54.345507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.618 [2024-11-17 09:36:54.345557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.618 qpair failed and we were unable to recover it. 00:36:49.618 [2024-11-17 09:36:54.345733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.618 [2024-11-17 09:36:54.345769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.618 qpair failed and we were unable to recover it. 00:36:49.618 [2024-11-17 09:36:54.345932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.618 [2024-11-17 09:36:54.345967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.618 qpair failed and we were unable to recover it. 00:36:49.618 [2024-11-17 09:36:54.346104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.618 [2024-11-17 09:36:54.346140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.618 qpair failed and we were unable to recover it. 00:36:49.618 [2024-11-17 09:36:54.346303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.618 [2024-11-17 09:36:54.346338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.618 qpair failed and we were unable to recover it. 00:36:49.618 [2024-11-17 09:36:54.346503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.618 [2024-11-17 09:36:54.346553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.618 qpair failed and we were unable to recover it. 00:36:49.618 [2024-11-17 09:36:54.346683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.618 [2024-11-17 09:36:54.346732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.618 qpair failed and we were unable to recover it. 00:36:49.618 [2024-11-17 09:36:54.346878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.618 [2024-11-17 09:36:54.346915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.618 qpair failed and we were unable to recover it. 00:36:49.618 [2024-11-17 09:36:54.347050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.618 [2024-11-17 09:36:54.347085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.618 qpair failed and we were unable to recover it. 00:36:49.618 [2024-11-17 09:36:54.347247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.618 [2024-11-17 09:36:54.347285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.618 qpair failed and we were unable to recover it. 
00:36:49.618 [2024-11-17 09:36:54.347475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.618 [2024-11-17 09:36:54.347525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.618 qpair failed and we were unable to recover it. 00:36:49.618 [2024-11-17 09:36:54.347665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.618 [2024-11-17 09:36:54.347701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.618 qpair failed and we were unable to recover it. 00:36:49.618 [2024-11-17 09:36:54.347855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.618 [2024-11-17 09:36:54.347904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.618 qpair failed and we were unable to recover it. 00:36:49.618 [2024-11-17 09:36:54.348031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.618 [2024-11-17 09:36:54.348066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.618 qpair failed and we were unable to recover it. 00:36:49.618 [2024-11-17 09:36:54.348249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.618 [2024-11-17 09:36:54.348283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.618 qpair failed and we were unable to recover it. 00:36:49.618 [2024-11-17 09:36:54.348432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.618 [2024-11-17 09:36:54.348470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.618 qpair failed and we were unable to recover it. 00:36:49.618 [2024-11-17 09:36:54.348611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.618 [2024-11-17 09:36:54.348658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.618 qpair failed and we were unable to recover it. 00:36:49.618 [2024-11-17 09:36:54.348877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.618 [2024-11-17 09:36:54.348942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.618 qpair failed and we were unable to recover it. 00:36:49.618 [2024-11-17 09:36:54.349073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.618 [2024-11-17 09:36:54.349149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.618 qpair failed and we were unable to recover it. 00:36:49.618 [2024-11-17 09:36:54.349333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.618 [2024-11-17 09:36:54.349374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.618 qpair failed and we were unable to recover it. 
00:36:49.618 [2024-11-17 09:36:54.349515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.618 [2024-11-17 09:36:54.349549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.618 qpair failed and we were unable to recover it. 00:36:49.618 [2024-11-17 09:36:54.349699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.618 [2024-11-17 09:36:54.349734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.618 qpair failed and we were unable to recover it. 00:36:49.618 [2024-11-17 09:36:54.349894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.618 [2024-11-17 09:36:54.349951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.618 qpair failed and we were unable to recover it. 00:36:49.618 [2024-11-17 09:36:54.350171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.618 [2024-11-17 09:36:54.350232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.618 qpair failed and we were unable to recover it. 00:36:49.618 [2024-11-17 09:36:54.350449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.618 [2024-11-17 09:36:54.350485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.618 qpair failed and we were unable to recover it. 00:36:49.618 [2024-11-17 09:36:54.350707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.618 [2024-11-17 09:36:54.350760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.618 qpair failed and we were unable to recover it. 00:36:49.618 [2024-11-17 09:36:54.350940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.618 [2024-11-17 09:36:54.350992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.618 qpair failed and we were unable to recover it. 00:36:49.618 [2024-11-17 09:36:54.351154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.618 [2024-11-17 09:36:54.351188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.618 qpair failed and we were unable to recover it. 00:36:49.618 [2024-11-17 09:36:54.351329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.618 [2024-11-17 09:36:54.351365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.618 qpair failed and we were unable to recover it. 00:36:49.618 [2024-11-17 09:36:54.351490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.619 [2024-11-17 09:36:54.351526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.619 qpair failed and we were unable to recover it. 
00:36:49.619 [2024-11-17 09:36:54.351680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.619 [2024-11-17 09:36:54.351730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.619 qpair failed and we were unable to recover it. 00:36:49.619 [2024-11-17 09:36:54.351875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.619 [2024-11-17 09:36:54.351910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.619 qpair failed and we were unable to recover it. 00:36:49.619 [2024-11-17 09:36:54.352052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.619 [2024-11-17 09:36:54.352088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.619 qpair failed and we were unable to recover it. 00:36:49.619 [2024-11-17 09:36:54.352228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.619 [2024-11-17 09:36:54.352264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.619 qpair failed and we were unable to recover it. 00:36:49.619 [2024-11-17 09:36:54.352408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.619 [2024-11-17 09:36:54.352444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.619 qpair failed and we were unable to recover it. 00:36:49.619 [2024-11-17 09:36:54.352549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.619 [2024-11-17 09:36:54.352584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.619 qpair failed and we were unable to recover it. 00:36:49.619 [2024-11-17 09:36:54.352750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.619 [2024-11-17 09:36:54.352785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.619 qpair failed and we were unable to recover it. 00:36:49.619 [2024-11-17 09:36:54.352943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.619 [2024-11-17 09:36:54.352977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.619 qpair failed and we were unable to recover it. 00:36:49.619 [2024-11-17 09:36:54.353087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.619 [2024-11-17 09:36:54.353121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.619 qpair failed and we were unable to recover it. 00:36:49.619 [2024-11-17 09:36:54.353291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.619 [2024-11-17 09:36:54.353327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.619 qpair failed and we were unable to recover it. 
00:36:49.619 [2024-11-17 09:36:54.353487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.619 [2024-11-17 09:36:54.353537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.619 qpair failed and we were unable to recover it. 00:36:49.619 [2024-11-17 09:36:54.353682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.619 [2024-11-17 09:36:54.353731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.619 qpair failed and we were unable to recover it. 00:36:49.619 [2024-11-17 09:36:54.353848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.619 [2024-11-17 09:36:54.353884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.619 qpair failed and we were unable to recover it. 00:36:49.619 [2024-11-17 09:36:54.354011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.619 [2024-11-17 09:36:54.354055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.619 qpair failed and we were unable to recover it. 00:36:49.619 [2024-11-17 09:36:54.354202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.619 [2024-11-17 09:36:54.354248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.619 qpair failed and we were unable to recover it. 00:36:49.619 [2024-11-17 09:36:54.354410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.619 [2024-11-17 09:36:54.354446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.619 qpair failed and we were unable to recover it. 00:36:49.619 [2024-11-17 09:36:54.354601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.619 [2024-11-17 09:36:54.354669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.619 qpair failed and we were unable to recover it. 00:36:49.619 [2024-11-17 09:36:54.354853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.619 [2024-11-17 09:36:54.354894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.619 qpair failed and we were unable to recover it. 00:36:49.619 [2024-11-17 09:36:54.355051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.619 [2024-11-17 09:36:54.355103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.619 qpair failed and we were unable to recover it. 00:36:49.619 [2024-11-17 09:36:54.355271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.619 [2024-11-17 09:36:54.355311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.619 qpair failed and we were unable to recover it. 
00:36:49.619 [2024-11-17 09:36:54.355499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.619 [2024-11-17 09:36:54.355549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.619 qpair failed and we were unable to recover it. 00:36:49.619 [2024-11-17 09:36:54.355714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.619 [2024-11-17 09:36:54.355768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.619 qpair failed and we were unable to recover it. 00:36:49.619 [2024-11-17 09:36:54.355955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.619 [2024-11-17 09:36:54.355994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.619 qpair failed and we were unable to recover it. 00:36:49.619 [2024-11-17 09:36:54.356239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.619 [2024-11-17 09:36:54.356273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.619 qpair failed and we were unable to recover it. 00:36:49.619 [2024-11-17 09:36:54.356381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.619 [2024-11-17 09:36:54.356417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.619 qpair failed and we were unable to recover it. 00:36:49.619 [2024-11-17 09:36:54.356575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.619 [2024-11-17 09:36:54.356608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.619 qpair failed and we were unable to recover it. 00:36:49.619 [2024-11-17 09:36:54.356713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.619 [2024-11-17 09:36:54.356764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.619 qpair failed and we were unable to recover it. 00:36:49.619 [2024-11-17 09:36:54.356900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.619 [2024-11-17 09:36:54.356938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.619 qpair failed and we were unable to recover it. 00:36:49.619 [2024-11-17 09:36:54.357161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.619 [2024-11-17 09:36:54.357225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.619 qpair failed and we were unable to recover it. 00:36:49.619 [2024-11-17 09:36:54.357375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.619 [2024-11-17 09:36:54.357426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.619 qpair failed and we were unable to recover it. 
00:36:49.619 [2024-11-17 09:36:54.357556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.619 [2024-11-17 09:36:54.357589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.619 qpair failed and we were unable to recover it. 00:36:49.619 [2024-11-17 09:36:54.357723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.619 [2024-11-17 09:36:54.357757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.619 qpair failed and we were unable to recover it. 00:36:49.619 [2024-11-17 09:36:54.357892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.619 [2024-11-17 09:36:54.357926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.619 qpair failed and we were unable to recover it. 00:36:49.619 [2024-11-17 09:36:54.358096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.620 [2024-11-17 09:36:54.358134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.620 qpair failed and we were unable to recover it. 00:36:49.620 [2024-11-17 09:36:54.358298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.620 [2024-11-17 09:36:54.358333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.620 qpair failed and we were unable to recover it. 00:36:49.620 [2024-11-17 09:36:54.358456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.620 [2024-11-17 09:36:54.358491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.620 qpair failed and we were unable to recover it. 00:36:49.620 [2024-11-17 09:36:54.358655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.620 [2024-11-17 09:36:54.358688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.620 qpair failed and we were unable to recover it. 00:36:49.620 [2024-11-17 09:36:54.358822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.620 [2024-11-17 09:36:54.358873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.620 qpair failed and we were unable to recover it. 00:36:49.620 [2024-11-17 09:36:54.359006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.620 [2024-11-17 09:36:54.359056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.620 qpair failed and we were unable to recover it. 00:36:49.620 [2024-11-17 09:36:54.359204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.620 [2024-11-17 09:36:54.359255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.620 qpair failed and we were unable to recover it. 
00:36:49.620 [2024-11-17 09:36:54.359393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.620 [2024-11-17 09:36:54.359427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.620 qpair failed and we were unable to recover it. 00:36:49.620 [2024-11-17 09:36:54.359556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.620 [2024-11-17 09:36:54.359605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.620 qpair failed and we were unable to recover it. 00:36:49.620 [2024-11-17 09:36:54.359800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.620 [2024-11-17 09:36:54.359841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.620 qpair failed and we were unable to recover it. 00:36:49.620 [2024-11-17 09:36:54.360008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.620 [2024-11-17 09:36:54.360047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.620 qpair failed and we were unable to recover it. 00:36:49.620 [2024-11-17 09:36:54.360204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.620 [2024-11-17 09:36:54.360242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.620 qpair failed and we were unable to recover it. 00:36:49.620 [2024-11-17 09:36:54.360418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.620 [2024-11-17 09:36:54.360467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.620 qpair failed and we were unable to recover it. 00:36:49.620 [2024-11-17 09:36:54.360611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.620 [2024-11-17 09:36:54.360661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.620 qpair failed and we were unable to recover it. 00:36:49.620 [2024-11-17 09:36:54.360814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.620 [2024-11-17 09:36:54.360851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.620 qpair failed and we were unable to recover it. 00:36:49.620 [2024-11-17 09:36:54.361053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.620 [2024-11-17 09:36:54.361089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.620 qpair failed and we were unable to recover it. 00:36:49.620 [2024-11-17 09:36:54.361222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.620 [2024-11-17 09:36:54.361257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.620 qpair failed and we were unable to recover it. 
00:36:49.620 [2024-11-17 09:36:54.361409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.620 [2024-11-17 09:36:54.361444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.620 qpair failed and we were unable to recover it. 00:36:49.620 [2024-11-17 09:36:54.361558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.620 [2024-11-17 09:36:54.361594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.620 qpair failed and we were unable to recover it. 00:36:49.620 [2024-11-17 09:36:54.361705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.620 [2024-11-17 09:36:54.361739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.620 qpair failed and we were unable to recover it. 00:36:49.620 [2024-11-17 09:36:54.361867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.620 [2024-11-17 09:36:54.361906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.620 qpair failed and we were unable to recover it. 00:36:49.620 [2024-11-17 09:36:54.362008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.620 [2024-11-17 09:36:54.362042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.620 qpair failed and we were unable to recover it. 00:36:49.620 [2024-11-17 09:36:54.362177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.620 [2024-11-17 09:36:54.362210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.620 qpair failed and we were unable to recover it. 00:36:49.620 [2024-11-17 09:36:54.362358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.620 [2024-11-17 09:36:54.362415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.620 qpair failed and we were unable to recover it. 00:36:49.620 [2024-11-17 09:36:54.362533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.620 [2024-11-17 09:36:54.362568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.620 qpair failed and we were unable to recover it. 00:36:49.620 [2024-11-17 09:36:54.362712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.620 [2024-11-17 09:36:54.362752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.620 qpair failed and we were unable to recover it. 00:36:49.620 [2024-11-17 09:36:54.362937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.620 [2024-11-17 09:36:54.363009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.620 qpair failed and we were unable to recover it. 
00:36:49.620 [2024-11-17 09:36:54.363266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.620 [2024-11-17 09:36:54.363301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.620 qpair failed and we were unable to recover it. 00:36:49.620 [2024-11-17 09:36:54.363439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.620 [2024-11-17 09:36:54.363476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.620 qpair failed and we were unable to recover it. 00:36:49.620 [2024-11-17 09:36:54.363580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.620 [2024-11-17 09:36:54.363615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.620 qpair failed and we were unable to recover it. 00:36:49.620 [2024-11-17 09:36:54.363759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.620 [2024-11-17 09:36:54.363793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.620 qpair failed and we were unable to recover it. 00:36:49.620 [2024-11-17 09:36:54.363992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.620 [2024-11-17 09:36:54.364062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.620 qpair failed and we were unable to recover it. 00:36:49.620 [2024-11-17 09:36:54.364223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.620 [2024-11-17 09:36:54.364275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.620 qpair failed and we were unable to recover it. 00:36:49.620 [2024-11-17 09:36:54.364412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.620 [2024-11-17 09:36:54.364447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.620 qpair failed and we were unable to recover it. 00:36:49.620 [2024-11-17 09:36:54.364587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.620 [2024-11-17 09:36:54.364622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.620 qpair failed and we were unable to recover it. 00:36:49.620 [2024-11-17 09:36:54.364759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.620 [2024-11-17 09:36:54.364793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.620 qpair failed and we were unable to recover it. 00:36:49.620 [2024-11-17 09:36:54.364971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.620 [2024-11-17 09:36:54.365026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.620 qpair failed and we were unable to recover it. 
00:36:49.620 [2024-11-17 09:36:54.365147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.620 [2024-11-17 09:36:54.365186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.620 qpair failed and we were unable to recover it. 00:36:49.621 [2024-11-17 09:36:54.365341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.621 [2024-11-17 09:36:54.365382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.621 qpair failed and we were unable to recover it. 00:36:49.621 [2024-11-17 09:36:54.365525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.621 [2024-11-17 09:36:54.365560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.621 qpair failed and we were unable to recover it. 00:36:49.621 [2024-11-17 09:36:54.365673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.621 [2024-11-17 09:36:54.365708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.621 qpair failed and we were unable to recover it. 00:36:49.621 [2024-11-17 09:36:54.365847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.621 [2024-11-17 09:36:54.365890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.621 qpair failed and we were unable to recover it. 00:36:49.621 [2024-11-17 09:36:54.366027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.621 [2024-11-17 09:36:54.366062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.621 qpair failed and we were unable to recover it. 00:36:49.621 [2024-11-17 09:36:54.366195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.621 [2024-11-17 09:36:54.366229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.621 qpair failed and we were unable to recover it. 00:36:49.621 [2024-11-17 09:36:54.366383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.621 [2024-11-17 09:36:54.366432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.621 qpair failed and we were unable to recover it. 00:36:49.621 [2024-11-17 09:36:54.366563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.621 [2024-11-17 09:36:54.366601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.621 qpair failed and we were unable to recover it. 00:36:49.621 [2024-11-17 09:36:54.366721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.621 [2024-11-17 09:36:54.366758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.621 qpair failed and we were unable to recover it. 
00:36:49.621 [2024-11-17 09:36:54.366952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.621 [2024-11-17 09:36:54.366991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.621 qpair failed and we were unable to recover it. 00:36:49.621 [2024-11-17 09:36:54.367136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.621 [2024-11-17 09:36:54.367173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.621 qpair failed and we were unable to recover it. 00:36:49.621 [2024-11-17 09:36:54.367320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.621 [2024-11-17 09:36:54.367376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.621 qpair failed and we were unable to recover it. 00:36:49.621 [2024-11-17 09:36:54.367545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.621 [2024-11-17 09:36:54.367579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.621 qpair failed and we were unable to recover it. 00:36:49.621 [2024-11-17 09:36:54.367695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.621 [2024-11-17 09:36:54.367733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.621 qpair failed and we were unable to recover it. 00:36:49.621 [2024-11-17 09:36:54.367877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.621 [2024-11-17 09:36:54.367915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.621 qpair failed and we were unable to recover it. 00:36:49.621 [2024-11-17 09:36:54.368177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.621 [2024-11-17 09:36:54.368213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.621 qpair failed and we were unable to recover it. 00:36:49.621 [2024-11-17 09:36:54.368360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.621 [2024-11-17 09:36:54.368405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.621 qpair failed and we were unable to recover it. 00:36:49.621 [2024-11-17 09:36:54.368564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.621 [2024-11-17 09:36:54.368613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.621 qpair failed and we were unable to recover it. 00:36:49.621 [2024-11-17 09:36:54.368757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.621 [2024-11-17 09:36:54.368797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.621 qpair failed and we were unable to recover it. 
00:36:49.621 [2024-11-17 09:36:54.368999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.621 [2024-11-17 09:36:54.369037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.621 qpair failed and we were unable to recover it. 00:36:49.621 [2024-11-17 09:36:54.369155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.621 [2024-11-17 09:36:54.369192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.621 qpair failed and we were unable to recover it. 00:36:49.621 [2024-11-17 09:36:54.369360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.621 [2024-11-17 09:36:54.369433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.621 qpair failed and we were unable to recover it. 00:36:49.621 [2024-11-17 09:36:54.369576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.621 [2024-11-17 09:36:54.369617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.621 qpair failed and we were unable to recover it. 00:36:49.621 [2024-11-17 09:36:54.369759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.621 [2024-11-17 09:36:54.369793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.621 qpair failed and we were unable to recover it. 00:36:49.621 [2024-11-17 09:36:54.370003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.621 [2024-11-17 09:36:54.370064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.621 qpair failed and we were unable to recover it. 00:36:49.621 [2024-11-17 09:36:54.370243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.621 [2024-11-17 09:36:54.370281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.621 qpair failed and we were unable to recover it. 00:36:49.621 [2024-11-17 09:36:54.370439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.621 [2024-11-17 09:36:54.370481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.621 qpair failed and we were unable to recover it. 00:36:49.621 [2024-11-17 09:36:54.370597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.621 [2024-11-17 09:36:54.370632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.621 qpair failed and we were unable to recover it. 00:36:49.621 [2024-11-17 09:36:54.370750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.621 [2024-11-17 09:36:54.370788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.621 qpair failed and we were unable to recover it. 
00:36:49.621 [2024-11-17 09:36:54.370959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.621 [2024-11-17 09:36:54.370996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.621 qpair failed and we were unable to recover it. 00:36:49.621 [2024-11-17 09:36:54.371192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.621 [2024-11-17 09:36:54.371245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.621 qpair failed and we were unable to recover it. 00:36:49.621 [2024-11-17 09:36:54.371398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.621 [2024-11-17 09:36:54.371435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.621 qpair failed and we were unable to recover it. 00:36:49.621 [2024-11-17 09:36:54.371570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.621 [2024-11-17 09:36:54.371605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.621 qpair failed and we were unable to recover it. 00:36:49.621 [2024-11-17 09:36:54.371817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.621 [2024-11-17 09:36:54.371873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.621 qpair failed and we were unable to recover it. 00:36:49.621 [2024-11-17 09:36:54.372006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.621 [2024-11-17 09:36:54.372044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.621 qpair failed and we were unable to recover it. 00:36:49.621 [2024-11-17 09:36:54.372228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.621 [2024-11-17 09:36:54.372266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.621 qpair failed and we were unable to recover it. 00:36:49.621 [2024-11-17 09:36:54.372386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.621 [2024-11-17 09:36:54.372437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.621 qpair failed and we were unable to recover it. 00:36:49.621 [2024-11-17 09:36:54.372573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.622 [2024-11-17 09:36:54.372607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.622 qpair failed and we were unable to recover it. 00:36:49.622 [2024-11-17 09:36:54.372786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.622 [2024-11-17 09:36:54.372824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.622 qpair failed and we were unable to recover it. 
00:36:49.622 [2024-11-17 09:36:54.372993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.622 [2024-11-17 09:36:54.373030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.622 qpair failed and we were unable to recover it. 00:36:49.622 [2024-11-17 09:36:54.373197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.622 [2024-11-17 09:36:54.373265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.622 qpair failed and we were unable to recover it. 00:36:49.622 [2024-11-17 09:36:54.373462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.622 [2024-11-17 09:36:54.373501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.622 qpair failed and we were unable to recover it. 00:36:49.622 [2024-11-17 09:36:54.373659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.622 [2024-11-17 09:36:54.373725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.622 qpair failed and we were unable to recover it. 00:36:49.622 [2024-11-17 09:36:54.373942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.622 [2024-11-17 09:36:54.373995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.622 qpair failed and we were unable to recover it. 00:36:49.622 [2024-11-17 09:36:54.374144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.622 [2024-11-17 09:36:54.374197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.622 qpair failed and we were unable to recover it. 00:36:49.622 [2024-11-17 09:36:54.374358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.622 [2024-11-17 09:36:54.374419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.622 qpair failed and we were unable to recover it. 00:36:49.622 [2024-11-17 09:36:54.374538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.622 [2024-11-17 09:36:54.374591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.622 qpair failed and we were unable to recover it. 00:36:49.622 [2024-11-17 09:36:54.374751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.622 [2024-11-17 09:36:54.374785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.622 qpair failed and we were unable to recover it. 00:36:49.622 [2024-11-17 09:36:54.374917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.622 [2024-11-17 09:36:54.374952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.622 qpair failed and we were unable to recover it. 
00:36:49.622 [2024-11-17 09:36:54.375070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.622 [2024-11-17 09:36:54.375104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.622 qpair failed and we were unable to recover it. 00:36:49.622 [2024-11-17 09:36:54.375207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.622 [2024-11-17 09:36:54.375241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.622 qpair failed and we were unable to recover it. 00:36:49.622 [2024-11-17 09:36:54.375361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.622 [2024-11-17 09:36:54.375417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.622 qpair failed and we were unable to recover it. 00:36:49.622 [2024-11-17 09:36:54.375553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.622 [2024-11-17 09:36:54.375594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.622 qpair failed and we were unable to recover it. 00:36:49.622 [2024-11-17 09:36:54.375756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.622 [2024-11-17 09:36:54.375796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.622 qpair failed and we were unable to recover it. 00:36:49.622 [2024-11-17 09:36:54.375911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.622 [2024-11-17 09:36:54.375950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.622 qpair failed and we were unable to recover it. 00:36:49.622 [2024-11-17 09:36:54.376155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.622 [2024-11-17 09:36:54.376210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.622 qpair failed and we were unable to recover it. 00:36:49.622 [2024-11-17 09:36:54.376340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.622 [2024-11-17 09:36:54.376396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.622 qpair failed and we were unable to recover it. 00:36:49.622 [2024-11-17 09:36:54.376563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.622 [2024-11-17 09:36:54.376598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.622 qpair failed and we were unable to recover it. 00:36:49.622 [2024-11-17 09:36:54.376708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.622 [2024-11-17 09:36:54.376743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.622 qpair failed and we were unable to recover it. 
00:36:49.622 [2024-11-17 09:36:54.376989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.622 [2024-11-17 09:36:54.377049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.622 qpair failed and we were unable to recover it. 00:36:49.622 [2024-11-17 09:36:54.377222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.622 [2024-11-17 09:36:54.377258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.622 qpair failed and we were unable to recover it. 00:36:49.622 [2024-11-17 09:36:54.377420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.622 [2024-11-17 09:36:54.377455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.622 qpair failed and we were unable to recover it. 00:36:49.622 [2024-11-17 09:36:54.377640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.622 [2024-11-17 09:36:54.377701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.622 qpair failed and we were unable to recover it. 00:36:49.622 [2024-11-17 09:36:54.377883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.622 [2024-11-17 09:36:54.377917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.622 qpair failed and we were unable to recover it. 00:36:49.622 [2024-11-17 09:36:54.378029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.622 [2024-11-17 09:36:54.378064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.622 qpair failed and we were unable to recover it. 00:36:49.622 [2024-11-17 09:36:54.378200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.622 [2024-11-17 09:36:54.378234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.622 qpair failed and we were unable to recover it. 00:36:49.622 [2024-11-17 09:36:54.378407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.622 [2024-11-17 09:36:54.378442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.622 qpair failed and we were unable to recover it. 00:36:49.622 [2024-11-17 09:36:54.378578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.622 [2024-11-17 09:36:54.378613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.622 qpair failed and we were unable to recover it. 00:36:49.622 [2024-11-17 09:36:54.378752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.622 [2024-11-17 09:36:54.378786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.622 qpair failed and we were unable to recover it. 
00:36:49.622 [2024-11-17 09:36:54.378893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.622 [2024-11-17 09:36:54.378927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.622 qpair failed and we were unable to recover it. 00:36:49.622 [2024-11-17 09:36:54.379069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.622 [2024-11-17 09:36:54.379106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.622 qpair failed and we were unable to recover it. 00:36:49.622 [2024-11-17 09:36:54.379258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.622 [2024-11-17 09:36:54.379308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.622 qpair failed and we were unable to recover it. 00:36:49.622 [2024-11-17 09:36:54.379440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.622 [2024-11-17 09:36:54.379490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.622 qpair failed and we were unable to recover it. 00:36:49.622 [2024-11-17 09:36:54.379629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.622 [2024-11-17 09:36:54.379671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.622 qpair failed and we were unable to recover it. 00:36:49.622 [2024-11-17 09:36:54.379902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.623 [2024-11-17 09:36:54.379957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.623 qpair failed and we were unable to recover it. 00:36:49.623 [2024-11-17 09:36:54.380217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.623 [2024-11-17 09:36:54.380286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.623 qpair failed and we were unable to recover it. 00:36:49.623 [2024-11-17 09:36:54.380432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.623 [2024-11-17 09:36:54.380466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.623 qpair failed and we were unable to recover it. 00:36:49.623 [2024-11-17 09:36:54.380587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.623 [2024-11-17 09:36:54.380622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.623 qpair failed and we were unable to recover it. 00:36:49.623 [2024-11-17 09:36:54.380799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.623 [2024-11-17 09:36:54.380868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.623 qpair failed and we were unable to recover it. 
00:36:49.623 [2024-11-17 09:36:54.381090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.623 [2024-11-17 09:36:54.381152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.623 qpair failed and we were unable to recover it. 00:36:49.623 [2024-11-17 09:36:54.381277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.623 [2024-11-17 09:36:54.381328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.623 qpair failed and we were unable to recover it. 00:36:49.623 [2024-11-17 09:36:54.381505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.623 [2024-11-17 09:36:54.381540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.623 qpair failed and we were unable to recover it. 00:36:49.623 [2024-11-17 09:36:54.381699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.623 [2024-11-17 09:36:54.381737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.623 qpair failed and we were unable to recover it. 00:36:49.623 [2024-11-17 09:36:54.381969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.623 [2024-11-17 09:36:54.382009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.623 qpair failed and we were unable to recover it. 00:36:49.623 [2024-11-17 09:36:54.382180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.623 [2024-11-17 09:36:54.382216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.623 qpair failed and we were unable to recover it. 00:36:49.623 [2024-11-17 09:36:54.382393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.623 [2024-11-17 09:36:54.382442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.623 qpair failed and we were unable to recover it. 00:36:49.623 [2024-11-17 09:36:54.382599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.623 [2024-11-17 09:36:54.382654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.623 qpair failed and we were unable to recover it. 00:36:49.623 [2024-11-17 09:36:54.382816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.623 [2024-11-17 09:36:54.382874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.623 qpair failed and we were unable to recover it. 00:36:49.623 [2024-11-17 09:36:54.383021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.623 [2024-11-17 09:36:54.383058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.623 qpair failed and we were unable to recover it. 
00:36:49.623 [2024-11-17 09:36:54.383227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.623 [2024-11-17 09:36:54.383266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.623 qpair failed and we were unable to recover it. 00:36:49.623 [2024-11-17 09:36:54.383406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.623 [2024-11-17 09:36:54.383473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.623 qpair failed and we were unable to recover it. 00:36:49.623 [2024-11-17 09:36:54.383620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.623 [2024-11-17 09:36:54.383675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.623 qpair failed and we were unable to recover it. 00:36:49.623 [2024-11-17 09:36:54.383921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.623 [2024-11-17 09:36:54.383974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.623 qpair failed and we were unable to recover it. 00:36:49.623 [2024-11-17 09:36:54.384123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.623 [2024-11-17 09:36:54.384159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.623 qpair failed and we were unable to recover it. 00:36:49.623 [2024-11-17 09:36:54.384295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.623 [2024-11-17 09:36:54.384331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.623 qpair failed and we were unable to recover it. 00:36:49.623 [2024-11-17 09:36:54.384488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.623 [2024-11-17 09:36:54.384523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.623 qpair failed and we were unable to recover it. 00:36:49.623 [2024-11-17 09:36:54.384659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.623 [2024-11-17 09:36:54.384711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.623 qpair failed and we were unable to recover it. 00:36:49.623 [2024-11-17 09:36:54.384864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.623 [2024-11-17 09:36:54.384902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.623 qpair failed and we were unable to recover it. 00:36:49.623 [2024-11-17 09:36:54.385030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.623 [2024-11-17 09:36:54.385082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.623 qpair failed and we were unable to recover it. 
00:36:49.623 [2024-11-17 09:36:54.385205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.623 [2024-11-17 09:36:54.385245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.623 qpair failed and we were unable to recover it. 00:36:49.623 [2024-11-17 09:36:54.385384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.623 [2024-11-17 09:36:54.385452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.623 qpair failed and we were unable to recover it. 00:36:49.623 [2024-11-17 09:36:54.385572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.623 [2024-11-17 09:36:54.385609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.623 qpair failed and we were unable to recover it. 00:36:49.623 [2024-11-17 09:36:54.385791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.623 [2024-11-17 09:36:54.385836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.623 qpair failed and we were unable to recover it. 00:36:49.623 [2024-11-17 09:36:54.385997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.623 [2024-11-17 09:36:54.386031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.623 qpair failed and we were unable to recover it. 00:36:49.623 [2024-11-17 09:36:54.386279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.623 [2024-11-17 09:36:54.386317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.623 qpair failed and we were unable to recover it. 00:36:49.623 [2024-11-17 09:36:54.386458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.623 [2024-11-17 09:36:54.386494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.623 qpair failed and we were unable to recover it. 00:36:49.623 [2024-11-17 09:36:54.386635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.623 [2024-11-17 09:36:54.386671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.623 qpair failed and we were unable to recover it. 00:36:49.624 [2024-11-17 09:36:54.386853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.624 [2024-11-17 09:36:54.386912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.624 qpair failed and we were unable to recover it. 00:36:49.624 [2024-11-17 09:36:54.387072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.624 [2024-11-17 09:36:54.387129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.624 qpair failed and we were unable to recover it. 
00:36:49.624 [2024-11-17 09:36:54.387275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.624 [2024-11-17 09:36:54.387313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.624 qpair failed and we were unable to recover it. 00:36:49.624 [2024-11-17 09:36:54.387503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.624 [2024-11-17 09:36:54.387553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.624 qpair failed and we were unable to recover it. 00:36:49.624 [2024-11-17 09:36:54.387716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.624 [2024-11-17 09:36:54.387765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.624 qpair failed and we were unable to recover it. 00:36:49.624 [2024-11-17 09:36:54.387941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.624 [2024-11-17 09:36:54.388001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.624 qpair failed and we were unable to recover it. 00:36:49.624 [2024-11-17 09:36:54.388204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.624 [2024-11-17 09:36:54.388238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.624 qpair failed and we were unable to recover it. 00:36:49.624 [2024-11-17 09:36:54.388389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.624 [2024-11-17 09:36:54.388424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.624 qpair failed and we were unable to recover it. 00:36:49.624 [2024-11-17 09:36:54.388536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.624 [2024-11-17 09:36:54.388570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.624 qpair failed and we were unable to recover it. 00:36:49.624 [2024-11-17 09:36:54.388689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.624 [2024-11-17 09:36:54.388727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.624 qpair failed and we were unable to recover it. 00:36:49.624 [2024-11-17 09:36:54.388839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.624 [2024-11-17 09:36:54.388892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.624 qpair failed and we were unable to recover it. 00:36:49.624 [2024-11-17 09:36:54.389066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.624 [2024-11-17 09:36:54.389123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.624 qpair failed and we were unable to recover it. 
00:36:49.624 [2024-11-17 09:36:54.389257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.624 [2024-11-17 09:36:54.389312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.624 qpair failed and we were unable to recover it. 00:36:49.624 [2024-11-17 09:36:54.389511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.624 [2024-11-17 09:36:54.389560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.624 qpair failed and we were unable to recover it. 00:36:49.624 [2024-11-17 09:36:54.389718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.624 [2024-11-17 09:36:54.389761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.624 qpair failed and we were unable to recover it. 00:36:49.624 [2024-11-17 09:36:54.389874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.624 [2024-11-17 09:36:54.389913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.624 qpair failed and we were unable to recover it. 00:36:49.624 [2024-11-17 09:36:54.390136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.624 [2024-11-17 09:36:54.390199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.624 qpair failed and we were unable to recover it. 00:36:49.624 [2024-11-17 09:36:54.390354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.624 [2024-11-17 09:36:54.390405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.624 qpair failed and we were unable to recover it. 00:36:49.624 [2024-11-17 09:36:54.390540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.624 [2024-11-17 09:36:54.390574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.624 qpair failed and we were unable to recover it. 00:36:49.624 [2024-11-17 09:36:54.390762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.624 [2024-11-17 09:36:54.390829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.624 qpair failed and we were unable to recover it. 00:36:49.624 [2024-11-17 09:36:54.390996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.624 [2024-11-17 09:36:54.391034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.624 qpair failed and we were unable to recover it. 00:36:49.624 [2024-11-17 09:36:54.391186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.624 [2024-11-17 09:36:54.391226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.624 qpair failed and we were unable to recover it. 
00:36:49.624 [2024-11-17 09:36:54.391431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.624 [2024-11-17 09:36:54.391480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.624 qpair failed and we were unable to recover it. 00:36:49.624 [2024-11-17 09:36:54.391666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.624 [2024-11-17 09:36:54.391716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.624 qpair failed and we were unable to recover it. 00:36:49.624 [2024-11-17 09:36:54.391956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.624 [2024-11-17 09:36:54.392041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.624 qpair failed and we were unable to recover it. 00:36:49.624 [2024-11-17 09:36:54.392166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.624 [2024-11-17 09:36:54.392204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.624 qpair failed and we were unable to recover it. 00:36:49.624 [2024-11-17 09:36:54.392390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.624 [2024-11-17 09:36:54.392425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.624 qpair failed and we were unable to recover it. 00:36:49.624 [2024-11-17 09:36:54.392559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.624 [2024-11-17 09:36:54.392592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.624 qpair failed and we were unable to recover it. 00:36:49.624 [2024-11-17 09:36:54.392735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.624 [2024-11-17 09:36:54.392769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.624 qpair failed and we were unable to recover it. 00:36:49.624 [2024-11-17 09:36:54.392933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.624 [2024-11-17 09:36:54.392966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.624 qpair failed and we were unable to recover it. 00:36:49.624 [2024-11-17 09:36:54.393081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.624 [2024-11-17 09:36:54.393119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.624 qpair failed and we were unable to recover it. 00:36:49.624 [2024-11-17 09:36:54.393276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.624 [2024-11-17 09:36:54.393324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.624 qpair failed and we were unable to recover it. 
00:36:49.624 [2024-11-17 09:36:54.393460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.624 [2024-11-17 09:36:54.393510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.624 qpair failed and we were unable to recover it. 00:36:49.624 [2024-11-17 09:36:54.393627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.624 [2024-11-17 09:36:54.393663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.624 qpair failed and we were unable to recover it. 00:36:49.624 [2024-11-17 09:36:54.393820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.624 [2024-11-17 09:36:54.393858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.624 qpair failed and we were unable to recover it. 00:36:49.624 [2024-11-17 09:36:54.393973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.624 [2024-11-17 09:36:54.394016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.624 qpair failed and we were unable to recover it. 00:36:49.624 [2024-11-17 09:36:54.394170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.624 [2024-11-17 09:36:54.394207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.624 qpair failed and we were unable to recover it. 00:36:49.624 [2024-11-17 09:36:54.394362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.625 [2024-11-17 09:36:54.394405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.625 qpair failed and we were unable to recover it. 00:36:49.625 [2024-11-17 09:36:54.394526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.625 [2024-11-17 09:36:54.394563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.625 qpair failed and we were unable to recover it. 00:36:49.625 [2024-11-17 09:36:54.394720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.625 [2024-11-17 09:36:54.394755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.625 qpair failed and we were unable to recover it. 00:36:49.625 [2024-11-17 09:36:54.394935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.625 [2024-11-17 09:36:54.394973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.625 qpair failed and we were unable to recover it. 00:36:49.625 [2024-11-17 09:36:54.395096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.625 [2024-11-17 09:36:54.395134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.625 qpair failed and we were unable to recover it. 
00:36:49.625 [2024-11-17 09:36:54.395275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.625 [2024-11-17 09:36:54.395327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.625 qpair failed and we were unable to recover it. 00:36:49.625 [2024-11-17 09:36:54.395456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.625 [2024-11-17 09:36:54.395491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.625 qpair failed and we were unable to recover it. 00:36:49.625 [2024-11-17 09:36:54.395623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.625 [2024-11-17 09:36:54.395676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.625 qpair failed and we were unable to recover it. 00:36:49.625 [2024-11-17 09:36:54.395872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.625 [2024-11-17 09:36:54.395926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.625 qpair failed and we were unable to recover it. 00:36:49.625 [2024-11-17 09:36:54.396130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.625 [2024-11-17 09:36:54.396171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.625 qpair failed and we were unable to recover it. 00:36:49.625 [2024-11-17 09:36:54.396311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.625 [2024-11-17 09:36:54.396350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.625 qpair failed and we were unable to recover it. 00:36:49.625 [2024-11-17 09:36:54.396527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.625 [2024-11-17 09:36:54.396562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.625 qpair failed and we were unable to recover it. 00:36:49.625 [2024-11-17 09:36:54.396691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.625 [2024-11-17 09:36:54.396741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.625 qpair failed and we were unable to recover it. 00:36:49.625 [2024-11-17 09:36:54.396901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.625 [2024-11-17 09:36:54.396962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.625 qpair failed and we were unable to recover it. 00:36:49.625 [2024-11-17 09:36:54.397101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.625 [2024-11-17 09:36:54.397136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.625 qpair failed and we were unable to recover it. 
00:36:49.625 [2024-11-17 09:36:54.397304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.625 [2024-11-17 09:36:54.397338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.625 qpair failed and we were unable to recover it. 00:36:49.625 [2024-11-17 09:36:54.397472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.625 [2024-11-17 09:36:54.397511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.625 qpair failed and we were unable to recover it. 00:36:49.625 [2024-11-17 09:36:54.397625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.625 [2024-11-17 09:36:54.397661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.625 qpair failed and we were unable to recover it. 00:36:49.625 [2024-11-17 09:36:54.397791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.625 [2024-11-17 09:36:54.397824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.625 qpair failed and we were unable to recover it. 00:36:49.625 [2024-11-17 09:36:54.398005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.625 [2024-11-17 09:36:54.398062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.625 qpair failed and we were unable to recover it. 00:36:49.625 [2024-11-17 09:36:54.398268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.625 [2024-11-17 09:36:54.398305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.625 qpair failed and we were unable to recover it. 00:36:49.625 [2024-11-17 09:36:54.398470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.625 [2024-11-17 09:36:54.398504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.625 qpair failed and we were unable to recover it. 00:36:49.625 [2024-11-17 09:36:54.398631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.625 [2024-11-17 09:36:54.398698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.625 qpair failed and we were unable to recover it. 00:36:49.625 [2024-11-17 09:36:54.398845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.625 [2024-11-17 09:36:54.398887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.625 qpair failed and we were unable to recover it. 00:36:49.625 [2024-11-17 09:36:54.399151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.625 [2024-11-17 09:36:54.399191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.625 qpair failed and we were unable to recover it. 
00:36:49.625 [2024-11-17 09:36:54.399321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.625 [2024-11-17 09:36:54.399359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.625 qpair failed and we were unable to recover it. 00:36:49.625 [2024-11-17 09:36:54.399525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.625 [2024-11-17 09:36:54.399560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.625 qpair failed and we were unable to recover it. 00:36:49.625 [2024-11-17 09:36:54.399711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.625 [2024-11-17 09:36:54.399749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.625 qpair failed and we were unable to recover it. 00:36:49.625 [2024-11-17 09:36:54.399888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.625 [2024-11-17 09:36:54.399926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.625 qpair failed and we were unable to recover it. 00:36:49.625 [2024-11-17 09:36:54.400097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.625 [2024-11-17 09:36:54.400135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.625 qpair failed and we were unable to recover it. 00:36:49.625 [2024-11-17 09:36:54.400277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.625 [2024-11-17 09:36:54.400315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.625 qpair failed and we were unable to recover it. 00:36:49.625 [2024-11-17 09:36:54.400516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.625 [2024-11-17 09:36:54.400553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.625 qpair failed and we were unable to recover it. 00:36:49.625 [2024-11-17 09:36:54.400696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.625 [2024-11-17 09:36:54.400730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.625 qpair failed and we were unable to recover it. 00:36:49.625 [2024-11-17 09:36:54.400887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.625 [2024-11-17 09:36:54.400925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.625 qpair failed and we were unable to recover it. 00:36:49.625 [2024-11-17 09:36:54.401117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.625 [2024-11-17 09:36:54.401151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.625 qpair failed and we were unable to recover it. 
00:36:49.625 [2024-11-17 09:36:54.401340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.625 [2024-11-17 09:36:54.401388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.625 qpair failed and we were unable to recover it. 00:36:49.625 [2024-11-17 09:36:54.401509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.625 [2024-11-17 09:36:54.401543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.625 qpair failed and we were unable to recover it. 00:36:49.625 [2024-11-17 09:36:54.401716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.625 [2024-11-17 09:36:54.401750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.626 qpair failed and we were unable to recover it. 00:36:49.626 [2024-11-17 09:36:54.401902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.626 [2024-11-17 09:36:54.401962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.626 qpair failed and we were unable to recover it. 00:36:49.626 [2024-11-17 09:36:54.402146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.626 [2024-11-17 09:36:54.402184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.626 qpair failed and we were unable to recover it. 00:36:49.626 [2024-11-17 09:36:54.402326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.626 [2024-11-17 09:36:54.402379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.626 qpair failed and we were unable to recover it. 00:36:49.626 [2024-11-17 09:36:54.402559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.626 [2024-11-17 09:36:54.402597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.626 qpair failed and we were unable to recover it. 00:36:49.626 [2024-11-17 09:36:54.402708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.626 [2024-11-17 09:36:54.402746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.626 qpair failed and we were unable to recover it. 00:36:49.626 [2024-11-17 09:36:54.402895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.626 [2024-11-17 09:36:54.402933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.626 qpair failed and we were unable to recover it. 00:36:49.626 [2024-11-17 09:36:54.403128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.626 [2024-11-17 09:36:54.403195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.626 qpair failed and we were unable to recover it. 
00:36:49.626 [2024-11-17 09:36:54.403353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.626 [2024-11-17 09:36:54.403418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.626 qpair failed and we were unable to recover it. 00:36:49.626 [2024-11-17 09:36:54.403583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.626 [2024-11-17 09:36:54.403622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.626 qpair failed and we were unable to recover it. 00:36:49.626 [2024-11-17 09:36:54.403828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.626 [2024-11-17 09:36:54.403883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.626 qpair failed and we were unable to recover it. 00:36:49.626 [2024-11-17 09:36:54.404041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.626 [2024-11-17 09:36:54.404100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.626 qpair failed and we were unable to recover it. 00:36:49.626 [2024-11-17 09:36:54.404220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.626 [2024-11-17 09:36:54.404254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.626 qpair failed and we were unable to recover it. 00:36:49.626 [2024-11-17 09:36:54.404374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.626 [2024-11-17 09:36:54.404424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.626 qpair failed and we were unable to recover it. 00:36:49.626 [2024-11-17 09:36:54.404589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.626 [2024-11-17 09:36:54.404638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.626 qpair failed and we were unable to recover it. 00:36:49.626 [2024-11-17 09:36:54.404837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.626 [2024-11-17 09:36:54.404892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.626 qpair failed and we were unable to recover it. 00:36:49.626 [2024-11-17 09:36:54.405066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.626 [2024-11-17 09:36:54.405129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.626 qpair failed and we were unable to recover it. 00:36:49.626 [2024-11-17 09:36:54.405274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.626 [2024-11-17 09:36:54.405313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.626 qpair failed and we were unable to recover it. 
00:36:49.626 [2024-11-17 09:36:54.405490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.626 [2024-11-17 09:36:54.405525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.626 qpair failed and we were unable to recover it. 00:36:49.626 [2024-11-17 09:36:54.405657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.626 [2024-11-17 09:36:54.405695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.626 qpair failed and we were unable to recover it. 00:36:49.626 [2024-11-17 09:36:54.405841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.626 [2024-11-17 09:36:54.405879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.626 qpair failed and we were unable to recover it. 00:36:49.626 [2024-11-17 09:36:54.406028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.626 [2024-11-17 09:36:54.406065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.626 qpair failed and we were unable to recover it. 00:36:49.626 [2024-11-17 09:36:54.406230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.626 [2024-11-17 09:36:54.406264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.626 qpair failed and we were unable to recover it. 00:36:49.626 [2024-11-17 09:36:54.406442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.626 [2024-11-17 09:36:54.406490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.626 qpair failed and we were unable to recover it. 00:36:49.626 [2024-11-17 09:36:54.406649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.626 [2024-11-17 09:36:54.406698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.626 qpair failed and we were unable to recover it. 00:36:49.626 [2024-11-17 09:36:54.406929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.626 [2024-11-17 09:36:54.406983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.626 qpair failed and we were unable to recover it. 00:36:49.626 [2024-11-17 09:36:54.407184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.626 [2024-11-17 09:36:54.407223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.626 qpair failed and we were unable to recover it. 00:36:49.626 [2024-11-17 09:36:54.407384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.626 [2024-11-17 09:36:54.407419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.626 qpair failed and we were unable to recover it. 
00:36:49.626 [2024-11-17 09:36:54.407543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.626 [2024-11-17 09:36:54.407580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.626 qpair failed and we were unable to recover it. 00:36:49.626 [2024-11-17 09:36:54.407689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.626 [2024-11-17 09:36:54.407724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.626 qpair failed and we were unable to recover it. 00:36:49.626 [2024-11-17 09:36:54.407832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.626 [2024-11-17 09:36:54.407875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.626 qpair failed and we were unable to recover it. 00:36:49.626 [2024-11-17 09:36:54.408017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.626 [2024-11-17 09:36:54.408052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.626 qpair failed and we were unable to recover it. 00:36:49.626 [2024-11-17 09:36:54.408247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.626 [2024-11-17 09:36:54.408285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.626 qpair failed and we were unable to recover it. 00:36:49.626 [2024-11-17 09:36:54.408445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.626 [2024-11-17 09:36:54.408480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.626 qpair failed and we were unable to recover it. 00:36:49.626 [2024-11-17 09:36:54.408579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.626 [2024-11-17 09:36:54.408613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.626 qpair failed and we were unable to recover it. 00:36:49.626 [2024-11-17 09:36:54.408779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.626 [2024-11-17 09:36:54.408813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.626 qpair failed and we were unable to recover it. 00:36:49.626 [2024-11-17 09:36:54.408947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.626 [2024-11-17 09:36:54.408999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.626 qpair failed and we were unable to recover it. 00:36:49.626 [2024-11-17 09:36:54.409174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.626 [2024-11-17 09:36:54.409211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.626 qpair failed and we were unable to recover it. 
00:36:49.626 [2024-11-17 09:36:54.409346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.627 [2024-11-17 09:36:54.409388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.627 qpair failed and we were unable to recover it. 00:36:49.627 [2024-11-17 09:36:54.409540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.627 [2024-11-17 09:36:54.409590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.627 qpair failed and we were unable to recover it. 00:36:49.627 [2024-11-17 09:36:54.409721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.627 [2024-11-17 09:36:54.409762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.627 qpair failed and we were unable to recover it. 00:36:49.627 [2024-11-17 09:36:54.409932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.627 [2024-11-17 09:36:54.409993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.627 qpair failed and we were unable to recover it. 00:36:49.627 [2024-11-17 09:36:54.410228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.627 [2024-11-17 09:36:54.410289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.627 qpair failed and we were unable to recover it. 00:36:49.627 [2024-11-17 09:36:54.410440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.627 [2024-11-17 09:36:54.410476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.627 qpair failed and we were unable to recover it. 00:36:49.627 [2024-11-17 09:36:54.410632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.627 [2024-11-17 09:36:54.410681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.627 qpair failed and we were unable to recover it. 00:36:49.627 [2024-11-17 09:36:54.410845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.627 [2024-11-17 09:36:54.410904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.627 qpair failed and we were unable to recover it. 00:36:49.627 [2024-11-17 09:36:54.411045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.627 [2024-11-17 09:36:54.411080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.627 qpair failed and we were unable to recover it. 00:36:49.627 [2024-11-17 09:36:54.411226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.627 [2024-11-17 09:36:54.411261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.627 qpair failed and we were unable to recover it. 
00:36:49.627 [2024-11-17 09:36:54.411437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.627 [2024-11-17 09:36:54.411506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.627 qpair failed and we were unable to recover it. 00:36:49.627 [2024-11-17 09:36:54.411693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.627 [2024-11-17 09:36:54.411742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.627 qpair failed and we were unable to recover it. 00:36:49.627 [2024-11-17 09:36:54.411887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.627 [2024-11-17 09:36:54.411923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.627 qpair failed and we were unable to recover it. 00:36:49.627 [2024-11-17 09:36:54.412059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.627 [2024-11-17 09:36:54.412094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.627 qpair failed and we were unable to recover it. 00:36:49.627 [2024-11-17 09:36:54.412234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.627 [2024-11-17 09:36:54.412270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.627 qpair failed and we were unable to recover it. 00:36:49.627 [2024-11-17 09:36:54.412437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.627 [2024-11-17 09:36:54.412473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.627 qpair failed and we were unable to recover it. 00:36:49.627 [2024-11-17 09:36:54.412641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.627 [2024-11-17 09:36:54.412695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.627 qpair failed and we were unable to recover it. 00:36:49.627 [2024-11-17 09:36:54.412918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.627 [2024-11-17 09:36:54.412971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.627 qpair failed and we were unable to recover it. 00:36:49.627 [2024-11-17 09:36:54.413156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.627 [2024-11-17 09:36:54.413209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.627 qpair failed and we were unable to recover it. 00:36:49.627 [2024-11-17 09:36:54.413319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.627 [2024-11-17 09:36:54.413354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.627 qpair failed and we were unable to recover it. 
00:36:49.627 [2024-11-17 09:36:54.413516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.627 [2024-11-17 09:36:54.413571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.627 qpair failed and we were unable to recover it. 00:36:49.627 [2024-11-17 09:36:54.413717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.627 [2024-11-17 09:36:54.413770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.627 qpair failed and we were unable to recover it. 00:36:49.627 [2024-11-17 09:36:54.413964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.627 [2024-11-17 09:36:54.414000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.627 qpair failed and we were unable to recover it. 00:36:49.627 [2024-11-17 09:36:54.414245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.627 [2024-11-17 09:36:54.414294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.627 qpair failed and we were unable to recover it. 00:36:49.627 [2024-11-17 09:36:54.414432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.627 [2024-11-17 09:36:54.414468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.627 qpair failed and we were unable to recover it. 00:36:49.627 [2024-11-17 09:36:54.414602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.627 [2024-11-17 09:36:54.414637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.627 qpair failed and we were unable to recover it. 00:36:49.627 [2024-11-17 09:36:54.414794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.627 [2024-11-17 09:36:54.414832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.627 qpair failed and we were unable to recover it. 00:36:49.627 [2024-11-17 09:36:54.414986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.627 [2024-11-17 09:36:54.415026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.627 qpair failed and we were unable to recover it. 00:36:49.627 [2024-11-17 09:36:54.415177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.627 [2024-11-17 09:36:54.415216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.627 qpair failed and we were unable to recover it. 00:36:49.627 [2024-11-17 09:36:54.415356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.627 [2024-11-17 09:36:54.415399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.627 qpair failed and we were unable to recover it. 
00:36:49.627 [2024-11-17 09:36:54.415577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.627 [2024-11-17 09:36:54.415626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.627 qpair failed and we were unable to recover it. 00:36:49.627 [2024-11-17 09:36:54.415780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.627 [2024-11-17 09:36:54.415820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.627 qpair failed and we were unable to recover it. 00:36:49.627 [2024-11-17 09:36:54.415962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.627 [2024-11-17 09:36:54.416001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.627 qpair failed and we were unable to recover it. 00:36:49.627 [2024-11-17 09:36:54.416191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.627 [2024-11-17 09:36:54.416228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.627 qpair failed and we were unable to recover it. 00:36:49.627 [2024-11-17 09:36:54.416396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.627 [2024-11-17 09:36:54.416458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.627 qpair failed and we were unable to recover it. 00:36:49.627 [2024-11-17 09:36:54.416601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.627 [2024-11-17 09:36:54.416635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.627 qpair failed and we were unable to recover it. 00:36:49.627 [2024-11-17 09:36:54.416768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.627 [2024-11-17 09:36:54.416819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.627 qpair failed and we were unable to recover it. 00:36:49.627 [2024-11-17 09:36:54.416966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.627 [2024-11-17 09:36:54.417004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.628 qpair failed and we were unable to recover it. 00:36:49.628 [2024-11-17 09:36:54.417190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.628 [2024-11-17 09:36:54.417230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.628 qpair failed and we were unable to recover it. 00:36:49.628 [2024-11-17 09:36:54.417407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.628 [2024-11-17 09:36:54.417457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.628 qpair failed and we were unable to recover it. 
00:36:49.628 [2024-11-17 09:36:54.417593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.628 [2024-11-17 09:36:54.417641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.628 qpair failed and we were unable to recover it. 00:36:49.628 [2024-11-17 09:36:54.417813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.628 [2024-11-17 09:36:54.417849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.628 qpair failed and we were unable to recover it. 00:36:49.628 [2024-11-17 09:36:54.418121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.628 [2024-11-17 09:36:54.418179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.628 qpair failed and we were unable to recover it. 00:36:49.628 [2024-11-17 09:36:54.418361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.628 [2024-11-17 09:36:54.418407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.628 qpair failed and we were unable to recover it. 00:36:49.628 [2024-11-17 09:36:54.418546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.628 [2024-11-17 09:36:54.418582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.628 qpair failed and we were unable to recover it. 00:36:49.628 [2024-11-17 09:36:54.418755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.628 [2024-11-17 09:36:54.418791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.628 qpair failed and we were unable to recover it. 00:36:49.628 [2024-11-17 09:36:54.419052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.628 [2024-11-17 09:36:54.419118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.628 qpair failed and we were unable to recover it. 00:36:49.628 [2024-11-17 09:36:54.419325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.628 [2024-11-17 09:36:54.419363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.628 qpair failed and we were unable to recover it. 00:36:49.628 [2024-11-17 09:36:54.419533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.628 [2024-11-17 09:36:54.419578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.628 qpair failed and we were unable to recover it. 00:36:49.628 [2024-11-17 09:36:54.419748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.628 [2024-11-17 09:36:54.419783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.628 qpair failed and we were unable to recover it. 
00:36:49.628 [2024-11-17 09:36:54.419916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.628 [2024-11-17 09:36:54.419950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.628 qpair failed and we were unable to recover it. 00:36:49.628 [2024-11-17 09:36:54.420106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.628 [2024-11-17 09:36:54.420168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.628 qpair failed and we were unable to recover it. 00:36:49.628 [2024-11-17 09:36:54.420319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.628 [2024-11-17 09:36:54.420354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.628 qpair failed and we were unable to recover it. 00:36:49.628 [2024-11-17 09:36:54.420521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.628 [2024-11-17 09:36:54.420571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.628 qpair failed and we were unable to recover it. 00:36:49.628 [2024-11-17 09:36:54.420786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.628 [2024-11-17 09:36:54.420840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.628 qpair failed and we were unable to recover it. 00:36:49.628 [2024-11-17 09:36:54.421039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.628 [2024-11-17 09:36:54.421080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.628 qpair failed and we were unable to recover it. 00:36:49.628 [2024-11-17 09:36:54.421226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.628 [2024-11-17 09:36:54.421276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.628 qpair failed and we were unable to recover it. 00:36:49.628 [2024-11-17 09:36:54.421429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.628 [2024-11-17 09:36:54.421464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.628 qpair failed and we were unable to recover it. 00:36:49.628 [2024-11-17 09:36:54.421582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.628 [2024-11-17 09:36:54.421616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.628 qpair failed and we were unable to recover it. 00:36:49.628 [2024-11-17 09:36:54.421750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.628 [2024-11-17 09:36:54.421785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.628 qpair failed and we were unable to recover it. 
00:36:49.628 [2024-11-17 09:36:54.421922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.628 [2024-11-17 09:36:54.421958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.628 qpair failed and we were unable to recover it. 00:36:49.628 [2024-11-17 09:36:54.422170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.628 [2024-11-17 09:36:54.422238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.628 qpair failed and we were unable to recover it. 00:36:49.628 [2024-11-17 09:36:54.422394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.628 [2024-11-17 09:36:54.422430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.628 qpair failed and we were unable to recover it. 00:36:49.628 [2024-11-17 09:36:54.422582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.628 [2024-11-17 09:36:54.422631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.628 qpair failed and we were unable to recover it. 00:36:49.628 [2024-11-17 09:36:54.422800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.628 [2024-11-17 09:36:54.422853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.628 qpair failed and we were unable to recover it. 00:36:49.628 [2024-11-17 09:36:54.423037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.628 [2024-11-17 09:36:54.423076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.628 qpair failed and we were unable to recover it. 00:36:49.628 [2024-11-17 09:36:54.423217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.628 [2024-11-17 09:36:54.423268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.628 qpair failed and we were unable to recover it. 00:36:49.628 [2024-11-17 09:36:54.423423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.628 [2024-11-17 09:36:54.423473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.628 qpair failed and we were unable to recover it. 00:36:49.628 [2024-11-17 09:36:54.423645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.628 [2024-11-17 09:36:54.423682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.628 qpair failed and we were unable to recover it. 00:36:49.628 [2024-11-17 09:36:54.423808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.628 [2024-11-17 09:36:54.423848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.628 qpair failed and we were unable to recover it. 
00:36:49.628 [2024-11-17 09:36:54.423980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.628 [2024-11-17 09:36:54.424020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.628 qpair failed and we were unable to recover it. 00:36:49.628 [2024-11-17 09:36:54.424219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.628 [2024-11-17 09:36:54.424284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.629 qpair failed and we were unable to recover it. 00:36:49.629 [2024-11-17 09:36:54.424457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.629 [2024-11-17 09:36:54.424493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.629 qpair failed and we were unable to recover it. 00:36:49.629 [2024-11-17 09:36:54.424640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.629 [2024-11-17 09:36:54.424674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.629 qpair failed and we were unable to recover it. 00:36:49.629 [2024-11-17 09:36:54.424837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.629 [2024-11-17 09:36:54.424871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.629 qpair failed and we were unable to recover it. 00:36:49.629 [2024-11-17 09:36:54.425123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.629 [2024-11-17 09:36:54.425186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.629 qpair failed and we were unable to recover it. 00:36:49.629 [2024-11-17 09:36:54.425309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.629 [2024-11-17 09:36:54.425347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.629 qpair failed and we were unable to recover it. 00:36:49.629 [2024-11-17 09:36:54.425508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.629 [2024-11-17 09:36:54.425542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.629 qpair failed and we were unable to recover it. 00:36:49.629 [2024-11-17 09:36:54.425688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.629 [2024-11-17 09:36:54.425726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.629 qpair failed and we were unable to recover it. 00:36:49.629 [2024-11-17 09:36:54.425922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.629 [2024-11-17 09:36:54.425975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.629 qpair failed and we were unable to recover it. 
00:36:49.629 [2024-11-17 09:36:54.426131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.629 [2024-11-17 09:36:54.426184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.629 qpair failed and we were unable to recover it. 00:36:49.629 [2024-11-17 09:36:54.426327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.629 [2024-11-17 09:36:54.426363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.629 qpair failed and we were unable to recover it. 00:36:49.629 [2024-11-17 09:36:54.426558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.629 [2024-11-17 09:36:54.426606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.629 qpair failed and we were unable to recover it. 00:36:49.629 [2024-11-17 09:36:54.426796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.629 [2024-11-17 09:36:54.426872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.629 qpair failed and we were unable to recover it. 00:36:49.629 [2024-11-17 09:36:54.427084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.629 [2024-11-17 09:36:54.427141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.629 qpair failed and we were unable to recover it. 00:36:49.629 [2024-11-17 09:36:54.427309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.629 [2024-11-17 09:36:54.427344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.629 qpair failed and we were unable to recover it. 00:36:49.629 [2024-11-17 09:36:54.427486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.629 [2024-11-17 09:36:54.427520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.629 qpair failed and we were unable to recover it. 00:36:49.629 [2024-11-17 09:36:54.427644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.629 [2024-11-17 09:36:54.427678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.629 qpair failed and we were unable to recover it. 00:36:49.629 [2024-11-17 09:36:54.427864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.629 [2024-11-17 09:36:54.427902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.629 qpair failed and we were unable to recover it. 00:36:49.629 [2024-11-17 09:36:54.428056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.629 [2024-11-17 09:36:54.428094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.629 qpair failed and we were unable to recover it. 
00:36:49.629 [2024-11-17 09:36:54.428248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.629 [2024-11-17 09:36:54.428285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.629 qpair failed and we were unable to recover it. 00:36:49.629 [2024-11-17 09:36:54.428468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.629 [2024-11-17 09:36:54.428518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.629 qpair failed and we were unable to recover it. 00:36:49.629 [2024-11-17 09:36:54.428702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.629 [2024-11-17 09:36:54.428751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.629 qpair failed and we were unable to recover it. 00:36:49.629 [2024-11-17 09:36:54.428888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.629 [2024-11-17 09:36:54.428928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.629 qpair failed and we were unable to recover it. 00:36:49.629 [2024-11-17 09:36:54.429114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.629 [2024-11-17 09:36:54.429152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.629 qpair failed and we were unable to recover it. 00:36:49.629 [2024-11-17 09:36:54.429295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.629 [2024-11-17 09:36:54.429329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.629 qpair failed and we were unable to recover it. 00:36:49.629 [2024-11-17 09:36:54.429480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.629 [2024-11-17 09:36:54.429515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.629 qpair failed and we were unable to recover it. 00:36:49.629 [2024-11-17 09:36:54.429682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.629 [2024-11-17 09:36:54.429735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.629 qpair failed and we were unable to recover it. 00:36:49.629 [2024-11-17 09:36:54.429909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.629 [2024-11-17 09:36:54.429946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.629 qpair failed and we were unable to recover it. 00:36:49.629 [2024-11-17 09:36:54.430094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.629 [2024-11-17 09:36:54.430132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.629 qpair failed and we were unable to recover it. 
00:36:49.629 [2024-11-17 09:36:54.430309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.629 [2024-11-17 09:36:54.430348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.629 qpair failed and we were unable to recover it. 00:36:49.629 [2024-11-17 09:36:54.430530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.629 [2024-11-17 09:36:54.430579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.629 qpair failed and we were unable to recover it. 00:36:49.629 [2024-11-17 09:36:54.430758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.629 [2024-11-17 09:36:54.430812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.629 qpair failed and we were unable to recover it. 00:36:49.629 [2024-11-17 09:36:54.430972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.629 [2024-11-17 09:36:54.431011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.629 qpair failed and we were unable to recover it. 00:36:49.629 [2024-11-17 09:36:54.431164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.629 [2024-11-17 09:36:54.431201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.629 qpair failed and we were unable to recover it. 00:36:49.629 [2024-11-17 09:36:54.431364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.629 [2024-11-17 09:36:54.431412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.629 qpair failed and we were unable to recover it. 00:36:49.629 [2024-11-17 09:36:54.431550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.629 [2024-11-17 09:36:54.431585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.629 qpair failed and we were unable to recover it. 00:36:49.629 [2024-11-17 09:36:54.431713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.629 [2024-11-17 09:36:54.431747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.629 qpair failed and we were unable to recover it. 00:36:49.629 [2024-11-17 09:36:54.431901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.630 [2024-11-17 09:36:54.431941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.630 qpair failed and we were unable to recover it. 00:36:49.630 [2024-11-17 09:36:54.432137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.630 [2024-11-17 09:36:54.432175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.630 qpair failed and we were unable to recover it. 
00:36:49.630 [2024-11-17 09:36:54.432338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.630 [2024-11-17 09:36:54.432416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.630 qpair failed and we were unable to recover it. 00:36:49.630 [2024-11-17 09:36:54.432568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.630 [2024-11-17 09:36:54.432607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.630 qpair failed and we were unable to recover it. 00:36:49.630 [2024-11-17 09:36:54.432733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.630 [2024-11-17 09:36:54.432770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.630 qpair failed and we were unable to recover it. 00:36:49.630 [2024-11-17 09:36:54.432953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.630 [2024-11-17 09:36:54.433011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.630 qpair failed and we were unable to recover it. 00:36:49.630 [2024-11-17 09:36:54.433149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.630 [2024-11-17 09:36:54.433185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.630 qpair failed and we were unable to recover it. 00:36:49.630 [2024-11-17 09:36:54.433332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.630 [2024-11-17 09:36:54.433381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.630 qpair failed and we were unable to recover it. 00:36:49.630 [2024-11-17 09:36:54.433533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.630 [2024-11-17 09:36:54.433580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.630 qpair failed and we were unable to recover it. 00:36:49.630 [2024-11-17 09:36:54.433716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.630 [2024-11-17 09:36:54.433754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.630 qpair failed and we were unable to recover it. 00:36:49.630 [2024-11-17 09:36:54.433894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.630 [2024-11-17 09:36:54.433929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.630 qpair failed and we were unable to recover it. 00:36:49.630 [2024-11-17 09:36:54.434034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.630 [2024-11-17 09:36:54.434068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.630 qpair failed and we were unable to recover it. 
00:36:49.630 [2024-11-17 09:36:54.434206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.630 [2024-11-17 09:36:54.434241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.630 qpair failed and we were unable to recover it. 00:36:49.630 [2024-11-17 09:36:54.434463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.630 [2024-11-17 09:36:54.434500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.630 qpair failed and we were unable to recover it. 00:36:49.630 [2024-11-17 09:36:54.434657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.630 [2024-11-17 09:36:54.434706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.630 qpair failed and we were unable to recover it. 00:36:49.630 [2024-11-17 09:36:54.434849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.630 [2024-11-17 09:36:54.434912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.630 qpair failed and we were unable to recover it. 00:36:49.630 [2024-11-17 09:36:54.435141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.630 [2024-11-17 09:36:54.435201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.630 qpair failed and we were unable to recover it. 00:36:49.630 [2024-11-17 09:36:54.435346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.630 [2024-11-17 09:36:54.435388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.630 qpair failed and we were unable to recover it. 00:36:49.630 [2024-11-17 09:36:54.435537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.630 [2024-11-17 09:36:54.435586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.630 qpair failed and we were unable to recover it. 00:36:49.630 [2024-11-17 09:36:54.435833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.630 [2024-11-17 09:36:54.435896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.630 qpair failed and we were unable to recover it. 00:36:49.630 [2024-11-17 09:36:54.436027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.630 [2024-11-17 09:36:54.436080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.630 qpair failed and we were unable to recover it. 00:36:49.630 [2024-11-17 09:36:54.436252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.630 [2024-11-17 09:36:54.436290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.630 qpair failed and we were unable to recover it. 
00:36:49.630 [2024-11-17 09:36:54.436469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.630 [2024-11-17 09:36:54.436518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.630 qpair failed and we were unable to recover it. 00:36:49.630 [2024-11-17 09:36:54.436695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.630 [2024-11-17 09:36:54.436731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.630 qpair failed and we were unable to recover it. 00:36:49.630 [2024-11-17 09:36:54.436896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.630 [2024-11-17 09:36:54.436933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.630 qpair failed and we were unable to recover it. 00:36:49.630 [2024-11-17 09:36:54.437178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.630 [2024-11-17 09:36:54.437229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.630 qpair failed and we were unable to recover it. 00:36:49.630 [2024-11-17 09:36:54.437386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.630 [2024-11-17 09:36:54.437421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.630 qpair failed and we were unable to recover it. 00:36:49.630 [2024-11-17 09:36:54.437550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.630 [2024-11-17 09:36:54.437584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.630 qpair failed and we were unable to recover it. 00:36:49.630 [2024-11-17 09:36:54.437727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.630 [2024-11-17 09:36:54.437763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.630 qpair failed and we were unable to recover it. 00:36:49.630 [2024-11-17 09:36:54.438000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.630 [2024-11-17 09:36:54.438110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.630 qpair failed and we were unable to recover it. 00:36:49.630 [2024-11-17 09:36:54.438276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.630 [2024-11-17 09:36:54.438313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.630 qpair failed and we were unable to recover it. 00:36:49.630 [2024-11-17 09:36:54.438479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.630 [2024-11-17 09:36:54.438514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.630 qpair failed and we were unable to recover it. 
00:36:49.630 [2024-11-17 09:36:54.438653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.630 [2024-11-17 09:36:54.438688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.630 qpair failed and we were unable to recover it. 00:36:49.630 [2024-11-17 09:36:54.438841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.630 [2024-11-17 09:36:54.438879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.630 qpair failed and we were unable to recover it. 00:36:49.630 [2024-11-17 09:36:54.439046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.630 [2024-11-17 09:36:54.439096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.630 qpair failed and we were unable to recover it. 00:36:49.630 [2024-11-17 09:36:54.439298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.630 [2024-11-17 09:36:54.439336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.630 qpair failed and we were unable to recover it. 00:36:49.630 [2024-11-17 09:36:54.439496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.630 [2024-11-17 09:36:54.439530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.630 qpair failed and we were unable to recover it. 00:36:49.630 [2024-11-17 09:36:54.439694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.630 [2024-11-17 09:36:54.439730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.631 qpair failed and we were unable to recover it. 00:36:49.631 [2024-11-17 09:36:54.439897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.631 [2024-11-17 09:36:54.439949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.631 qpair failed and we were unable to recover it. 00:36:49.631 [2024-11-17 09:36:54.440130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.631 [2024-11-17 09:36:54.440168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.631 qpair failed and we were unable to recover it. 00:36:49.631 [2024-11-17 09:36:54.440316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.631 [2024-11-17 09:36:54.440353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.631 qpair failed and we were unable to recover it. 00:36:49.631 [2024-11-17 09:36:54.440513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.631 [2024-11-17 09:36:54.440547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.631 qpair failed and we were unable to recover it. 
00:36:49.631 [2024-11-17 09:36:54.440675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.631 [2024-11-17 09:36:54.440725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.631 qpair failed and we were unable to recover it. 00:36:49.631 [2024-11-17 09:36:54.440866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.631 [2024-11-17 09:36:54.440922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.631 qpair failed and we were unable to recover it. 00:36:49.631 [2024-11-17 09:36:54.441120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.631 [2024-11-17 09:36:54.441181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.631 qpair failed and we were unable to recover it. 00:36:49.631 [2024-11-17 09:36:54.441327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.631 [2024-11-17 09:36:54.441366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.631 qpair failed and we were unable to recover it. 00:36:49.631 [2024-11-17 09:36:54.441539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.631 [2024-11-17 09:36:54.441589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.631 qpair failed and we were unable to recover it. 00:36:49.631 [2024-11-17 09:36:54.441743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.631 [2024-11-17 09:36:54.441797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.631 qpair failed and we were unable to recover it. 00:36:49.631 [2024-11-17 09:36:54.441964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.631 [2024-11-17 09:36:54.442022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.631 qpair failed and we were unable to recover it. 00:36:49.631 [2024-11-17 09:36:54.442272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.631 [2024-11-17 09:36:54.442306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.631 qpair failed and we were unable to recover it. 00:36:49.631 [2024-11-17 09:36:54.442444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.631 [2024-11-17 09:36:54.442478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.631 qpair failed and we were unable to recover it. 00:36:49.631 [2024-11-17 09:36:54.442636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.631 [2024-11-17 09:36:54.442689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.631 qpair failed and we were unable to recover it. 
00:36:49.631 [2024-11-17 09:36:54.442856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.631 [2024-11-17 09:36:54.442908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.631 qpair failed and we were unable to recover it. 00:36:49.631 [2024-11-17 09:36:54.443145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.631 [2024-11-17 09:36:54.443209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.631 qpair failed and we were unable to recover it. 00:36:49.631 [2024-11-17 09:36:54.443346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.631 [2024-11-17 09:36:54.443411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.631 qpair failed and we were unable to recover it. 00:36:49.631 [2024-11-17 09:36:54.443553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.631 [2024-11-17 09:36:54.443591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.631 qpair failed and we were unable to recover it. 00:36:49.631 [2024-11-17 09:36:54.443761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.631 [2024-11-17 09:36:54.443798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.631 qpair failed and we were unable to recover it. 00:36:49.631 [2024-11-17 09:36:54.443970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.631 [2024-11-17 09:36:54.444007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.631 qpair failed and we were unable to recover it. 00:36:49.631 [2024-11-17 09:36:54.444148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.631 [2024-11-17 09:36:54.444185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.631 qpair failed and we were unable to recover it. 00:36:49.631 [2024-11-17 09:36:54.444364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.631 [2024-11-17 09:36:54.444421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.631 qpair failed and we were unable to recover it. 00:36:49.631 [2024-11-17 09:36:54.444603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.631 [2024-11-17 09:36:54.444652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.631 qpair failed and we were unable to recover it. 00:36:49.631 [2024-11-17 09:36:54.444822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.631 [2024-11-17 09:36:54.444862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.631 qpair failed and we were unable to recover it. 
00:36:49.631 [2024-11-17 09:36:54.445002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.631 [2024-11-17 09:36:54.445059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.631 qpair failed and we were unable to recover it. 00:36:49.631 [2024-11-17 09:36:54.445286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.631 [2024-11-17 09:36:54.445323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.631 qpair failed and we were unable to recover it. 00:36:49.631 [2024-11-17 09:36:54.445489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.631 [2024-11-17 09:36:54.445524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.631 qpair failed and we were unable to recover it. 00:36:49.631 [2024-11-17 09:36:54.445642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.631 [2024-11-17 09:36:54.445696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.631 qpair failed and we were unable to recover it. 00:36:49.631 [2024-11-17 09:36:54.445869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.631 [2024-11-17 09:36:54.445906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.631 qpair failed and we were unable to recover it. 00:36:49.631 [2024-11-17 09:36:54.446054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.631 [2024-11-17 09:36:54.446091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.631 qpair failed and we were unable to recover it. 00:36:49.631 [2024-11-17 09:36:54.446227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.631 [2024-11-17 09:36:54.446264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.631 qpair failed and we were unable to recover it. 00:36:49.631 [2024-11-17 09:36:54.446455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.631 [2024-11-17 09:36:54.446505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.631 qpair failed and we were unable to recover it. 00:36:49.631 [2024-11-17 09:36:54.446626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.631 [2024-11-17 09:36:54.446663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.631 qpair failed and we were unable to recover it. 00:36:49.631 [2024-11-17 09:36:54.446816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.631 [2024-11-17 09:36:54.446916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.631 qpair failed and we were unable to recover it. 
00:36:49.631 [2024-11-17 09:36:54.447065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.631 [2024-11-17 09:36:54.447105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.631 qpair failed and we were unable to recover it. 00:36:49.631 [2024-11-17 09:36:54.447269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.631 [2024-11-17 09:36:54.447303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.631 qpair failed and we were unable to recover it. 00:36:49.631 [2024-11-17 09:36:54.447409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.632 [2024-11-17 09:36:54.447444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.632 qpair failed and we were unable to recover it. 00:36:49.632 [2024-11-17 09:36:54.447603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.632 [2024-11-17 09:36:54.447642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.632 qpair failed and we were unable to recover it. 00:36:49.632 [2024-11-17 09:36:54.447864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.632 [2024-11-17 09:36:54.447898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.632 qpair failed and we were unable to recover it. 00:36:49.632 [2024-11-17 09:36:54.448150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.632 [2024-11-17 09:36:54.448210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.632 qpair failed and we were unable to recover it. 00:36:49.632 [2024-11-17 09:36:54.448339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.632 [2024-11-17 09:36:54.448382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.632 qpair failed and we were unable to recover it. 00:36:49.632 [2024-11-17 09:36:54.448516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.632 [2024-11-17 09:36:54.448565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.632 qpair failed and we were unable to recover it. 00:36:49.632 [2024-11-17 09:36:54.448738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.632 [2024-11-17 09:36:54.448776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.632 qpair failed and we were unable to recover it. 00:36:49.632 [2024-11-17 09:36:54.449016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.632 [2024-11-17 09:36:54.449056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.632 qpair failed and we were unable to recover it. 
00:36:49.632 [2024-11-17 09:36:54.449211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.632 [2024-11-17 09:36:54.449250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.632 qpair failed and we were unable to recover it. 00:36:49.632 [2024-11-17 09:36:54.449396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.632 [2024-11-17 09:36:54.449448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.632 qpair failed and we were unable to recover it. 00:36:49.632 [2024-11-17 09:36:54.449590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.632 [2024-11-17 09:36:54.449627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.632 qpair failed and we were unable to recover it. 00:36:49.632 [2024-11-17 09:36:54.449814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.632 [2024-11-17 09:36:54.449871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.632 qpair failed and we were unable to recover it. 00:36:49.632 [2024-11-17 09:36:54.450127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.632 [2024-11-17 09:36:54.450197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.632 qpair failed and we were unable to recover it. 00:36:49.632 [2024-11-17 09:36:54.450387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.632 [2024-11-17 09:36:54.450422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.632 qpair failed and we were unable to recover it. 00:36:49.632 [2024-11-17 09:36:54.450574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.632 [2024-11-17 09:36:54.450623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.632 qpair failed and we were unable to recover it. 00:36:49.632 [2024-11-17 09:36:54.450743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.632 [2024-11-17 09:36:54.450799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.632 qpair failed and we were unable to recover it. 00:36:49.632 [2024-11-17 09:36:54.450981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.632 [2024-11-17 09:36:54.451042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.632 qpair failed and we were unable to recover it. 00:36:49.632 [2024-11-17 09:36:54.451193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.632 [2024-11-17 09:36:54.451232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.632 qpair failed and we were unable to recover it. 
00:36:49.632 [2024-11-17 09:36:54.451397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.632 [2024-11-17 09:36:54.451433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.632 qpair failed and we were unable to recover it. 00:36:49.632 [2024-11-17 09:36:54.451572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.632 [2024-11-17 09:36:54.451606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.632 qpair failed and we were unable to recover it. 00:36:49.632 [2024-11-17 09:36:54.451738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.632 [2024-11-17 09:36:54.451772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.632 qpair failed and we were unable to recover it. 00:36:49.632 [2024-11-17 09:36:54.451905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.632 [2024-11-17 09:36:54.451945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.632 qpair failed and we were unable to recover it. 00:36:49.632 [2024-11-17 09:36:54.452084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.632 [2024-11-17 09:36:54.452139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.632 qpair failed and we were unable to recover it. 00:36:49.632 [2024-11-17 09:36:54.452248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.632 [2024-11-17 09:36:54.452283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.632 qpair failed and we were unable to recover it. 00:36:49.632 [2024-11-17 09:36:54.452430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.632 [2024-11-17 09:36:54.452478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.632 qpair failed and we were unable to recover it. 00:36:49.632 [2024-11-17 09:36:54.452639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.632 [2024-11-17 09:36:54.452679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.632 qpair failed and we were unable to recover it. 00:36:49.632 [2024-11-17 09:36:54.452796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.632 [2024-11-17 09:36:54.452835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.632 qpair failed and we were unable to recover it. 00:36:49.632 [2024-11-17 09:36:54.452976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.632 [2024-11-17 09:36:54.453014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.632 qpair failed and we were unable to recover it. 
00:36:49.632 [2024-11-17 09:36:54.453197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.632 [2024-11-17 09:36:54.453256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.632 qpair failed and we were unable to recover it. 00:36:49.632 [2024-11-17 09:36:54.453477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.632 [2024-11-17 09:36:54.453531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.632 qpair failed and we were unable to recover it. 00:36:49.632 [2024-11-17 09:36:54.453675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.632 [2024-11-17 09:36:54.453713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.632 qpair failed and we were unable to recover it. 00:36:49.632 [2024-11-17 09:36:54.453874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.632 [2024-11-17 09:36:54.453925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.632 qpair failed and we were unable to recover it. 00:36:49.632 [2024-11-17 09:36:54.454087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.632 [2024-11-17 09:36:54.454122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.632 qpair failed and we were unable to recover it. 00:36:49.632 [2024-11-17 09:36:54.454335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.632 [2024-11-17 09:36:54.454379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.632 qpair failed and we were unable to recover it. 00:36:49.632 [2024-11-17 09:36:54.454546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.632 [2024-11-17 09:36:54.454599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.632 qpair failed and we were unable to recover it. 00:36:49.632 [2024-11-17 09:36:54.454761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.632 [2024-11-17 09:36:54.454826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.632 qpair failed and we were unable to recover it. 00:36:49.632 [2024-11-17 09:36:54.455006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.633 [2024-11-17 09:36:54.455075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.633 qpair failed and we were unable to recover it. 00:36:49.633 [2024-11-17 09:36:54.455210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.633 [2024-11-17 09:36:54.455245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.633 qpair failed and we were unable to recover it. 
00:36:49.633 [2024-11-17 09:36:54.455383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.633 [2024-11-17 09:36:54.455433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.633 qpair failed and we were unable to recover it. 00:36:49.633 [2024-11-17 09:36:54.455593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.633 [2024-11-17 09:36:54.455643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.633 qpair failed and we were unable to recover it. 00:36:49.633 [2024-11-17 09:36:54.455791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.633 [2024-11-17 09:36:54.455827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.633 qpair failed and we were unable to recover it. 00:36:49.633 [2024-11-17 09:36:54.455965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.633 [2024-11-17 09:36:54.456000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.633 qpair failed and we were unable to recover it. 00:36:49.633 [2024-11-17 09:36:54.456128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.633 [2024-11-17 09:36:54.456163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.633 qpair failed and we were unable to recover it. 00:36:49.633 [2024-11-17 09:36:54.456276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.633 [2024-11-17 09:36:54.456311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.633 qpair failed and we were unable to recover it. 00:36:49.633 [2024-11-17 09:36:54.456457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.633 [2024-11-17 09:36:54.456492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.633 qpair failed and we were unable to recover it. 00:36:49.633 [2024-11-17 09:36:54.456650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.633 [2024-11-17 09:36:54.456700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.633 qpair failed and we were unable to recover it. 00:36:49.633 [2024-11-17 09:36:54.456817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.633 [2024-11-17 09:36:54.456854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.633 qpair failed and we were unable to recover it. 00:36:49.633 [2024-11-17 09:36:54.457058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.633 [2024-11-17 09:36:54.457113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.633 qpair failed and we were unable to recover it. 
00:36:49.633 [2024-11-17 09:36:54.457242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.633 [2024-11-17 09:36:54.457281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.633 qpair failed and we were unable to recover it. 00:36:49.633 [2024-11-17 09:36:54.457412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.633 [2024-11-17 09:36:54.457462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.633 qpair failed and we were unable to recover it. 00:36:49.633 [2024-11-17 09:36:54.457577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.633 [2024-11-17 09:36:54.457613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.633 qpair failed and we were unable to recover it. 00:36:49.633 [2024-11-17 09:36:54.457780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.633 [2024-11-17 09:36:54.457816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.633 qpair failed and we were unable to recover it. 00:36:49.633 [2024-11-17 09:36:54.458042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.633 [2024-11-17 09:36:54.458113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.633 qpair failed and we were unable to recover it. 00:36:49.633 [2024-11-17 09:36:54.458248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.633 [2024-11-17 09:36:54.458287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.633 qpair failed and we were unable to recover it. 00:36:49.633 [2024-11-17 09:36:54.458454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.633 [2024-11-17 09:36:54.458490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.633 qpair failed and we were unable to recover it. 00:36:49.633 [2024-11-17 09:36:54.458683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.633 [2024-11-17 09:36:54.458759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.633 qpair failed and we were unable to recover it. 00:36:49.633 [2024-11-17 09:36:54.458881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.633 [2024-11-17 09:36:54.458932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.633 qpair failed and we were unable to recover it. 00:36:49.633 [2024-11-17 09:36:54.459078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.633 [2024-11-17 09:36:54.459143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.633 qpair failed and we were unable to recover it. 
00:36:49.633 [2024-11-17 09:36:54.459300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.633 [2024-11-17 09:36:54.459338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.633 qpair failed and we were unable to recover it. 00:36:49.633 [2024-11-17 09:36:54.459506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.633 [2024-11-17 09:36:54.459540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.633 qpair failed and we were unable to recover it. 00:36:49.633 [2024-11-17 09:36:54.459675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.633 [2024-11-17 09:36:54.459710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.633 qpair failed and we were unable to recover it. 00:36:49.633 [2024-11-17 09:36:54.459861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.633 [2024-11-17 09:36:54.459898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.633 qpair failed and we were unable to recover it. 00:36:49.633 [2024-11-17 09:36:54.460058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.633 [2024-11-17 09:36:54.460096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.633 qpair failed and we were unable to recover it. 00:36:49.633 [2024-11-17 09:36:54.460258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.633 [2024-11-17 09:36:54.460295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.633 qpair failed and we were unable to recover it. 00:36:49.633 [2024-11-17 09:36:54.460452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.633 [2024-11-17 09:36:54.460487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.633 qpair failed and we were unable to recover it. 00:36:49.634 [2024-11-17 09:36:54.460618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.634 [2024-11-17 09:36:54.460668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.634 qpair failed and we were unable to recover it. 00:36:49.634 [2024-11-17 09:36:54.460851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.634 [2024-11-17 09:36:54.460888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.634 qpair failed and we were unable to recover it. 00:36:49.634 [2024-11-17 09:36:54.461028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.634 [2024-11-17 09:36:54.461066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.634 qpair failed and we were unable to recover it. 
00:36:49.634 [2024-11-17 09:36:54.461222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.634 [2024-11-17 09:36:54.461260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.634 qpair failed and we were unable to recover it. 00:36:49.634 [2024-11-17 09:36:54.461445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.634 [2024-11-17 09:36:54.461480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.634 qpair failed and we were unable to recover it. 00:36:49.634 [2024-11-17 09:36:54.461608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.634 [2024-11-17 09:36:54.461643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.634 qpair failed and we were unable to recover it. 00:36:49.634 [2024-11-17 09:36:54.461792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.634 [2024-11-17 09:36:54.461830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.634 qpair failed and we were unable to recover it. 00:36:49.634 [2024-11-17 09:36:54.461965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.634 [2024-11-17 09:36:54.462003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.634 qpair failed and we were unable to recover it. 00:36:49.634 [2024-11-17 09:36:54.462155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.634 [2024-11-17 09:36:54.462192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.634 qpair failed and we were unable to recover it. 00:36:49.634 [2024-11-17 09:36:54.462305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.634 [2024-11-17 09:36:54.462343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.634 qpair failed and we were unable to recover it. 00:36:49.634 [2024-11-17 09:36:54.462528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.634 [2024-11-17 09:36:54.462563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.634 qpair failed and we were unable to recover it. 00:36:49.634 [2024-11-17 09:36:54.462723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.634 [2024-11-17 09:36:54.462760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.634 qpair failed and we were unable to recover it. 00:36:49.634 [2024-11-17 09:36:54.462903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.634 [2024-11-17 09:36:54.462941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.634 qpair failed and we were unable to recover it. 
00:36:49.634 [2024-11-17 09:36:54.463087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.634 [2024-11-17 09:36:54.463125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.634 qpair failed and we were unable to recover it. 00:36:49.634 [2024-11-17 09:36:54.463304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.634 [2024-11-17 09:36:54.463342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.634 qpair failed and we were unable to recover it. 00:36:49.634 [2024-11-17 09:36:54.463484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.634 [2024-11-17 09:36:54.463533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.634 qpair failed and we were unable to recover it. 00:36:49.634 [2024-11-17 09:36:54.463651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.634 [2024-11-17 09:36:54.463688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.634 qpair failed and we were unable to recover it. 00:36:49.634 [2024-11-17 09:36:54.463850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.634 [2024-11-17 09:36:54.463904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.634 qpair failed and we were unable to recover it. 00:36:49.634 [2024-11-17 09:36:54.464050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.634 [2024-11-17 09:36:54.464104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.634 qpair failed and we were unable to recover it. 00:36:49.634 [2024-11-17 09:36:54.464242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.634 [2024-11-17 09:36:54.464277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.634 qpair failed and we were unable to recover it. 00:36:49.634 [2024-11-17 09:36:54.464410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.634 [2024-11-17 09:36:54.464445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.634 qpair failed and we were unable to recover it. 00:36:49.634 [2024-11-17 09:36:54.464549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.634 [2024-11-17 09:36:54.464583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.634 qpair failed and we were unable to recover it. 00:36:49.634 [2024-11-17 09:36:54.464765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.634 [2024-11-17 09:36:54.464802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.634 qpair failed and we were unable to recover it. 
00:36:49.634 [2024-11-17 09:36:54.465000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.634 [2024-11-17 09:36:54.465075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.634 qpair failed and we were unable to recover it. 00:36:49.634 [2024-11-17 09:36:54.465238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.634 [2024-11-17 09:36:54.465277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.634 qpair failed and we were unable to recover it. 00:36:49.634 [2024-11-17 09:36:54.465396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.634 [2024-11-17 09:36:54.465450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.634 qpair failed and we were unable to recover it. 00:36:49.634 [2024-11-17 09:36:54.465608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.634 [2024-11-17 09:36:54.465641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.634 qpair failed and we were unable to recover it. 00:36:49.634 [2024-11-17 09:36:54.465876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.634 [2024-11-17 09:36:54.465913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.634 qpair failed and we were unable to recover it. 00:36:49.634 [2024-11-17 09:36:54.466027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.634 [2024-11-17 09:36:54.466064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.634 qpair failed and we were unable to recover it. 00:36:49.634 [2024-11-17 09:36:54.466212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.634 [2024-11-17 09:36:54.466249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.634 qpair failed and we were unable to recover it. 00:36:49.634 [2024-11-17 09:36:54.466383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.634 [2024-11-17 09:36:54.466434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.634 qpair failed and we were unable to recover it. 00:36:49.634 [2024-11-17 09:36:54.466584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.634 [2024-11-17 09:36:54.466634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.634 qpair failed and we were unable to recover it. 00:36:49.634 [2024-11-17 09:36:54.466861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.634 [2024-11-17 09:36:54.466917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.634 qpair failed and we were unable to recover it. 
00:36:49.634 [2024-11-17 09:36:54.467105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.635 [2024-11-17 09:36:54.467159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.635 qpair failed and we were unable to recover it. 00:36:49.635 [2024-11-17 09:36:54.467294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.635 [2024-11-17 09:36:54.467329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.635 qpair failed and we were unable to recover it. 00:36:49.635 [2024-11-17 09:36:54.467485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.635 [2024-11-17 09:36:54.467520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.635 qpair failed and we were unable to recover it. 00:36:49.635 [2024-11-17 09:36:54.467634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.635 [2024-11-17 09:36:54.467668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.635 qpair failed and we were unable to recover it. 00:36:49.635 [2024-11-17 09:36:54.467832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.635 [2024-11-17 09:36:54.467866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.635 qpair failed and we were unable to recover it. 00:36:49.635 [2024-11-17 09:36:54.467991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.635 [2024-11-17 09:36:54.468028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.635 qpair failed and we were unable to recover it. 00:36:49.635 [2024-11-17 09:36:54.468204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.635 [2024-11-17 09:36:54.468241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.635 qpair failed and we were unable to recover it. 00:36:49.635 [2024-11-17 09:36:54.468388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.635 [2024-11-17 09:36:54.468440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.635 qpair failed and we were unable to recover it. 00:36:49.635 [2024-11-17 09:36:54.468561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.635 [2024-11-17 09:36:54.468609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.635 qpair failed and we were unable to recover it. 00:36:49.635 [2024-11-17 09:36:54.468720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.635 [2024-11-17 09:36:54.468776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.635 qpair failed and we were unable to recover it. 
00:36:49.635 [2024-11-17 09:36:54.468973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.635 [2024-11-17 09:36:54.469030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.635 qpair failed and we were unable to recover it. 00:36:49.635 [2024-11-17 09:36:54.469211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.635 [2024-11-17 09:36:54.469276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.635 qpair failed and we were unable to recover it. 00:36:49.635 [2024-11-17 09:36:54.469402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.635 [2024-11-17 09:36:54.469455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.635 qpair failed and we were unable to recover it. 00:36:49.635 [2024-11-17 09:36:54.469619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.635 [2024-11-17 09:36:54.469653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.635 qpair failed and we were unable to recover it. 00:36:49.635 [2024-11-17 09:36:54.469788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.635 [2024-11-17 09:36:54.469827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.635 qpair failed and we were unable to recover it. 00:36:49.635 [2024-11-17 09:36:54.470035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.635 [2024-11-17 09:36:54.470093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.635 qpair failed and we were unable to recover it. 00:36:49.635 [2024-11-17 09:36:54.470222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.635 [2024-11-17 09:36:54.470261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.635 qpair failed and we were unable to recover it. 00:36:49.635 [2024-11-17 09:36:54.470395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.635 [2024-11-17 09:36:54.470447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.635 qpair failed and we were unable to recover it. 00:36:49.635 [2024-11-17 09:36:54.470607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.635 [2024-11-17 09:36:54.470667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.635 qpair failed and we were unable to recover it. 00:36:49.635 [2024-11-17 09:36:54.470772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.635 [2024-11-17 09:36:54.470808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.635 qpair failed and we were unable to recover it. 
00:36:49.635 [2024-11-17 09:36:54.470967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.635 [2024-11-17 09:36:54.471021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.635 qpair failed and we were unable to recover it. 00:36:49.635 [2024-11-17 09:36:54.471148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.635 [2024-11-17 09:36:54.471182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.635 qpair failed and we were unable to recover it. 00:36:49.635 [2024-11-17 09:36:54.471313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.635 [2024-11-17 09:36:54.471348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.635 qpair failed and we were unable to recover it. 00:36:49.635 [2024-11-17 09:36:54.471531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.635 [2024-11-17 09:36:54.471584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.635 qpair failed and we were unable to recover it. 00:36:49.635 [2024-11-17 09:36:54.471748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.635 [2024-11-17 09:36:54.471788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.635 qpair failed and we were unable to recover it. 00:36:49.635 [2024-11-17 09:36:54.471964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.635 [2024-11-17 09:36:54.472002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.635 qpair failed and we were unable to recover it. 00:36:49.635 [2024-11-17 09:36:54.472178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.635 [2024-11-17 09:36:54.472234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.636 qpair failed and we were unable to recover it. 00:36:49.636 [2024-11-17 09:36:54.472346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.636 [2024-11-17 09:36:54.472403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.636 qpair failed and we were unable to recover it. 00:36:49.636 [2024-11-17 09:36:54.472582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.636 [2024-11-17 09:36:54.472620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.636 qpair failed and we were unable to recover it. 00:36:49.636 [2024-11-17 09:36:54.472730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.636 [2024-11-17 09:36:54.472767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.636 qpair failed and we were unable to recover it. 
00:36:49.636 [2024-11-17 09:36:54.472917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.636 [2024-11-17 09:36:54.472962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.636 qpair failed and we were unable to recover it. 00:36:49.636 [2024-11-17 09:36:54.473157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.636 [2024-11-17 09:36:54.473213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.636 qpair failed and we were unable to recover it. 00:36:49.636 [2024-11-17 09:36:54.473358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.636 [2024-11-17 09:36:54.473402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.636 qpair failed and we were unable to recover it. 00:36:49.636 [2024-11-17 09:36:54.473557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.636 [2024-11-17 09:36:54.473611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.636 qpair failed and we were unable to recover it. 00:36:49.636 [2024-11-17 09:36:54.473760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.636 [2024-11-17 09:36:54.473798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.636 qpair failed and we were unable to recover it. 00:36:49.636 [2024-11-17 09:36:54.474030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.636 [2024-11-17 09:36:54.474085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.636 qpair failed and we were unable to recover it. 00:36:49.636 [2024-11-17 09:36:54.474228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.636 [2024-11-17 09:36:54.474263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.636 qpair failed and we were unable to recover it. 00:36:49.636 [2024-11-17 09:36:54.474479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.636 [2024-11-17 09:36:54.474514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.636 qpair failed and we were unable to recover it. 00:36:49.636 [2024-11-17 09:36:54.474646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.636 [2024-11-17 09:36:54.474681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.636 qpair failed and we were unable to recover it. 00:36:49.636 [2024-11-17 09:36:54.474822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.636 [2024-11-17 09:36:54.474857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.636 qpair failed and we were unable to recover it. 
00:36:49.636 [2024-11-17 09:36:54.475028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.636 [2024-11-17 09:36:54.475063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.636 qpair failed and we were unable to recover it. 00:36:49.636 [2024-11-17 09:36:54.475201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.636 [2024-11-17 09:36:54.475236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.636 qpair failed and we were unable to recover it. 00:36:49.636 [2024-11-17 09:36:54.475348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.636 [2024-11-17 09:36:54.475392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.636 qpair failed and we were unable to recover it. 00:36:49.636 [2024-11-17 09:36:54.475556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.636 [2024-11-17 09:36:54.475591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.636 qpair failed and we were unable to recover it. 00:36:49.636 [2024-11-17 09:36:54.475711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.636 [2024-11-17 09:36:54.475760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.636 qpair failed and we were unable to recover it. 00:36:49.636 [2024-11-17 09:36:54.475878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.636 [2024-11-17 09:36:54.475915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.636 qpair failed and we were unable to recover it. 00:36:49.636 [2024-11-17 09:36:54.476084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.636 [2024-11-17 09:36:54.476119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.636 qpair failed and we were unable to recover it. 00:36:49.636 [2024-11-17 09:36:54.476248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.636 [2024-11-17 09:36:54.476283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.636 qpair failed and we were unable to recover it. 00:36:49.636 [2024-11-17 09:36:54.476419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.636 [2024-11-17 09:36:54.476454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.636 qpair failed and we were unable to recover it. 00:36:49.636 [2024-11-17 09:36:54.476565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.636 [2024-11-17 09:36:54.476617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.636 qpair failed and we were unable to recover it. 
00:36:49.636 [2024-11-17 09:36:54.476762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.636 [2024-11-17 09:36:54.476802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.636 qpair failed and we were unable to recover it. 00:36:49.636 [2024-11-17 09:36:54.476957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.636 [2024-11-17 09:36:54.476996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.636 qpair failed and we were unable to recover it. 00:36:49.636 [2024-11-17 09:36:54.477104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.636 [2024-11-17 09:36:54.477142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.636 qpair failed and we were unable to recover it. 00:36:49.637 [2024-11-17 09:36:54.477263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.637 [2024-11-17 09:36:54.477297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.637 qpair failed and we were unable to recover it. 00:36:49.637 [2024-11-17 09:36:54.477480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.637 [2024-11-17 09:36:54.477518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.637 qpair failed and we were unable to recover it. 00:36:49.637 [2024-11-17 09:36:54.477687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.637 [2024-11-17 09:36:54.477726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.637 qpair failed and we were unable to recover it. 00:36:49.637 [2024-11-17 09:36:54.477907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.637 [2024-11-17 09:36:54.477959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.637 qpair failed and we were unable to recover it. 00:36:49.637 [2024-11-17 09:36:54.478094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.637 [2024-11-17 09:36:54.478129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.637 qpair failed and we were unable to recover it. 00:36:49.637 [2024-11-17 09:36:54.478270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.637 [2024-11-17 09:36:54.478306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.637 qpair failed and we were unable to recover it. 00:36:49.637 [2024-11-17 09:36:54.478434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.637 [2024-11-17 09:36:54.478470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.637 qpair failed and we were unable to recover it. 
00:36:49.637 [2024-11-17 09:36:54.478583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.637 [2024-11-17 09:36:54.478618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.637 qpair failed and we were unable to recover it. 00:36:49.637 [2024-11-17 09:36:54.478763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.637 [2024-11-17 09:36:54.478801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.637 qpair failed and we were unable to recover it. 00:36:49.637 [2024-11-17 09:36:54.478938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.637 [2024-11-17 09:36:54.478976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.637 qpair failed and we were unable to recover it. 00:36:49.637 [2024-11-17 09:36:54.479127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.637 [2024-11-17 09:36:54.479165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.637 qpair failed and we were unable to recover it. 00:36:49.637 [2024-11-17 09:36:54.479282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.637 [2024-11-17 09:36:54.479334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.637 qpair failed and we were unable to recover it. 00:36:49.637 [2024-11-17 09:36:54.479487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.637 [2024-11-17 09:36:54.479522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.637 qpair failed and we were unable to recover it. 00:36:49.637 [2024-11-17 09:36:54.479703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.637 [2024-11-17 09:36:54.479741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.637 qpair failed and we were unable to recover it. 00:36:49.637 [2024-11-17 09:36:54.479884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.637 [2024-11-17 09:36:54.479922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.637 qpair failed and we were unable to recover it. 00:36:49.637 [2024-11-17 09:36:54.480065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.637 [2024-11-17 09:36:54.480103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.637 qpair failed and we were unable to recover it. 00:36:49.637 [2024-11-17 09:36:54.480268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.637 [2024-11-17 09:36:54.480303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.637 qpair failed and we were unable to recover it. 
00:36:49.637 [2024-11-17 09:36:54.480456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.637 [2024-11-17 09:36:54.480498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.637 qpair failed and we were unable to recover it. 00:36:49.637 [2024-11-17 09:36:54.480643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.637 [2024-11-17 09:36:54.480698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.637 qpair failed and we were unable to recover it. 00:36:49.637 [2024-11-17 09:36:54.480877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.637 [2024-11-17 09:36:54.480918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.637 qpair failed and we were unable to recover it. 00:36:49.637 [2024-11-17 09:36:54.481041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.637 [2024-11-17 09:36:54.481080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.637 qpair failed and we were unable to recover it. 00:36:49.637 [2024-11-17 09:36:54.481186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.637 [2024-11-17 09:36:54.481224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.637 qpair failed and we were unable to recover it. 00:36:49.637 [2024-11-17 09:36:54.481411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.637 [2024-11-17 09:36:54.481446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.637 qpair failed and we were unable to recover it. 00:36:49.637 [2024-11-17 09:36:54.481587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.637 [2024-11-17 09:36:54.481623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.637 qpair failed and we were unable to recover it. 00:36:49.637 [2024-11-17 09:36:54.481786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.637 [2024-11-17 09:36:54.481824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.637 qpair failed and we were unable to recover it. 00:36:49.637 [2024-11-17 09:36:54.481987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.637 [2024-11-17 09:36:54.482021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.637 qpair failed and we were unable to recover it. 00:36:49.637 [2024-11-17 09:36:54.482223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.638 [2024-11-17 09:36:54.482261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.638 qpair failed and we were unable to recover it. 
00:36:49.638 [2024-11-17 09:36:54.482395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.638 [2024-11-17 09:36:54.482430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.638 qpair failed and we were unable to recover it. 00:36:49.638 [2024-11-17 09:36:54.482577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.638 [2024-11-17 09:36:54.482612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.638 qpair failed and we were unable to recover it. 00:36:49.638 [2024-11-17 09:36:54.482850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.638 [2024-11-17 09:36:54.482907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.638 qpair failed and we were unable to recover it. 00:36:49.638 [2024-11-17 09:36:54.483110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.638 [2024-11-17 09:36:54.483147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.638 qpair failed and we were unable to recover it. 00:36:49.638 [2024-11-17 09:36:54.483294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.638 [2024-11-17 09:36:54.483332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.638 qpair failed and we were unable to recover it. 00:36:49.638 [2024-11-17 09:36:54.483504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.638 [2024-11-17 09:36:54.483538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.638 qpair failed and we were unable to recover it. 00:36:49.638 [2024-11-17 09:36:54.483685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.638 [2024-11-17 09:36:54.483723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.638 qpair failed and we were unable to recover it. 00:36:49.638 [2024-11-17 09:36:54.483852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.638 [2024-11-17 09:36:54.483886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.638 qpair failed and we were unable to recover it. 00:36:49.638 [2024-11-17 09:36:54.484035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.638 [2024-11-17 09:36:54.484069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.638 qpair failed and we were unable to recover it. 00:36:49.638 [2024-11-17 09:36:54.484243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.638 [2024-11-17 09:36:54.484296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.638 qpair failed and we were unable to recover it. 
00:36:49.638 [2024-11-17 09:36:54.484402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.638 [2024-11-17 09:36:54.484436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.638 qpair failed and we were unable to recover it. 00:36:49.638 [2024-11-17 09:36:54.484547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.638 [2024-11-17 09:36:54.484581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.638 qpair failed and we were unable to recover it. 00:36:49.638 [2024-11-17 09:36:54.484706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.638 [2024-11-17 09:36:54.484755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.638 qpair failed and we were unable to recover it. 00:36:49.638 [2024-11-17 09:36:54.484970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.638 [2024-11-17 09:36:54.485011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.638 qpair failed and we were unable to recover it. 00:36:49.638 [2024-11-17 09:36:54.485237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.638 [2024-11-17 09:36:54.485276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.638 qpair failed and we were unable to recover it. 00:36:49.638 [2024-11-17 09:36:54.485445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.638 [2024-11-17 09:36:54.485480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.638 qpair failed and we were unable to recover it. 00:36:49.638 [2024-11-17 09:36:54.485616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.638 [2024-11-17 09:36:54.485651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.638 qpair failed and we were unable to recover it. 00:36:49.638 [2024-11-17 09:36:54.485905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.638 [2024-11-17 09:36:54.485939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.638 qpair failed and we were unable to recover it. 00:36:49.638 [2024-11-17 09:36:54.486074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.638 [2024-11-17 09:36:54.486112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.638 qpair failed and we were unable to recover it. 00:36:49.638 [2024-11-17 09:36:54.486310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.638 [2024-11-17 09:36:54.486347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.638 qpair failed and we were unable to recover it. 
00:36:49.638 [2024-11-17 09:36:54.486520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.638 [2024-11-17 09:36:54.486556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.638 qpair failed and we were unable to recover it. 00:36:49.638 [2024-11-17 09:36:54.486680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.638 [2024-11-17 09:36:54.486718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.638 qpair failed and we were unable to recover it. 00:36:49.638 [2024-11-17 09:36:54.486978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.638 [2024-11-17 09:36:54.487047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.638 qpair failed and we were unable to recover it. 00:36:49.638 [2024-11-17 09:36:54.487280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.638 [2024-11-17 09:36:54.487313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.638 qpair failed and we were unable to recover it. 00:36:49.638 [2024-11-17 09:36:54.487481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.638 [2024-11-17 09:36:54.487516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.638 qpair failed and we were unable to recover it. 00:36:49.638 [2024-11-17 09:36:54.487633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.638 [2024-11-17 09:36:54.487686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.638 qpair failed and we were unable to recover it. 00:36:49.639 [2024-11-17 09:36:54.487857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.639 [2024-11-17 09:36:54.487894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.639 qpair failed and we were unable to recover it. 00:36:49.639 [2024-11-17 09:36:54.488029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.639 [2024-11-17 09:36:54.488081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.639 qpair failed and we were unable to recover it. 00:36:49.639 [2024-11-17 09:36:54.488233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.639 [2024-11-17 09:36:54.488271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.639 qpair failed and we were unable to recover it. 00:36:49.639 [2024-11-17 09:36:54.488436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.639 [2024-11-17 09:36:54.488470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.639 qpair failed and we were unable to recover it. 
00:36:49.639 [2024-11-17 09:36:54.488587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.639 [2024-11-17 09:36:54.488626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.639 qpair failed and we were unable to recover it. 00:36:49.639 [2024-11-17 09:36:54.488784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.639 [2024-11-17 09:36:54.488818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.639 qpair failed and we were unable to recover it. 00:36:49.639 [2024-11-17 09:36:54.488924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.639 [2024-11-17 09:36:54.488958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.639 qpair failed and we were unable to recover it. 00:36:49.639 [2024-11-17 09:36:54.489151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.639 [2024-11-17 09:36:54.489203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.639 qpair failed and we were unable to recover it. 00:36:49.639 [2024-11-17 09:36:54.489355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.639 [2024-11-17 09:36:54.489416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.639 qpair failed and we were unable to recover it. 00:36:49.639 [2024-11-17 09:36:54.489556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.639 [2024-11-17 09:36:54.489590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.639 qpair failed and we were unable to recover it. 00:36:49.639 [2024-11-17 09:36:54.489726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.639 [2024-11-17 09:36:54.489759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.639 qpair failed and we were unable to recover it. 00:36:49.639 [2024-11-17 09:36:54.489900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.639 [2024-11-17 09:36:54.489934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.639 qpair failed and we were unable to recover it. 00:36:49.639 [2024-11-17 09:36:54.490093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.639 [2024-11-17 09:36:54.490130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.639 qpair failed and we were unable to recover it. 00:36:49.639 [2024-11-17 09:36:54.490331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.639 [2024-11-17 09:36:54.490378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.639 qpair failed and we were unable to recover it. 
00:36:49.639 [2024-11-17 09:36:54.490506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.639 [2024-11-17 09:36:54.490540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.639 qpair failed and we were unable to recover it. 00:36:49.639 [2024-11-17 09:36:54.490646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.639 [2024-11-17 09:36:54.490680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.639 qpair failed and we were unable to recover it. 00:36:49.639 [2024-11-17 09:36:54.490844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.639 [2024-11-17 09:36:54.490878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.639 qpair failed and we were unable to recover it. 00:36:49.639 [2024-11-17 09:36:54.491071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.639 [2024-11-17 09:36:54.491108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.639 qpair failed and we were unable to recover it. 00:36:49.639 [2024-11-17 09:36:54.491258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.639 [2024-11-17 09:36:54.491295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.639 qpair failed and we were unable to recover it. 00:36:49.639 [2024-11-17 09:36:54.491443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.639 [2024-11-17 09:36:54.491481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.639 qpair failed and we were unable to recover it. 00:36:49.639 [2024-11-17 09:36:54.491641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.639 [2024-11-17 09:36:54.491691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.639 qpair failed and we were unable to recover it. 00:36:49.639 [2024-11-17 09:36:54.491868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.639 [2024-11-17 09:36:54.491937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.640 qpair failed and we were unable to recover it. 00:36:49.640 [2024-11-17 09:36:54.492114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.640 [2024-11-17 09:36:54.492169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.640 qpair failed and we were unable to recover it. 00:36:49.640 [2024-11-17 09:36:54.492312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.640 [2024-11-17 09:36:54.492347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.640 qpair failed and we were unable to recover it. 
00:36:49.640 [2024-11-17 09:36:54.492464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.640 [2024-11-17 09:36:54.492499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.640 qpair failed and we were unable to recover it. 00:36:49.640 [2024-11-17 09:36:54.492644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.640 [2024-11-17 09:36:54.492696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.640 qpair failed and we were unable to recover it. 00:36:49.640 [2024-11-17 09:36:54.492837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.640 [2024-11-17 09:36:54.492875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.640 qpair failed and we were unable to recover it. 00:36:49.640 [2024-11-17 09:36:54.492978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.640 [2024-11-17 09:36:54.493015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.640 qpair failed and we were unable to recover it. 00:36:49.640 [2024-11-17 09:36:54.493126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.640 [2024-11-17 09:36:54.493163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.640 qpair failed and we were unable to recover it. 00:36:49.640 [2024-11-17 09:36:54.493365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.640 [2024-11-17 09:36:54.493405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.640 qpair failed and we were unable to recover it. 00:36:49.640 [2024-11-17 09:36:54.493565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.640 [2024-11-17 09:36:54.493599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.640 qpair failed and we were unable to recover it. 00:36:49.640 [2024-11-17 09:36:54.493766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.640 [2024-11-17 09:36:54.493803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.640 qpair failed and we were unable to recover it. 00:36:49.640 [2024-11-17 09:36:54.493943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.640 [2024-11-17 09:36:54.493981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.640 qpair failed and we were unable to recover it. 00:36:49.640 [2024-11-17 09:36:54.494118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.640 [2024-11-17 09:36:54.494155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.640 qpair failed and we were unable to recover it. 
00:36:49.640 [2024-11-17 09:36:54.494285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.640 [2024-11-17 09:36:54.494318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.640 qpair failed and we were unable to recover it. 00:36:49.640 [2024-11-17 09:36:54.494459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.640 [2024-11-17 09:36:54.494493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.640 qpair failed and we were unable to recover it. 00:36:49.640 [2024-11-17 09:36:54.494617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.640 [2024-11-17 09:36:54.494655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.640 qpair failed and we were unable to recover it. 00:36:49.640 [2024-11-17 09:36:54.494806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.640 [2024-11-17 09:36:54.494843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.640 qpair failed and we were unable to recover it. 00:36:49.640 [2024-11-17 09:36:54.494982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.640 [2024-11-17 09:36:54.495020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.640 qpair failed and we were unable to recover it. 00:36:49.640 [2024-11-17 09:36:54.495171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.640 [2024-11-17 09:36:54.495210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.640 qpair failed and we were unable to recover it. 00:36:49.640 [2024-11-17 09:36:54.495379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.640 [2024-11-17 09:36:54.495413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.640 qpair failed and we were unable to recover it. 00:36:49.640 [2024-11-17 09:36:54.495546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.640 [2024-11-17 09:36:54.495580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.640 qpair failed and we were unable to recover it. 00:36:49.640 [2024-11-17 09:36:54.495736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.640 [2024-11-17 09:36:54.495784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.640 qpair failed and we were unable to recover it. 00:36:49.640 [2024-11-17 09:36:54.495955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.640 [2024-11-17 09:36:54.495996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.640 qpair failed and we were unable to recover it. 
00:36:49.640 [2024-11-17 09:36:54.496177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.640 [2024-11-17 09:36:54.496221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.640 qpair failed and we were unable to recover it. 00:36:49.640 [2024-11-17 09:36:54.496380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.640 [2024-11-17 09:36:54.496435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.640 qpair failed and we were unable to recover it. 00:36:49.640 [2024-11-17 09:36:54.496582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.640 [2024-11-17 09:36:54.496617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.640 qpair failed and we were unable to recover it. 00:36:49.640 [2024-11-17 09:36:54.496768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.640 [2024-11-17 09:36:54.496806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.641 qpair failed and we were unable to recover it. 00:36:49.641 [2024-11-17 09:36:54.496975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.641 [2024-11-17 09:36:54.497012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.641 qpair failed and we were unable to recover it. 00:36:49.641 [2024-11-17 09:36:54.497135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.641 [2024-11-17 09:36:54.497173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.641 qpair failed and we were unable to recover it. 00:36:49.641 [2024-11-17 09:36:54.497310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.641 [2024-11-17 09:36:54.497346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.641 qpair failed and we were unable to recover it. 00:36:49.641 [2024-11-17 09:36:54.497514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.641 [2024-11-17 09:36:54.497548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.641 qpair failed and we were unable to recover it. 00:36:49.641 [2024-11-17 09:36:54.497698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.641 [2024-11-17 09:36:54.497747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.641 qpair failed and we were unable to recover it. 00:36:49.641 [2024-11-17 09:36:54.497942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.641 [2024-11-17 09:36:54.497997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.641 qpair failed and we were unable to recover it. 
00:36:49.641 [2024-11-17 09:36:54.498157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.641 [2024-11-17 09:36:54.498210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.641 qpair failed and we were unable to recover it. 00:36:49.641 [2024-11-17 09:36:54.498380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.641 [2024-11-17 09:36:54.498416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.641 qpair failed and we were unable to recover it. 00:36:49.641 [2024-11-17 09:36:54.498632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.641 [2024-11-17 09:36:54.498684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.641 qpair failed and we were unable to recover it. 00:36:49.641 [2024-11-17 09:36:54.498837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.641 [2024-11-17 09:36:54.498876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.641 qpair failed and we were unable to recover it. 00:36:49.641 [2024-11-17 09:36:54.499118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.641 [2024-11-17 09:36:54.499154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.641 qpair failed and we were unable to recover it. 00:36:49.641 [2024-11-17 09:36:54.499319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.641 [2024-11-17 09:36:54.499354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.641 qpair failed and we were unable to recover it. 00:36:49.641 [2024-11-17 09:36:54.499533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.641 [2024-11-17 09:36:54.499585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.641 qpair failed and we were unable to recover it. 00:36:49.641 [2024-11-17 09:36:54.499729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.641 [2024-11-17 09:36:54.499780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.641 qpair failed and we were unable to recover it. 00:36:49.641 [2024-11-17 09:36:54.499924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.641 [2024-11-17 09:36:54.499959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.641 qpair failed and we were unable to recover it. 00:36:49.641 [2024-11-17 09:36:54.500095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.641 [2024-11-17 09:36:54.500129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.641 qpair failed and we were unable to recover it. 
00:36:49.641 [2024-11-17 09:36:54.500234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.641 [2024-11-17 09:36:54.500268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.641 qpair failed and we were unable to recover it. 00:36:49.641 [2024-11-17 09:36:54.500405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.641 [2024-11-17 09:36:54.500441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.641 qpair failed and we were unable to recover it. 00:36:49.641 [2024-11-17 09:36:54.500578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.641 [2024-11-17 09:36:54.500614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.641 qpair failed and we were unable to recover it. 00:36:49.641 [2024-11-17 09:36:54.500778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.641 [2024-11-17 09:36:54.500812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.641 qpair failed and we were unable to recover it. 00:36:49.641 [2024-11-17 09:36:54.500946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.641 [2024-11-17 09:36:54.500980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.641 qpair failed and we were unable to recover it. 00:36:49.641 [2024-11-17 09:36:54.501081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.641 [2024-11-17 09:36:54.501115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.641 qpair failed and we were unable to recover it. 00:36:49.641 [2024-11-17 09:36:54.501252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.641 [2024-11-17 09:36:54.501286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.641 qpair failed and we were unable to recover it. 00:36:49.641 [2024-11-17 09:36:54.501433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.641 [2024-11-17 09:36:54.501468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.641 qpair failed and we were unable to recover it. 00:36:49.641 [2024-11-17 09:36:54.501576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.641 [2024-11-17 09:36:54.501610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.641 qpair failed and we were unable to recover it. 00:36:49.641 [2024-11-17 09:36:54.501769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.641 [2024-11-17 09:36:54.501808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.641 qpair failed and we were unable to recover it. 
00:36:49.642 [2024-11-17 09:36:54.501940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.642 [2024-11-17 09:36:54.501992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.642 qpair failed and we were unable to recover it. 00:36:49.642 [2024-11-17 09:36:54.502111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.642 [2024-11-17 09:36:54.502149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.642 qpair failed and we were unable to recover it. 00:36:49.642 [2024-11-17 09:36:54.502281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.642 [2024-11-17 09:36:54.502334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.642 qpair failed and we were unable to recover it. 00:36:49.642 [2024-11-17 09:36:54.502538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.642 [2024-11-17 09:36:54.502574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.642 qpair failed and we were unable to recover it. 00:36:49.642 [2024-11-17 09:36:54.502683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.642 [2024-11-17 09:36:54.502734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.642 qpair failed and we were unable to recover it. 00:36:49.642 [2024-11-17 09:36:54.502845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.642 [2024-11-17 09:36:54.502883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.642 qpair failed and we were unable to recover it. 00:36:49.642 [2024-11-17 09:36:54.503030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.642 [2024-11-17 09:36:54.503068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.642 qpair failed and we were unable to recover it. 00:36:49.642 [2024-11-17 09:36:54.503197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.642 [2024-11-17 09:36:54.503235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.642 qpair failed and we were unable to recover it. 00:36:49.642 [2024-11-17 09:36:54.503374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.642 [2024-11-17 09:36:54.503409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.642 qpair failed and we were unable to recover it. 00:36:49.642 [2024-11-17 09:36:54.503555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.642 [2024-11-17 09:36:54.503590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.642 qpair failed and we were unable to recover it. 
00:36:49.642 [2024-11-17 09:36:54.503775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.642 [2024-11-17 09:36:54.503819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.642 qpair failed and we were unable to recover it. 00:36:49.642 [2024-11-17 09:36:54.503961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.642 [2024-11-17 09:36:54.503999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.642 qpair failed and we were unable to recover it. 00:36:49.642 [2024-11-17 09:36:54.504142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.642 [2024-11-17 09:36:54.504194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.642 qpair failed and we were unable to recover it. 00:36:49.642 [2024-11-17 09:36:54.504340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.642 [2024-11-17 09:36:54.504387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.642 qpair failed and we were unable to recover it. 00:36:49.642 [2024-11-17 09:36:54.504550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.642 [2024-11-17 09:36:54.504586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.642 qpair failed and we were unable to recover it. 00:36:49.642 [2024-11-17 09:36:54.504719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.642 [2024-11-17 09:36:54.504769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.642 qpair failed and we were unable to recover it. 00:36:49.642 [2024-11-17 09:36:54.504910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.642 [2024-11-17 09:36:54.504953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.642 qpair failed and we were unable to recover it. 00:36:49.642 [2024-11-17 09:36:54.505111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.642 [2024-11-17 09:36:54.505151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.642 qpair failed and we were unable to recover it. 00:36:49.642 [2024-11-17 09:36:54.505267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.642 [2024-11-17 09:36:54.505305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.642 qpair failed and we were unable to recover it. 00:36:49.642 [2024-11-17 09:36:54.505467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.642 [2024-11-17 09:36:54.505503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.642 qpair failed and we were unable to recover it. 
00:36:49.642 [2024-11-17 09:36:54.505684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.642 [2024-11-17 09:36:54.505752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.642 qpair failed and we were unable to recover it. 00:36:49.642 [2024-11-17 09:36:54.505944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.642 [2024-11-17 09:36:54.505999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.642 qpair failed and we were unable to recover it. 00:36:49.642 [2024-11-17 09:36:54.506160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.642 [2024-11-17 09:36:54.506236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.642 qpair failed and we were unable to recover it. 00:36:49.642 [2024-11-17 09:36:54.506432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.642 [2024-11-17 09:36:54.506467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.642 qpair failed and we were unable to recover it. 00:36:49.642 [2024-11-17 09:36:54.506611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.642 [2024-11-17 09:36:54.506667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.642 qpair failed and we were unable to recover it. 00:36:49.642 [2024-11-17 09:36:54.506838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.642 [2024-11-17 09:36:54.506875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.642 qpair failed and we were unable to recover it. 00:36:49.643 [2024-11-17 09:36:54.507032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.643 [2024-11-17 09:36:54.507092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.643 qpair failed and we were unable to recover it. 00:36:49.643 [2024-11-17 09:36:54.507205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.643 [2024-11-17 09:36:54.507242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.643 qpair failed and we were unable to recover it. 00:36:49.643 [2024-11-17 09:36:54.507388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.643 [2024-11-17 09:36:54.507442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.643 qpair failed and we were unable to recover it. 00:36:49.643 [2024-11-17 09:36:54.507582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.643 [2024-11-17 09:36:54.507616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.643 qpair failed and we were unable to recover it. 
00:36:49.643 [2024-11-17 09:36:54.507806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.643 [2024-11-17 09:36:54.507844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.643 qpair failed and we were unable to recover it. 00:36:49.643 [2024-11-17 09:36:54.507982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.643 [2024-11-17 09:36:54.508019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.643 qpair failed and we were unable to recover it. 00:36:49.643 [2024-11-17 09:36:54.508168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.643 [2024-11-17 09:36:54.508206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.643 qpair failed and we were unable to recover it. 00:36:49.643 [2024-11-17 09:36:54.508319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.643 [2024-11-17 09:36:54.508357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.643 qpair failed and we were unable to recover it. 00:36:49.643 [2024-11-17 09:36:54.508540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.643 [2024-11-17 09:36:54.508589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.643 qpair failed and we were unable to recover it. 00:36:49.643 [2024-11-17 09:36:54.508808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.643 [2024-11-17 09:36:54.508863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.643 qpair failed and we were unable to recover it. 00:36:49.643 [2024-11-17 09:36:54.509007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.643 [2024-11-17 09:36:54.509048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.643 qpair failed and we were unable to recover it. 00:36:49.643 [2024-11-17 09:36:54.509125] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2780 (9): Bad file descriptor 00:36:49.643 [2024-11-17 09:36:54.509306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.643 [2024-11-17 09:36:54.509344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.643 qpair failed and we were unable to recover it. 00:36:49.643 [2024-11-17 09:36:54.509479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.643 [2024-11-17 09:36:54.509514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.643 qpair failed and we were unable to recover it. 
00:36:49.643 [2024-11-17 09:36:54.509666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.643 [2024-11-17 09:36:54.509734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.643 qpair failed and we were unable to recover it. 00:36:49.643 [2024-11-17 09:36:54.509887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.643 [2024-11-17 09:36:54.509940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.643 qpair failed and we were unable to recover it. 00:36:49.643 [2024-11-17 09:36:54.510085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.643 [2024-11-17 09:36:54.510124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.643 qpair failed and we were unable to recover it. 00:36:49.643 [2024-11-17 09:36:54.510270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.643 [2024-11-17 09:36:54.510307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.643 qpair failed and we were unable to recover it. 00:36:49.643 [2024-11-17 09:36:54.510487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.643 [2024-11-17 09:36:54.510537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.643 qpair failed and we were unable to recover it. 00:36:49.643 [2024-11-17 09:36:54.510709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.643 [2024-11-17 09:36:54.510767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.643 qpair failed and we were unable to recover it. 00:36:49.643 [2024-11-17 09:36:54.510934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.643 [2024-11-17 09:36:54.510988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.643 qpair failed and we were unable to recover it. 00:36:49.643 [2024-11-17 09:36:54.511130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.643 [2024-11-17 09:36:54.511165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.643 qpair failed and we were unable to recover it. 00:36:49.643 [2024-11-17 09:36:54.511322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.643 [2024-11-17 09:36:54.511357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.643 qpair failed and we were unable to recover it. 00:36:49.643 [2024-11-17 09:36:54.511500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.643 [2024-11-17 09:36:54.511534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.643 qpair failed and we were unable to recover it. 
00:36:49.643 [2024-11-17 09:36:54.511699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.643 [2024-11-17 09:36:54.511735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.643 qpair failed and we were unable to recover it. 00:36:49.643 [2024-11-17 09:36:54.511880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.643 [2024-11-17 09:36:54.511915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.643 qpair failed and we were unable to recover it. 00:36:49.643 [2024-11-17 09:36:54.512095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.644 [2024-11-17 09:36:54.512144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.644 qpair failed and we were unable to recover it. 00:36:49.644 [2024-11-17 09:36:54.512282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.644 [2024-11-17 09:36:54.512319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.644 qpair failed and we were unable to recover it. 00:36:49.644 [2024-11-17 09:36:54.512464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.644 [2024-11-17 09:36:54.512514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.644 qpair failed and we were unable to recover it. 00:36:49.644 [2024-11-17 09:36:54.512645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.644 [2024-11-17 09:36:54.512683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.644 qpair failed and we were unable to recover it. 00:36:49.644 [2024-11-17 09:36:54.512915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.644 [2024-11-17 09:36:54.512975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.644 qpair failed and we were unable to recover it. 00:36:49.644 [2024-11-17 09:36:54.513182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.644 [2024-11-17 09:36:54.513221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.644 qpair failed and we were unable to recover it. 00:36:49.644 [2024-11-17 09:36:54.513381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.644 [2024-11-17 09:36:54.513434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.644 qpair failed and we were unable to recover it. 00:36:49.644 [2024-11-17 09:36:54.513574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.644 [2024-11-17 09:36:54.513615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.644 qpair failed and we were unable to recover it. 
00:36:49.644 [2024-11-17 09:36:54.513816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.644 [2024-11-17 09:36:54.513867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.644 qpair failed and we were unable to recover it. 00:36:49.644 [2024-11-17 09:36:54.514022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.644 [2024-11-17 09:36:54.514074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.644 qpair failed and we were unable to recover it. 00:36:49.644 [2024-11-17 09:36:54.514213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.644 [2024-11-17 09:36:54.514247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.644 qpair failed and we were unable to recover it. 00:36:49.644 [2024-11-17 09:36:54.514391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.644 [2024-11-17 09:36:54.514427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.644 qpair failed and we were unable to recover it. 00:36:49.644 [2024-11-17 09:36:54.514574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.644 [2024-11-17 09:36:54.514633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.644 qpair failed and we were unable to recover it. 00:36:49.644 [2024-11-17 09:36:54.514859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.644 [2024-11-17 09:36:54.514920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.644 qpair failed and we were unable to recover it. 00:36:49.644 [2024-11-17 09:36:54.515150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.644 [2024-11-17 09:36:54.515187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.644 qpair failed and we were unable to recover it. 00:36:49.644 [2024-11-17 09:36:54.515324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.644 [2024-11-17 09:36:54.515361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.644 qpair failed and we were unable to recover it. 00:36:49.644 [2024-11-17 09:36:54.515548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.644 [2024-11-17 09:36:54.515587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.644 qpair failed and we were unable to recover it. 00:36:49.644 [2024-11-17 09:36:54.515733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.644 [2024-11-17 09:36:54.515770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.644 qpair failed and we were unable to recover it. 
00:36:49.644 [2024-11-17 09:36:54.515881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.644 [2024-11-17 09:36:54.515919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.644 qpair failed and we were unable to recover it. 00:36:49.644 [2024-11-17 09:36:54.516064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.644 [2024-11-17 09:36:54.516103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.644 qpair failed and we were unable to recover it. 00:36:49.644 [2024-11-17 09:36:54.516245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.644 [2024-11-17 09:36:54.516311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.644 qpair failed and we were unable to recover it. 00:36:49.644 [2024-11-17 09:36:54.516444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.644 [2024-11-17 09:36:54.516493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.644 qpair failed and we were unable to recover it. 00:36:49.644 [2024-11-17 09:36:54.516650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.644 [2024-11-17 09:36:54.516687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.644 qpair failed and we were unable to recover it. 00:36:49.644 [2024-11-17 09:36:54.516907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.644 [2024-11-17 09:36:54.516958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.644 qpair failed and we were unable to recover it. 00:36:49.644 [2024-11-17 09:36:54.517179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.644 [2024-11-17 09:36:54.517232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.644 qpair failed and we were unable to recover it. 00:36:49.644 [2024-11-17 09:36:54.517397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.644 [2024-11-17 09:36:54.517442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.644 qpair failed and we were unable to recover it. 00:36:49.644 [2024-11-17 09:36:54.517607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.645 [2024-11-17 09:36:54.517660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.645 qpair failed and we were unable to recover it. 00:36:49.645 [2024-11-17 09:36:54.517818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.645 [2024-11-17 09:36:54.517874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.645 qpair failed and we were unable to recover it. 
00:36:49.645 [2024-11-17 09:36:54.518039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.645 [2024-11-17 09:36:54.518092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.645 qpair failed and we were unable to recover it. 00:36:49.645 [2024-11-17 09:36:54.518230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.645 [2024-11-17 09:36:54.518266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.645 qpair failed and we were unable to recover it. 00:36:49.645 [2024-11-17 09:36:54.518403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.645 [2024-11-17 09:36:54.518438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.645 qpair failed and we were unable to recover it. 00:36:49.645 [2024-11-17 09:36:54.518616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.645 [2024-11-17 09:36:54.518666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.645 qpair failed and we were unable to recover it. 00:36:49.645 [2024-11-17 09:36:54.518786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.645 [2024-11-17 09:36:54.518822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.645 qpair failed and we were unable to recover it. 00:36:49.645 [2024-11-17 09:36:54.518956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.645 [2024-11-17 09:36:54.518991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.645 qpair failed and we were unable to recover it. 00:36:49.645 [2024-11-17 09:36:54.519110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.645 [2024-11-17 09:36:54.519145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.645 qpair failed and we were unable to recover it. 00:36:49.645 [2024-11-17 09:36:54.519283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.645 [2024-11-17 09:36:54.519317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.645 qpair failed and we were unable to recover it. 00:36:49.645 [2024-11-17 09:36:54.519487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.645 [2024-11-17 09:36:54.519522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.645 qpair failed and we were unable to recover it. 00:36:49.645 [2024-11-17 09:36:54.519644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.645 [2024-11-17 09:36:54.519700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.645 qpair failed and we were unable to recover it. 
00:36:49.645 [2024-11-17 09:36:54.519843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.645 [2024-11-17 09:36:54.519878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.645 qpair failed and we were unable to recover it. 00:36:49.645 [2024-11-17 09:36:54.520025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.645 [2024-11-17 09:36:54.520061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.645 qpair failed and we were unable to recover it. 00:36:49.645 [2024-11-17 09:36:54.520204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.645 [2024-11-17 09:36:54.520240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.645 qpair failed and we were unable to recover it. 00:36:49.645 [2024-11-17 09:36:54.520379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.645 [2024-11-17 09:36:54.520422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.645 qpair failed and we were unable to recover it. 00:36:49.645 [2024-11-17 09:36:54.520532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.645 [2024-11-17 09:36:54.520566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.645 qpair failed and we were unable to recover it. 00:36:49.645 [2024-11-17 09:36:54.520744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.645 [2024-11-17 09:36:54.520797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.645 qpair failed and we were unable to recover it. 00:36:49.645 [2024-11-17 09:36:54.520912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.645 [2024-11-17 09:36:54.520950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.645 qpair failed and we were unable to recover it. 00:36:49.645 [2024-11-17 09:36:54.521187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.646 [2024-11-17 09:36:54.521266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.646 qpair failed and we were unable to recover it. 00:36:49.646 [2024-11-17 09:36:54.521482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.646 [2024-11-17 09:36:54.521523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.646 qpair failed and we were unable to recover it. 00:36:49.646 [2024-11-17 09:36:54.521722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.646 [2024-11-17 09:36:54.521777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.646 qpair failed and we were unable to recover it. 
00:36:49.646 [2024-11-17 09:36:54.521898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.646 [2024-11-17 09:36:54.521939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.646 qpair failed and we were unable to recover it. 00:36:49.646 [2024-11-17 09:36:54.522161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.646 [2024-11-17 09:36:54.522201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.646 qpair failed and we were unable to recover it. 00:36:49.646 [2024-11-17 09:36:54.522401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.646 [2024-11-17 09:36:54.522438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.646 qpair failed and we were unable to recover it. 00:36:49.646 [2024-11-17 09:36:54.522611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.646 [2024-11-17 09:36:54.522666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.646 qpair failed and we were unable to recover it. 00:36:49.646 [2024-11-17 09:36:54.522923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.646 [2024-11-17 09:36:54.522996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.646 qpair failed and we were unable to recover it. 00:36:49.646 [2024-11-17 09:36:54.523244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.646 [2024-11-17 09:36:54.523297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.646 qpair failed and we were unable to recover it. 00:36:49.646 [2024-11-17 09:36:54.523414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.646 [2024-11-17 09:36:54.523449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.646 qpair failed and we were unable to recover it. 00:36:49.646 [2024-11-17 09:36:54.523636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.646 [2024-11-17 09:36:54.523691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.646 qpair failed and we were unable to recover it. 00:36:49.646 [2024-11-17 09:36:54.523906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.646 [2024-11-17 09:36:54.523958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.646 qpair failed and we were unable to recover it. 00:36:49.646 [2024-11-17 09:36:54.524141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.646 [2024-11-17 09:36:54.524176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.646 qpair failed and we were unable to recover it. 
00:36:49.646 [2024-11-17 09:36:54.524335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.646 [2024-11-17 09:36:54.524376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.646 qpair failed and we were unable to recover it. 00:36:49.646 [2024-11-17 09:36:54.524534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.646 [2024-11-17 09:36:54.524572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.646 qpair failed and we were unable to recover it. 00:36:49.646 [2024-11-17 09:36:54.524744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.646 [2024-11-17 09:36:54.524796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.646 qpair failed and we were unable to recover it. 00:36:49.646 [2024-11-17 09:36:54.525018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.646 [2024-11-17 09:36:54.525081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.646 qpair failed and we were unable to recover it. 00:36:49.646 [2024-11-17 09:36:54.525322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.646 [2024-11-17 09:36:54.525361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.646 qpair failed and we were unable to recover it. 00:36:49.646 [2024-11-17 09:36:54.525545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.646 [2024-11-17 09:36:54.525583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.646 qpair failed and we were unable to recover it. 00:36:49.646 [2024-11-17 09:36:54.525709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.646 [2024-11-17 09:36:54.525746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.646 qpair failed and we were unable to recover it. 00:36:49.646 [2024-11-17 09:36:54.526021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.646 [2024-11-17 09:36:54.526097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.646 qpair failed and we were unable to recover it. 00:36:49.646 [2024-11-17 09:36:54.526353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.646 [2024-11-17 09:36:54.526408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.646 qpair failed and we were unable to recover it. 00:36:49.646 [2024-11-17 09:36:54.526546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.646 [2024-11-17 09:36:54.526581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.646 qpair failed and we were unable to recover it. 
00:36:49.646 [2024-11-17 09:36:54.526716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.646 [2024-11-17 09:36:54.526769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.646 qpair failed and we were unable to recover it. 00:36:49.646 [2024-11-17 09:36:54.526982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.646 [2024-11-17 09:36:54.527038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.646 qpair failed and we were unable to recover it. 00:36:49.646 [2024-11-17 09:36:54.527279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.646 [2024-11-17 09:36:54.527339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.646 qpair failed and we were unable to recover it. 00:36:49.646 [2024-11-17 09:36:54.527488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.646 [2024-11-17 09:36:54.527523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.646 qpair failed and we were unable to recover it. 00:36:49.646 [2024-11-17 09:36:54.527698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.647 [2024-11-17 09:36:54.527736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.647 qpair failed and we were unable to recover it. 00:36:49.647 [2024-11-17 09:36:54.527955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.647 [2024-11-17 09:36:54.527993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.647 qpair failed and we were unable to recover it. 00:36:49.647 [2024-11-17 09:36:54.528245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.647 [2024-11-17 09:36:54.528304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.647 qpair failed and we were unable to recover it. 00:36:49.647 [2024-11-17 09:36:54.528500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.647 [2024-11-17 09:36:54.528550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.647 qpair failed and we were unable to recover it. 00:36:49.647 [2024-11-17 09:36:54.528674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.647 [2024-11-17 09:36:54.528710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.647 qpair failed and we were unable to recover it. 00:36:49.647 [2024-11-17 09:36:54.528916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.647 [2024-11-17 09:36:54.528976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.647 qpair failed and we were unable to recover it. 
00:36:49.647 [2024-11-17 09:36:54.529189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.647 [2024-11-17 09:36:54.529227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.647 qpair failed and we were unable to recover it. 00:36:49.647 [2024-11-17 09:36:54.529404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.647 [2024-11-17 09:36:54.529439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.647 qpair failed and we were unable to recover it. 00:36:49.647 [2024-11-17 09:36:54.529565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.647 [2024-11-17 09:36:54.529614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.647 qpair failed and we were unable to recover it. 00:36:49.647 [2024-11-17 09:36:54.529857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.647 [2024-11-17 09:36:54.529928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.647 qpair failed and we were unable to recover it. 00:36:49.647 [2024-11-17 09:36:54.530080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.647 [2024-11-17 09:36:54.530134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.647 qpair failed and we were unable to recover it. 00:36:49.647 [2024-11-17 09:36:54.530299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.647 [2024-11-17 09:36:54.530338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.647 qpair failed and we were unable to recover it. 00:36:49.647 [2024-11-17 09:36:54.530527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.647 [2024-11-17 09:36:54.530566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.647 qpair failed and we were unable to recover it. 00:36:49.647 [2024-11-17 09:36:54.530753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.647 [2024-11-17 09:36:54.530810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.647 qpair failed and we were unable to recover it. 00:36:49.647 [2024-11-17 09:36:54.531017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.647 [2024-11-17 09:36:54.531053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.647 qpair failed and we were unable to recover it. 00:36:49.647 [2024-11-17 09:36:54.531157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.647 [2024-11-17 09:36:54.531192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.647 qpair failed and we were unable to recover it. 
00:36:49.647 [2024-11-17 09:36:54.531353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.647 [2024-11-17 09:36:54.531423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.647 qpair failed and we were unable to recover it. 00:36:49.647 [2024-11-17 09:36:54.531600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.647 [2024-11-17 09:36:54.531654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.647 qpair failed and we were unable to recover it. 00:36:49.647 [2024-11-17 09:36:54.531883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.647 [2024-11-17 09:36:54.531944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.647 qpair failed and we were unable to recover it. 00:36:49.647 [2024-11-17 09:36:54.532158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.647 [2024-11-17 09:36:54.532220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.647 qpair failed and we were unable to recover it. 00:36:49.647 [2024-11-17 09:36:54.532382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.647 [2024-11-17 09:36:54.532423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.647 qpair failed and we were unable to recover it. 00:36:49.647 [2024-11-17 09:36:54.532534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.647 [2024-11-17 09:36:54.532568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.647 qpair failed and we were unable to recover it. 00:36:49.647 [2024-11-17 09:36:54.532755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.647 [2024-11-17 09:36:54.532809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.647 qpair failed and we were unable to recover it. 00:36:49.647 [2024-11-17 09:36:54.533112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.647 [2024-11-17 09:36:54.533172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.647 qpair failed and we were unable to recover it. 00:36:49.647 [2024-11-17 09:36:54.533358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.647 [2024-11-17 09:36:54.533408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.647 qpair failed and we were unable to recover it. 00:36:49.647 [2024-11-17 09:36:54.533568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.647 [2024-11-17 09:36:54.533603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.647 qpair failed and we were unable to recover it. 
00:36:49.647 [2024-11-17 09:36:54.533763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.647 [2024-11-17 09:36:54.533798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.647 qpair failed and we were unable to recover it. 00:36:49.647 [2024-11-17 09:36:54.533938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.647 [2024-11-17 09:36:54.533991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.648 qpair failed and we were unable to recover it. 00:36:49.648 [2024-11-17 09:36:54.534186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.648 [2024-11-17 09:36:54.534250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.648 qpair failed and we were unable to recover it. 00:36:49.648 [2024-11-17 09:36:54.534386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.648 [2024-11-17 09:36:54.534421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.648 qpair failed and we were unable to recover it. 00:36:49.648 [2024-11-17 09:36:54.534563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.648 [2024-11-17 09:36:54.534598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.648 qpair failed and we were unable to recover it. 00:36:49.648 [2024-11-17 09:36:54.534758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.648 [2024-11-17 09:36:54.534793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.648 qpair failed and we were unable to recover it. 00:36:49.648 [2024-11-17 09:36:54.534949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.648 [2024-11-17 09:36:54.534987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.648 qpair failed and we were unable to recover it. 00:36:49.648 [2024-11-17 09:36:54.535114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.648 [2024-11-17 09:36:54.535154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.648 qpair failed and we were unable to recover it. 00:36:49.648 [2024-11-17 09:36:54.535354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.648 [2024-11-17 09:36:54.535396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.648 qpair failed and we were unable to recover it. 00:36:49.648 [2024-11-17 09:36:54.535547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.648 [2024-11-17 09:36:54.535596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.648 qpair failed and we were unable to recover it. 
00:36:49.648 [2024-11-17 09:36:54.535887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.648 [2024-11-17 09:36:54.535960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.648 qpair failed and we were unable to recover it. 00:36:49.648 [2024-11-17 09:36:54.536230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.648 [2024-11-17 09:36:54.536296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.648 qpair failed and we were unable to recover it. 00:36:49.648 [2024-11-17 09:36:54.536455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.648 [2024-11-17 09:36:54.536491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.648 qpair failed and we were unable to recover it. 00:36:49.648 [2024-11-17 09:36:54.536623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.648 [2024-11-17 09:36:54.536658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.648 qpair failed and we were unable to recover it. 00:36:49.648 [2024-11-17 09:36:54.536806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.648 [2024-11-17 09:36:54.536845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.648 qpair failed and we were unable to recover it. 00:36:49.648 [2024-11-17 09:36:54.536995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.648 [2024-11-17 09:36:54.537032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.648 qpair failed and we were unable to recover it. 00:36:49.648 [2024-11-17 09:36:54.537232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.648 [2024-11-17 09:36:54.537270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.648 qpair failed and we were unable to recover it. 00:36:49.648 [2024-11-17 09:36:54.537444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.648 [2024-11-17 09:36:54.537493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.648 qpair failed and we were unable to recover it. 00:36:49.648 [2024-11-17 09:36:54.537603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.648 [2024-11-17 09:36:54.537640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.648 qpair failed and we were unable to recover it. 00:36:49.648 [2024-11-17 09:36:54.537799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.648 [2024-11-17 09:36:54.537846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.648 qpair failed and we were unable to recover it. 
00:36:49.648 [2024-11-17 09:36:54.538118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.648 [2024-11-17 09:36:54.538179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.648 qpair failed and we were unable to recover it. 00:36:49.648 [2024-11-17 09:36:54.538312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.648 [2024-11-17 09:36:54.538352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.648 qpair failed and we were unable to recover it. 00:36:49.648 [2024-11-17 09:36:54.538563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.648 [2024-11-17 09:36:54.538613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.648 qpair failed and we were unable to recover it. 00:36:49.648 [2024-11-17 09:36:54.538891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.648 [2024-11-17 09:36:54.538951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.648 qpair failed and we were unable to recover it. 00:36:49.648 [2024-11-17 09:36:54.539206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.648 [2024-11-17 09:36:54.539263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.648 qpair failed and we were unable to recover it. 00:36:49.648 [2024-11-17 09:36:54.539410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.648 [2024-11-17 09:36:54.539462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.648 qpair failed and we were unable to recover it. 00:36:49.648 [2024-11-17 09:36:54.539576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.648 [2024-11-17 09:36:54.539612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.648 qpair failed and we were unable to recover it. 00:36:49.648 [2024-11-17 09:36:54.539790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.648 [2024-11-17 09:36:54.539827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.648 qpair failed and we were unable to recover it. 00:36:49.648 [2024-11-17 09:36:54.539975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.648 [2024-11-17 09:36:54.540026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.649 qpair failed and we were unable to recover it. 00:36:49.649 [2024-11-17 09:36:54.540236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.649 [2024-11-17 09:36:54.540287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.649 qpair failed and we were unable to recover it. 
00:36:49.649 [2024-11-17 09:36:54.540429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.649 [2024-11-17 09:36:54.540464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.649 qpair failed and we were unable to recover it. 00:36:49.649 [2024-11-17 09:36:54.540598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.649 [2024-11-17 09:36:54.540633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.649 qpair failed and we were unable to recover it. 00:36:49.649 [2024-11-17 09:36:54.540869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.649 [2024-11-17 09:36:54.540929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.649 qpair failed and we were unable to recover it. 00:36:49.649 [2024-11-17 09:36:54.541143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.649 [2024-11-17 09:36:54.541201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.649 qpair failed and we were unable to recover it. 00:36:49.649 [2024-11-17 09:36:54.541363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.649 [2024-11-17 09:36:54.541427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.649 qpair failed and we were unable to recover it. 00:36:49.649 [2024-11-17 09:36:54.541593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.649 [2024-11-17 09:36:54.541642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.649 qpair failed and we were unable to recover it. 00:36:49.649 [2024-11-17 09:36:54.541873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.649 [2024-11-17 09:36:54.541928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.649 qpair failed and we were unable to recover it. 00:36:49.649 [2024-11-17 09:36:54.542086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.649 [2024-11-17 09:36:54.542139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.649 qpair failed and we were unable to recover it. 00:36:49.649 [2024-11-17 09:36:54.542286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.649 [2024-11-17 09:36:54.542321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.649 qpair failed and we were unable to recover it. 00:36:49.649 [2024-11-17 09:36:54.542449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.649 [2024-11-17 09:36:54.542485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.649 qpair failed and we were unable to recover it. 
00:36:49.649 [2024-11-17 09:36:54.542653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.649 [2024-11-17 09:36:54.542688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.649 qpair failed and we were unable to recover it. 00:36:49.649 [2024-11-17 09:36:54.542795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.649 [2024-11-17 09:36:54.542829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.649 qpair failed and we were unable to recover it. 00:36:49.649 [2024-11-17 09:36:54.543048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.649 [2024-11-17 09:36:54.543085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.649 qpair failed and we were unable to recover it. 00:36:49.649 [2024-11-17 09:36:54.543212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.649 [2024-11-17 09:36:54.543245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.649 qpair failed and we were unable to recover it. 00:36:49.649 [2024-11-17 09:36:54.543348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.649 [2024-11-17 09:36:54.543391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.649 qpair failed and we were unable to recover it. 00:36:49.649 [2024-11-17 09:36:54.543530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.649 [2024-11-17 09:36:54.543565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.649 qpair failed and we were unable to recover it. 00:36:49.649 [2024-11-17 09:36:54.543722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.649 [2024-11-17 09:36:54.543772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.649 qpair failed and we were unable to recover it. 00:36:49.649 [2024-11-17 09:36:54.543944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.649 [2024-11-17 09:36:54.543980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.649 qpair failed and we were unable to recover it. 00:36:49.649 [2024-11-17 09:36:54.544134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.649 [2024-11-17 09:36:54.544169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.649 qpair failed and we were unable to recover it. 00:36:49.649 [2024-11-17 09:36:54.544338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.649 [2024-11-17 09:36:54.544389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.649 qpair failed and we were unable to recover it. 
00:36:49.649 [2024-11-17 09:36:54.544571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.649 [2024-11-17 09:36:54.544608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.649 qpair failed and we were unable to recover it. 00:36:49.649 [2024-11-17 09:36:54.544792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.649 [2024-11-17 09:36:54.544829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.649 qpair failed and we were unable to recover it. 00:36:49.649 [2024-11-17 09:36:54.545014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.649 [2024-11-17 09:36:54.545067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.649 qpair failed and we were unable to recover it. 00:36:49.649 [2024-11-17 09:36:54.545194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.649 [2024-11-17 09:36:54.545229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.649 qpair failed and we were unable to recover it. 00:36:49.649 [2024-11-17 09:36:54.545362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.649 [2024-11-17 09:36:54.545403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.649 qpair failed and we were unable to recover it. 00:36:49.649 [2024-11-17 09:36:54.545536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.649 [2024-11-17 09:36:54.545589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.649 qpair failed and we were unable to recover it. 00:36:49.649 [2024-11-17 09:36:54.545779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.649 [2024-11-17 09:36:54.545830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.649 qpair failed and we were unable to recover it. 00:36:49.649 [2024-11-17 09:36:54.546012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.649 [2024-11-17 09:36:54.546066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.649 qpair failed and we were unable to recover it. 00:36:49.650 [2024-11-17 09:36:54.546198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.650 [2024-11-17 09:36:54.546250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.650 qpair failed and we were unable to recover it. 00:36:49.650 [2024-11-17 09:36:54.546388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.650 [2024-11-17 09:36:54.546422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.650 qpair failed and we were unable to recover it. 
00:36:49.650 [2024-11-17 09:36:54.546581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.650 [2024-11-17 09:36:54.546620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.650 qpair failed and we were unable to recover it. 00:36:49.650 [2024-11-17 09:36:54.546832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.650 [2024-11-17 09:36:54.546893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.650 qpair failed and we were unable to recover it. 00:36:49.650 [2024-11-17 09:36:54.547100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.650 [2024-11-17 09:36:54.547160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.650 qpair failed and we were unable to recover it. 00:36:49.650 [2024-11-17 09:36:54.547286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.650 [2024-11-17 09:36:54.547321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.650 qpair failed and we were unable to recover it. 00:36:49.650 [2024-11-17 09:36:54.547563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.650 [2024-11-17 09:36:54.547598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.650 qpair failed and we were unable to recover it. 00:36:49.650 [2024-11-17 09:36:54.547820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.650 [2024-11-17 09:36:54.547872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.650 qpair failed and we were unable to recover it. 00:36:49.650 [2024-11-17 09:36:54.548031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.650 [2024-11-17 09:36:54.548083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.650 qpair failed and we were unable to recover it. 00:36:49.650 [2024-11-17 09:36:54.548191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.650 [2024-11-17 09:36:54.548231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.650 qpair failed and we were unable to recover it. 00:36:49.650 [2024-11-17 09:36:54.548413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.650 [2024-11-17 09:36:54.548452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.650 qpair failed and we were unable to recover it. 00:36:49.650 [2024-11-17 09:36:54.548671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.650 [2024-11-17 09:36:54.548720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.650 qpair failed and we were unable to recover it. 
00:36:49.650 [2024-11-17 09:36:54.548910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.650 [2024-11-17 09:36:54.548964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.650 qpair failed and we were unable to recover it. 00:36:49.650 [2024-11-17 09:36:54.549244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.650 [2024-11-17 09:36:54.549301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.650 qpair failed and we were unable to recover it. 00:36:49.650 [2024-11-17 09:36:54.549453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.650 [2024-11-17 09:36:54.549488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.650 qpair failed and we were unable to recover it. 00:36:49.650 [2024-11-17 09:36:54.549642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.650 [2024-11-17 09:36:54.549681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.650 qpair failed and we were unable to recover it. 00:36:49.650 [2024-11-17 09:36:54.549792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.650 [2024-11-17 09:36:54.549837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.650 qpair failed and we were unable to recover it. 00:36:49.650 [2024-11-17 09:36:54.550117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.650 [2024-11-17 09:36:54.550190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.650 qpair failed and we were unable to recover it. 00:36:49.650 [2024-11-17 09:36:54.550362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.650 [2024-11-17 09:36:54.550403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.650 qpair failed and we were unable to recover it. 00:36:49.650 [2024-11-17 09:36:54.550583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.650 [2024-11-17 09:36:54.550638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.650 qpair failed and we were unable to recover it. 00:36:49.650 [2024-11-17 09:36:54.550811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.650 [2024-11-17 09:36:54.550863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.650 qpair failed and we were unable to recover it. 00:36:49.650 [2024-11-17 09:36:54.551020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.650 [2024-11-17 09:36:54.551073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.650 qpair failed and we were unable to recover it. 
00:36:49.650 [2024-11-17 09:36:54.551231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.650 [2024-11-17 09:36:54.551265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.650 qpair failed and we were unable to recover it. 00:36:49.650 [2024-11-17 09:36:54.551390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.650 [2024-11-17 09:36:54.551427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.650 qpair failed and we were unable to recover it. 00:36:49.650 [2024-11-17 09:36:54.551606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.650 [2024-11-17 09:36:54.551654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.650 qpair failed and we were unable to recover it. 00:36:49.650 [2024-11-17 09:36:54.551775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.650 [2024-11-17 09:36:54.551811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.650 qpair failed and we were unable to recover it. 00:36:49.650 [2024-11-17 09:36:54.551970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.650 [2024-11-17 09:36:54.552014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.650 qpair failed and we were unable to recover it. 00:36:49.650 [2024-11-17 09:36:54.552157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.650 [2024-11-17 09:36:54.552190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.650 qpair failed and we were unable to recover it. 00:36:49.650 [2024-11-17 09:36:54.552295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.650 [2024-11-17 09:36:54.552329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.650 qpair failed and we were unable to recover it. 00:36:49.650 [2024-11-17 09:36:54.552509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.651 [2024-11-17 09:36:54.552544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.651 qpair failed and we were unable to recover it. 00:36:49.651 [2024-11-17 09:36:54.552707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.651 [2024-11-17 09:36:54.552762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.651 qpair failed and we were unable to recover it. 00:36:49.651 [2024-11-17 09:36:54.552913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.651 [2024-11-17 09:36:54.552963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.651 qpair failed and we were unable to recover it. 
00:36:49.651 [2024-11-17 09:36:54.553131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.651 [2024-11-17 09:36:54.553166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.651 qpair failed and we were unable to recover it. 00:36:49.651 [2024-11-17 09:36:54.553296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.651 [2024-11-17 09:36:54.553345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.651 qpair failed and we were unable to recover it. 00:36:49.651 [2024-11-17 09:36:54.553500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.651 [2024-11-17 09:36:54.553536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.651 qpair failed and we were unable to recover it. 00:36:49.651 [2024-11-17 09:36:54.553696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.651 [2024-11-17 09:36:54.553740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.651 qpair failed and we were unable to recover it. 00:36:49.651 [2024-11-17 09:36:54.553899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.651 [2024-11-17 09:36:54.553933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.651 qpair failed and we were unable to recover it. 00:36:49.651 [2024-11-17 09:36:54.554094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.651 [2024-11-17 09:36:54.554128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.651 qpair failed and we were unable to recover it. 00:36:49.651 [2024-11-17 09:36:54.554236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.651 [2024-11-17 09:36:54.554271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.651 qpair failed and we were unable to recover it. 00:36:49.651 [2024-11-17 09:36:54.554412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.651 [2024-11-17 09:36:54.554447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.651 qpair failed and we were unable to recover it. 00:36:49.651 [2024-11-17 09:36:54.554577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.651 [2024-11-17 09:36:54.554614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.651 qpair failed and we were unable to recover it. 00:36:49.651 [2024-11-17 09:36:54.554773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.651 [2024-11-17 09:36:54.554843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.651 qpair failed and we were unable to recover it. 
00:36:49.651 [2024-11-17 09:36:54.555029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.651 [2024-11-17 09:36:54.555080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.651 qpair failed and we were unable to recover it. 00:36:49.651 [2024-11-17 09:36:54.555193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.651 [2024-11-17 09:36:54.555227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.651 qpair failed and we were unable to recover it. 00:36:49.651 [2024-11-17 09:36:54.555384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.651 [2024-11-17 09:36:54.555419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.651 qpair failed and we were unable to recover it. 00:36:49.651 [2024-11-17 09:36:54.555557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.651 [2024-11-17 09:36:54.555590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.651 qpair failed and we were unable to recover it. 00:36:49.651 [2024-11-17 09:36:54.555697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.651 [2024-11-17 09:36:54.555731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.651 qpair failed and we were unable to recover it. 00:36:49.651 [2024-11-17 09:36:54.555867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.651 [2024-11-17 09:36:54.555902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.651 qpair failed and we were unable to recover it. 00:36:49.651 [2024-11-17 09:36:54.556036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.651 [2024-11-17 09:36:54.556069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.651 qpair failed and we were unable to recover it. 00:36:49.651 [2024-11-17 09:36:54.556204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.651 [2024-11-17 09:36:54.556238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.651 qpair failed and we were unable to recover it. 00:36:49.651 [2024-11-17 09:36:54.556437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.651 [2024-11-17 09:36:54.556478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.651 qpair failed and we were unable to recover it. 00:36:49.651 [2024-11-17 09:36:54.556647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.651 [2024-11-17 09:36:54.556712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.651 qpair failed and we were unable to recover it. 
00:36:49.651 [2024-11-17 09:36:54.556996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.651 [2024-11-17 09:36:54.557036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.651 qpair failed and we were unable to recover it. 00:36:49.651 [2024-11-17 09:36:54.557191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.651 [2024-11-17 09:36:54.557226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.651 qpair failed and we were unable to recover it. 00:36:49.651 [2024-11-17 09:36:54.557336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.651 [2024-11-17 09:36:54.557383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.651 qpair failed and we were unable to recover it. 00:36:49.651 [2024-11-17 09:36:54.557502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.651 [2024-11-17 09:36:54.557536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.651 qpair failed and we were unable to recover it. 00:36:49.651 [2024-11-17 09:36:54.557690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.651 [2024-11-17 09:36:54.557733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.651 qpair failed and we were unable to recover it. 00:36:49.652 [2024-11-17 09:36:54.557860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.652 [2024-11-17 09:36:54.557898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.652 qpair failed and we were unable to recover it. 00:36:49.652 [2024-11-17 09:36:54.558066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.652 [2024-11-17 09:36:54.558104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.652 qpair failed and we were unable to recover it. 00:36:49.652 [2024-11-17 09:36:54.558273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.652 [2024-11-17 09:36:54.558309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.652 qpair failed and we were unable to recover it. 00:36:49.652 [2024-11-17 09:36:54.558496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.652 [2024-11-17 09:36:54.558551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.652 qpair failed and we were unable to recover it. 00:36:49.652 [2024-11-17 09:36:54.558723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.652 [2024-11-17 09:36:54.558764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.652 qpair failed and we were unable to recover it. 
00:36:49.652 [2024-11-17 09:36:54.558914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.652 [2024-11-17 09:36:54.558953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.652 qpair failed and we were unable to recover it. 00:36:49.652 [2024-11-17 09:36:54.559069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.652 [2024-11-17 09:36:54.559109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.652 qpair failed and we were unable to recover it. 00:36:49.652 [2024-11-17 09:36:54.559276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.652 [2024-11-17 09:36:54.559310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.652 qpair failed and we were unable to recover it. 00:36:49.652 [2024-11-17 09:36:54.559474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.652 [2024-11-17 09:36:54.559510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.652 qpair failed and we were unable to recover it. 00:36:49.652 [2024-11-17 09:36:54.559670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.652 [2024-11-17 09:36:54.559725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.652 qpair failed and we were unable to recover it. 00:36:49.652 [2024-11-17 09:36:54.559844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.652 [2024-11-17 09:36:54.559881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.652 qpair failed and we were unable to recover it. 00:36:49.652 [2024-11-17 09:36:54.560008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.652 [2024-11-17 09:36:54.560045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.652 qpair failed and we were unable to recover it. 00:36:49.652 [2024-11-17 09:36:54.560185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.652 [2024-11-17 09:36:54.560229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.652 qpair failed and we were unable to recover it. 00:36:49.652 [2024-11-17 09:36:54.560395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.652 [2024-11-17 09:36:54.560430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.652 qpair failed and we were unable to recover it. 00:36:49.652 [2024-11-17 09:36:54.560574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.652 [2024-11-17 09:36:54.560610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.652 qpair failed and we were unable to recover it. 
00:36:49.652 [2024-11-17 09:36:54.560778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.652 [2024-11-17 09:36:54.560817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.652 qpair failed and we were unable to recover it. 00:36:49.652 [2024-11-17 09:36:54.560994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.652 [2024-11-17 09:36:54.561031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.652 qpair failed and we were unable to recover it. 00:36:49.652 [2024-11-17 09:36:54.561186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.652 [2024-11-17 09:36:54.561220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.652 qpair failed and we were unable to recover it. 00:36:49.652 [2024-11-17 09:36:54.561345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.652 [2024-11-17 09:36:54.561387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.652 qpair failed and we were unable to recover it. 00:36:49.652 [2024-11-17 09:36:54.561541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.652 [2024-11-17 09:36:54.561579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.652 qpair failed and we were unable to recover it. 00:36:49.652 [2024-11-17 09:36:54.561756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.652 [2024-11-17 09:36:54.561793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.652 qpair failed and we were unable to recover it. 00:36:49.652 [2024-11-17 09:36:54.561909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.652 [2024-11-17 09:36:54.561948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.652 qpair failed and we were unable to recover it. 00:36:49.652 [2024-11-17 09:36:54.562106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.652 [2024-11-17 09:36:54.562151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.652 qpair failed and we were unable to recover it. 00:36:49.652 [2024-11-17 09:36:54.562317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.652 [2024-11-17 09:36:54.562372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.652 qpair failed and we were unable to recover it. 00:36:49.652 [2024-11-17 09:36:54.562495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.652 [2024-11-17 09:36:54.562532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.652 qpair failed and we were unable to recover it. 
00:36:49.652 [2024-11-17 09:36:54.562695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.652 [2024-11-17 09:36:54.562752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.652 qpair failed and we were unable to recover it. 00:36:49.652 [2024-11-17 09:36:54.562926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.652 [2024-11-17 09:36:54.562978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.652 qpair failed and we were unable to recover it. 00:36:49.652 [2024-11-17 09:36:54.563174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.652 [2024-11-17 09:36:54.563213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.652 qpair failed and we were unable to recover it. 00:36:49.652 [2024-11-17 09:36:54.563527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.652 [2024-11-17 09:36:54.563561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.652 qpair failed and we were unable to recover it. 00:36:49.652 [2024-11-17 09:36:54.563703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.652 [2024-11-17 09:36:54.563751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.652 qpair failed and we were unable to recover it. 00:36:49.652 [2024-11-17 09:36:54.563920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.652 [2024-11-17 09:36:54.563958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.652 qpair failed and we were unable to recover it. 00:36:49.652 [2024-11-17 09:36:54.564096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.652 [2024-11-17 09:36:54.564142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.652 qpair failed and we were unable to recover it. 00:36:49.652 [2024-11-17 09:36:54.564281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.652 [2024-11-17 09:36:54.564318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.652 qpair failed and we were unable to recover it. 00:36:49.652 [2024-11-17 09:36:54.564461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.652 [2024-11-17 09:36:54.564495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.652 qpair failed and we were unable to recover it. 00:36:49.653 [2024-11-17 09:36:54.564638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.653 [2024-11-17 09:36:54.564676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.653 qpair failed and we were unable to recover it. 
00:36:49.653 [2024-11-17 09:36:54.564872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.653 [2024-11-17 09:36:54.564933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.653 qpair failed and we were unable to recover it. 00:36:49.653 [2024-11-17 09:36:54.565140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.653 [2024-11-17 09:36:54.565206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.653 qpair failed and we were unable to recover it. 00:36:49.653 [2024-11-17 09:36:54.565372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.653 [2024-11-17 09:36:54.565425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.653 qpair failed and we were unable to recover it. 00:36:49.653 [2024-11-17 09:36:54.565552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.653 [2024-11-17 09:36:54.565587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.653 qpair failed and we were unable to recover it. 00:36:49.653 [2024-11-17 09:36:54.565733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.653 [2024-11-17 09:36:54.565776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.653 qpair failed and we were unable to recover it. 00:36:49.653 [2024-11-17 09:36:54.565912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.653 [2024-11-17 09:36:54.565966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.653 qpair failed and we were unable to recover it. 00:36:49.653 [2024-11-17 09:36:54.566147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.653 [2024-11-17 09:36:54.566184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.653 qpair failed and we were unable to recover it. 00:36:49.653 [2024-11-17 09:36:54.566335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.653 [2024-11-17 09:36:54.566394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.653 qpair failed and we were unable to recover it. 00:36:49.653 [2024-11-17 09:36:54.566595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.653 [2024-11-17 09:36:54.566632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.653 qpair failed and we were unable to recover it. 00:36:49.653 [2024-11-17 09:36:54.566858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.653 [2024-11-17 09:36:54.566896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.653 qpair failed and we were unable to recover it. 
00:36:49.653 [2024-11-17 09:36:54.567131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.653 [2024-11-17 09:36:54.567169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.653 qpair failed and we were unable to recover it. 00:36:49.653 [2024-11-17 09:36:54.567420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.653 [2024-11-17 09:36:54.567469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.653 qpair failed and we were unable to recover it. 00:36:49.653 [2024-11-17 09:36:54.567637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.653 [2024-11-17 09:36:54.567694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.653 qpair failed and we were unable to recover it. 00:36:49.653 [2024-11-17 09:36:54.567826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.653 [2024-11-17 09:36:54.567861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.653 qpair failed and we were unable to recover it. 00:36:49.653 [2024-11-17 09:36:54.568045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.653 [2024-11-17 09:36:54.568079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.653 qpair failed and we were unable to recover it. 00:36:49.653 [2024-11-17 09:36:54.568204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.653 [2024-11-17 09:36:54.568239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.653 qpair failed and we were unable to recover it. 00:36:49.653 [2024-11-17 09:36:54.568388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.653 [2024-11-17 09:36:54.568424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.653 qpair failed and we were unable to recover it. 00:36:49.653 [2024-11-17 09:36:54.568566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.653 [2024-11-17 09:36:54.568600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.653 qpair failed and we were unable to recover it. 00:36:49.653 [2024-11-17 09:36:54.568756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.653 [2024-11-17 09:36:54.568789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.653 qpair failed and we were unable to recover it. 00:36:49.653 [2024-11-17 09:36:54.568978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.653 [2024-11-17 09:36:54.569015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.653 qpair failed and we were unable to recover it. 
00:36:49.653 [2024-11-17 09:36:54.569163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.653 [2024-11-17 09:36:54.569200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.653 qpair failed and we were unable to recover it. 00:36:49.653 [2024-11-17 09:36:54.569377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.653 [2024-11-17 09:36:54.569411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.653 qpair failed and we were unable to recover it. 00:36:49.653 [2024-11-17 09:36:54.569537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.653 [2024-11-17 09:36:54.569575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.653 qpair failed and we were unable to recover it. 00:36:49.653 [2024-11-17 09:36:54.569754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.653 [2024-11-17 09:36:54.569808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.653 qpair failed and we were unable to recover it. 00:36:49.653 [2024-11-17 09:36:54.570016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.653 [2024-11-17 09:36:54.570056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.653 qpair failed and we were unable to recover it. 00:36:49.653 [2024-11-17 09:36:54.570296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.653 [2024-11-17 09:36:54.570333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.653 qpair failed and we were unable to recover it. 00:36:49.653 [2024-11-17 09:36:54.570489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.653 [2024-11-17 09:36:54.570525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.653 qpair failed and we were unable to recover it. 00:36:49.653 [2024-11-17 09:36:54.570636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.653 [2024-11-17 09:36:54.570695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.653 qpair failed and we were unable to recover it. 00:36:49.653 [2024-11-17 09:36:54.570858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.653 [2024-11-17 09:36:54.570891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.653 qpair failed and we were unable to recover it. 00:36:49.653 [2024-11-17 09:36:54.571027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.653 [2024-11-17 09:36:54.571064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.653 qpair failed and we were unable to recover it. 
00:36:49.653 [2024-11-17 09:36:54.571238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.653 [2024-11-17 09:36:54.571275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.653 qpair failed and we were unable to recover it. 00:36:49.653 [2024-11-17 09:36:54.571420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.653 [2024-11-17 09:36:54.571457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.653 qpair failed and we were unable to recover it. 00:36:49.653 [2024-11-17 09:36:54.571623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.653 [2024-11-17 09:36:54.571659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.653 qpair failed and we were unable to recover it. 00:36:49.653 [2024-11-17 09:36:54.571840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.653 [2024-11-17 09:36:54.571880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.653 qpair failed and we were unable to recover it. 00:36:49.653 [2024-11-17 09:36:54.572002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.653 [2024-11-17 09:36:54.572039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.653 qpair failed and we were unable to recover it. 00:36:49.653 [2024-11-17 09:36:54.572166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.653 [2024-11-17 09:36:54.572199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.653 qpair failed and we were unable to recover it. 00:36:49.653 [2024-11-17 09:36:54.572362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.653 [2024-11-17 09:36:54.572403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.653 qpair failed and we were unable to recover it. 00:36:49.653 [2024-11-17 09:36:54.572506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.653 [2024-11-17 09:36:54.572558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.653 qpair failed and we were unable to recover it. 00:36:49.653 [2024-11-17 09:36:54.572705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.653 [2024-11-17 09:36:54.572743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.653 qpair failed and we were unable to recover it. 00:36:49.653 [2024-11-17 09:36:54.572880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.653 [2024-11-17 09:36:54.572914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.653 qpair failed and we were unable to recover it. 
00:36:49.653 [2024-11-17 09:36:54.573083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.653 [2024-11-17 09:36:54.573122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.653 qpair failed and we were unable to recover it. 00:36:49.653 [2024-11-17 09:36:54.573281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.653 [2024-11-17 09:36:54.573315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.653 qpair failed and we were unable to recover it. 00:36:49.654 [2024-11-17 09:36:54.573438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.654 [2024-11-17 09:36:54.573472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.654 qpair failed and we were unable to recover it. 00:36:49.654 [2024-11-17 09:36:54.573576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.654 [2024-11-17 09:36:54.573610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.654 qpair failed and we were unable to recover it. 00:36:49.654 [2024-11-17 09:36:54.573801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.654 [2024-11-17 09:36:54.573853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.654 qpair failed and we were unable to recover it. 00:36:49.654 [2024-11-17 09:36:54.574003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.654 [2024-11-17 09:36:54.574040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.654 qpair failed and we were unable to recover it. 00:36:49.654 [2024-11-17 09:36:54.574188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.654 [2024-11-17 09:36:54.574227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.654 qpair failed and we were unable to recover it. 00:36:49.654 [2024-11-17 09:36:54.574409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.654 [2024-11-17 09:36:54.574458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.654 qpair failed and we were unable to recover it. 00:36:49.654 [2024-11-17 09:36:54.574646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.654 [2024-11-17 09:36:54.574706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.654 qpair failed and we were unable to recover it. 00:36:49.654 [2024-11-17 09:36:54.574819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.654 [2024-11-17 09:36:54.574854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.654 qpair failed and we were unable to recover it. 
00:36:49.654 [2024-11-17 09:36:54.575034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.654 [2024-11-17 09:36:54.575087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.654 qpair failed and we were unable to recover it. 00:36:49.654 [2024-11-17 09:36:54.575259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.654 [2024-11-17 09:36:54.575293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.654 qpair failed and we were unable to recover it. 00:36:49.654 [2024-11-17 09:36:54.575413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.654 [2024-11-17 09:36:54.575448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.654 qpair failed and we were unable to recover it. 00:36:49.654 [2024-11-17 09:36:54.575578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.654 [2024-11-17 09:36:54.575626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.654 qpair failed and we were unable to recover it. 00:36:49.654 [2024-11-17 09:36:54.575827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.654 [2024-11-17 09:36:54.575883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.654 qpair failed and we were unable to recover it. 00:36:49.654 [2024-11-17 09:36:54.576121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.654 [2024-11-17 09:36:54.576177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.654 qpair failed and we were unable to recover it. 00:36:49.654 [2024-11-17 09:36:54.576311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.654 [2024-11-17 09:36:54.576345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.654 qpair failed and we were unable to recover it. 00:36:49.654 [2024-11-17 09:36:54.576484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.654 [2024-11-17 09:36:54.576536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.654 qpair failed and we were unable to recover it. 00:36:49.654 [2024-11-17 09:36:54.576702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.654 [2024-11-17 09:36:54.576759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.654 qpair failed and we were unable to recover it. 00:36:49.654 [2024-11-17 09:36:54.576914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.654 [2024-11-17 09:36:54.576948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.654 qpair failed and we were unable to recover it. 
00:36:49.654 [2024-11-17 09:36:54.577079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.654 [2024-11-17 09:36:54.577124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.654 qpair failed and we were unable to recover it. 00:36:49.654 [2024-11-17 09:36:54.577283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.654 [2024-11-17 09:36:54.577328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.654 qpair failed and we were unable to recover it. 00:36:49.654 [2024-11-17 09:36:54.577459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.654 [2024-11-17 09:36:54.577494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.654 qpair failed and we were unable to recover it. 00:36:49.654 [2024-11-17 09:36:54.577621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.654 [2024-11-17 09:36:54.577655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.654 qpair failed and we were unable to recover it. 00:36:49.654 [2024-11-17 09:36:54.577781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.654 [2024-11-17 09:36:54.577815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.654 qpair failed and we were unable to recover it. 00:36:49.654 [2024-11-17 09:36:54.577977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.654 [2024-11-17 09:36:54.578012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.654 qpair failed and we were unable to recover it. 00:36:49.654 [2024-11-17 09:36:54.578147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.654 [2024-11-17 09:36:54.578181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.654 qpair failed and we were unable to recover it. 00:36:49.654 [2024-11-17 09:36:54.578339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.654 [2024-11-17 09:36:54.578382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.654 qpair failed and we were unable to recover it. 00:36:49.654 [2024-11-17 09:36:54.578557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.654 [2024-11-17 09:36:54.578591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.654 qpair failed and we were unable to recover it. 00:36:49.654 [2024-11-17 09:36:54.578735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.654 [2024-11-17 09:36:54.578768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.654 qpair failed and we were unable to recover it. 
00:36:49.654 [2024-11-17 09:36:54.578910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.654 [2024-11-17 09:36:54.578943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.654 qpair failed and we were unable to recover it. 00:36:49.654 [2024-11-17 09:36:54.579057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.654 [2024-11-17 09:36:54.579093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.654 qpair failed and we were unable to recover it. 00:36:49.654 [2024-11-17 09:36:54.579221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.654 [2024-11-17 09:36:54.579256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.654 qpair failed and we were unable to recover it. 00:36:49.654 [2024-11-17 09:36:54.579439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.654 [2024-11-17 09:36:54.579474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.654 qpair failed and we were unable to recover it. 00:36:49.654 [2024-11-17 09:36:54.579590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.654 [2024-11-17 09:36:54.579624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.654 qpair failed and we were unable to recover it. 00:36:49.654 [2024-11-17 09:36:54.579802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.654 [2024-11-17 09:36:54.579836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.654 qpair failed and we were unable to recover it. 00:36:49.654 [2024-11-17 09:36:54.580009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.654 [2024-11-17 09:36:54.580046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.654 qpair failed and we were unable to recover it. 00:36:49.654 [2024-11-17 09:36:54.580227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.654 [2024-11-17 09:36:54.580260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.654 qpair failed and we were unable to recover it. 00:36:49.654 [2024-11-17 09:36:54.580446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.654 [2024-11-17 09:36:54.580481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.654 qpair failed and we were unable to recover it. 00:36:49.654 [2024-11-17 09:36:54.580616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.654 [2024-11-17 09:36:54.580669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.654 qpair failed and we were unable to recover it. 
00:36:49.654 [2024-11-17 09:36:54.580899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.654 [2024-11-17 09:36:54.580936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.654 qpair failed and we were unable to recover it. 00:36:49.654 [2024-11-17 09:36:54.581076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.654 [2024-11-17 09:36:54.581113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.654 qpair failed and we were unable to recover it. 00:36:49.654 [2024-11-17 09:36:54.581287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.654 [2024-11-17 09:36:54.581324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.654 qpair failed and we were unable to recover it. 00:36:49.654 [2024-11-17 09:36:54.581472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.654 [2024-11-17 09:36:54.581506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.654 qpair failed and we were unable to recover it. 00:36:49.654 [2024-11-17 09:36:54.581706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.654 [2024-11-17 09:36:54.581772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.654 qpair failed and we were unable to recover it. 00:36:49.654 [2024-11-17 09:36:54.582036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.654 [2024-11-17 09:36:54.582089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.654 qpair failed and we were unable to recover it. 00:36:49.655 [2024-11-17 09:36:54.582275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.655 [2024-11-17 09:36:54.582332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.655 qpair failed and we were unable to recover it. 00:36:49.655 [2024-11-17 09:36:54.582482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.655 [2024-11-17 09:36:54.582516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.655 qpair failed and we were unable to recover it. 00:36:49.655 [2024-11-17 09:36:54.582623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.655 [2024-11-17 09:36:54.582676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.655 qpair failed and we were unable to recover it. 00:36:49.655 [2024-11-17 09:36:54.582809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.655 [2024-11-17 09:36:54.582860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.655 qpair failed and we were unable to recover it. 
00:36:49.655 [2024-11-17 09:36:54.582972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.655 [2024-11-17 09:36:54.583010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.655 qpair failed and we were unable to recover it. 00:36:49.655 [2024-11-17 09:36:54.583172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.655 [2024-11-17 09:36:54.583210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.655 qpair failed and we were unable to recover it. 00:36:49.655 [2024-11-17 09:36:54.583334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.655 [2024-11-17 09:36:54.583376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.655 qpair failed and we were unable to recover it. 00:36:49.655 [2024-11-17 09:36:54.583515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.655 [2024-11-17 09:36:54.583549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.655 qpair failed and we were unable to recover it. 00:36:49.655 [2024-11-17 09:36:54.583692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.655 [2024-11-17 09:36:54.583735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.655 qpair failed and we were unable to recover it. 00:36:49.655 [2024-11-17 09:36:54.583927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.655 [2024-11-17 09:36:54.583965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.655 qpair failed and we were unable to recover it. 00:36:49.655 [2024-11-17 09:36:54.584165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.655 [2024-11-17 09:36:54.584204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.655 qpair failed and we were unable to recover it. 00:36:49.655 [2024-11-17 09:36:54.584353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.655 [2024-11-17 09:36:54.584414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.655 qpair failed and we were unable to recover it. 00:36:49.655 [2024-11-17 09:36:54.584556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.655 [2024-11-17 09:36:54.584589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.655 qpair failed and we were unable to recover it. 00:36:49.655 [2024-11-17 09:36:54.584726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.655 [2024-11-17 09:36:54.584777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.655 qpair failed and we were unable to recover it. 
00:36:49.655 [2024-11-17 09:36:54.585006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.655 [2024-11-17 09:36:54.585042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.655 qpair failed and we were unable to recover it. 00:36:49.655 [2024-11-17 09:36:54.585159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.655 [2024-11-17 09:36:54.585196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.655 qpair failed and we were unable to recover it. 00:36:49.655 [2024-11-17 09:36:54.585344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.655 [2024-11-17 09:36:54.585400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.655 qpair failed and we were unable to recover it. 00:36:49.655 [2024-11-17 09:36:54.585528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.655 [2024-11-17 09:36:54.585561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.655 qpair failed and we were unable to recover it. 00:36:49.655 [2024-11-17 09:36:54.585699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.655 [2024-11-17 09:36:54.585751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.655 qpair failed and we were unable to recover it. 00:36:49.655 [2024-11-17 09:36:54.585892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.655 [2024-11-17 09:36:54.585938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.655 qpair failed and we were unable to recover it. 00:36:49.655 [2024-11-17 09:36:54.586121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.655 [2024-11-17 09:36:54.586159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.655 qpair failed and we were unable to recover it. 00:36:49.655 [2024-11-17 09:36:54.586322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.655 [2024-11-17 09:36:54.586356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.655 qpair failed and we were unable to recover it. 00:36:49.655 [2024-11-17 09:36:54.586478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.655 [2024-11-17 09:36:54.586511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.655 qpair failed and we were unable to recover it. 00:36:49.655 [2024-11-17 09:36:54.586619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.655 [2024-11-17 09:36:54.586671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.655 qpair failed and we were unable to recover it. 
00:36:49.655 [2024-11-17 09:36:54.586844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.655 [2024-11-17 09:36:54.586882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.655 qpair failed and we were unable to recover it. 00:36:49.655 [2024-11-17 09:36:54.587043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.655 [2024-11-17 09:36:54.587080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.655 qpair failed and we were unable to recover it. 00:36:49.655 [2024-11-17 09:36:54.587257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.655 [2024-11-17 09:36:54.587294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.655 qpair failed and we were unable to recover it. 00:36:49.655 [2024-11-17 09:36:54.587506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.655 [2024-11-17 09:36:54.587555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.655 qpair failed and we were unable to recover it. 00:36:49.655 [2024-11-17 09:36:54.587723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.655 [2024-11-17 09:36:54.587759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.655 qpair failed and we were unable to recover it. 00:36:49.655 [2024-11-17 09:36:54.587920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.655 [2024-11-17 09:36:54.587958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.655 qpair failed and we were unable to recover it. 00:36:49.655 [2024-11-17 09:36:54.588114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.655 [2024-11-17 09:36:54.588152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.655 qpair failed and we were unable to recover it. 00:36:49.655 [2024-11-17 09:36:54.588284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.655 [2024-11-17 09:36:54.588321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.655 qpair failed and we were unable to recover it. 00:36:49.655 [2024-11-17 09:36:54.588476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.655 [2024-11-17 09:36:54.588509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.655 qpair failed and we were unable to recover it. 00:36:49.655 [2024-11-17 09:36:54.588628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.655 [2024-11-17 09:36:54.588688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.655 qpair failed and we were unable to recover it. 
00:36:49.655 [2024-11-17 09:36:54.588817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.655 [2024-11-17 09:36:54.588857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.655 qpair failed and we were unable to recover it. 00:36:49.655 [2024-11-17 09:36:54.588973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.655 [2024-11-17 09:36:54.589008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.655 qpair failed and we were unable to recover it. 00:36:49.655 [2024-11-17 09:36:54.589120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.655 [2024-11-17 09:36:54.589154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.655 qpair failed and we were unable to recover it. 00:36:49.655 [2024-11-17 09:36:54.589295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.655 [2024-11-17 09:36:54.589328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.655 qpair failed and we were unable to recover it. 00:36:49.655 [2024-11-17 09:36:54.589464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.655 [2024-11-17 09:36:54.589503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.655 qpair failed and we were unable to recover it. 00:36:49.655 [2024-11-17 09:36:54.589637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.655 [2024-11-17 09:36:54.589693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.655 qpair failed and we were unable to recover it. 00:36:49.655 [2024-11-17 09:36:54.589858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.655 [2024-11-17 09:36:54.589920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.655 qpair failed and we were unable to recover it. 00:36:49.655 [2024-11-17 09:36:54.590090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.655 [2024-11-17 09:36:54.590149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.655 qpair failed and we were unable to recover it. 00:36:49.655 [2024-11-17 09:36:54.590285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.655 [2024-11-17 09:36:54.590319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.655 qpair failed and we were unable to recover it. 00:36:49.655 [2024-11-17 09:36:54.590469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.655 [2024-11-17 09:36:54.590507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.655 qpair failed and we were unable to recover it. 
00:36:49.656 [2024-11-17 09:36:54.590631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.656 [2024-11-17 09:36:54.590668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.656 qpair failed and we were unable to recover it. 00:36:49.656 [2024-11-17 09:36:54.590789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.656 [2024-11-17 09:36:54.590826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.656 qpair failed and we were unable to recover it. 00:36:49.656 [2024-11-17 09:36:54.591005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.656 [2024-11-17 09:36:54.591042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.656 qpair failed and we were unable to recover it. 00:36:49.656 [2024-11-17 09:36:54.591183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.656 [2024-11-17 09:36:54.591220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.656 qpair failed and we were unable to recover it. 00:36:49.656 [2024-11-17 09:36:54.591389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.656 [2024-11-17 09:36:54.591423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.656 qpair failed and we were unable to recover it. 00:36:49.656 [2024-11-17 09:36:54.591575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.656 [2024-11-17 09:36:54.591623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.656 qpair failed and we were unable to recover it. 00:36:49.656 [2024-11-17 09:36:54.591774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.656 [2024-11-17 09:36:54.591813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.656 qpair failed and we were unable to recover it. 00:36:49.656 [2024-11-17 09:36:54.591935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.656 [2024-11-17 09:36:54.591989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.656 qpair failed and we were unable to recover it. 00:36:49.656 [2024-11-17 09:36:54.592134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.656 [2024-11-17 09:36:54.592171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.656 qpair failed and we were unable to recover it. 00:36:49.937 [2024-11-17 09:36:54.592359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.937 [2024-11-17 09:36:54.592432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.937 qpair failed and we were unable to recover it. 
00:36:49.937 [2024-11-17 09:36:54.592573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.937 [2024-11-17 09:36:54.592618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.937 qpair failed and we were unable to recover it. 00:36:49.937 [2024-11-17 09:36:54.592743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.937 [2024-11-17 09:36:54.592796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.937 qpair failed and we were unable to recover it. 00:36:49.937 [2024-11-17 09:36:54.592923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.937 [2024-11-17 09:36:54.592961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.937 qpair failed and we were unable to recover it. 00:36:49.937 [2024-11-17 09:36:54.593119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.937 [2024-11-17 09:36:54.593158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.937 qpair failed and we were unable to recover it. 00:36:49.937 [2024-11-17 09:36:54.593309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.937 [2024-11-17 09:36:54.593348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.937 qpair failed and we were unable to recover it. 00:36:49.937 [2024-11-17 09:36:54.593526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.937 [2024-11-17 09:36:54.593560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.937 qpair failed and we were unable to recover it. 00:36:49.937 [2024-11-17 09:36:54.593721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.937 [2024-11-17 09:36:54.593758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.937 qpair failed and we were unable to recover it. 00:36:49.937 [2024-11-17 09:36:54.593885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.937 [2024-11-17 09:36:54.593922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.937 qpair failed and we were unable to recover it. 00:36:49.937 [2024-11-17 09:36:54.594073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.937 [2024-11-17 09:36:54.594109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.937 qpair failed and we were unable to recover it. 00:36:49.937 [2024-11-17 09:36:54.594226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.937 [2024-11-17 09:36:54.594263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.937 qpair failed and we were unable to recover it. 
00:36:49.937 [2024-11-17 09:36:54.594380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.937 [2024-11-17 09:36:54.594433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.937 qpair failed and we were unable to recover it. 00:36:49.937 [2024-11-17 09:36:54.594584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.937 [2024-11-17 09:36:54.594630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.937 qpair failed and we were unable to recover it. 00:36:49.937 [2024-11-17 09:36:54.594806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.937 [2024-11-17 09:36:54.594885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.937 qpair failed and we were unable to recover it. 00:36:49.937 [2024-11-17 09:36:54.595059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.937 [2024-11-17 09:36:54.595115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.937 qpair failed and we were unable to recover it. 00:36:49.937 [2024-11-17 09:36:54.595269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.937 [2024-11-17 09:36:54.595304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.937 qpair failed and we were unable to recover it. 00:36:49.937 [2024-11-17 09:36:54.595465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.937 [2024-11-17 09:36:54.595501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.937 qpair failed and we were unable to recover it. 00:36:49.937 [2024-11-17 09:36:54.595629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.937 [2024-11-17 09:36:54.595680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.937 qpair failed and we were unable to recover it. 00:36:49.937 [2024-11-17 09:36:54.595829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.937 [2024-11-17 09:36:54.595867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.937 qpair failed and we were unable to recover it. 00:36:49.937 [2024-11-17 09:36:54.596000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.937 [2024-11-17 09:36:54.596034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.937 qpair failed and we were unable to recover it. 00:36:49.937 [2024-11-17 09:36:54.596195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.937 [2024-11-17 09:36:54.596232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.937 qpair failed and we were unable to recover it. 
00:36:49.937 [2024-11-17 09:36:54.596390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.937 [2024-11-17 09:36:54.596442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.937 qpair failed and we were unable to recover it. 00:36:49.937 [2024-11-17 09:36:54.596577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.937 [2024-11-17 09:36:54.596610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.937 qpair failed and we were unable to recover it. 00:36:49.937 [2024-11-17 09:36:54.596802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.937 [2024-11-17 09:36:54.596851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.937 qpair failed and we were unable to recover it. 00:36:49.937 [2024-11-17 09:36:54.597002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.937 [2024-11-17 09:36:54.597039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.937 qpair failed and we were unable to recover it. 00:36:49.937 [2024-11-17 09:36:54.597180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.937 [2024-11-17 09:36:54.597222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.937 qpair failed and we were unable to recover it. 00:36:49.937 [2024-11-17 09:36:54.597382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.937 [2024-11-17 09:36:54.597417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.937 qpair failed and we were unable to recover it. 00:36:49.937 [2024-11-17 09:36:54.597594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.937 [2024-11-17 09:36:54.597643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.937 qpair failed and we were unable to recover it. 00:36:49.937 [2024-11-17 09:36:54.597786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.937 [2024-11-17 09:36:54.597844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.937 qpair failed and we were unable to recover it. 00:36:49.937 [2024-11-17 09:36:54.597981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.937 [2024-11-17 09:36:54.598036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.937 qpair failed and we were unable to recover it. 00:36:49.937 [2024-11-17 09:36:54.598219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.937 [2024-11-17 09:36:54.598257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.937 qpair failed and we were unable to recover it. 
00:36:49.937 [2024-11-17 09:36:54.598448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.937 [2024-11-17 09:36:54.598482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.937 qpair failed and we were unable to recover it. 00:36:49.937 [2024-11-17 09:36:54.598637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.937 [2024-11-17 09:36:54.598674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.937 qpair failed and we were unable to recover it. 00:36:49.938 [2024-11-17 09:36:54.598826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.938 [2024-11-17 09:36:54.598865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.938 qpair failed and we were unable to recover it. 00:36:49.938 [2024-11-17 09:36:54.599039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.938 [2024-11-17 09:36:54.599076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.938 qpair failed and we were unable to recover it. 00:36:49.938 [2024-11-17 09:36:54.599225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.938 [2024-11-17 09:36:54.599262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.938 qpair failed and we were unable to recover it. 00:36:49.938 [2024-11-17 09:36:54.599379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.938 [2024-11-17 09:36:54.599429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.938 qpair failed and we were unable to recover it. 00:36:49.938 [2024-11-17 09:36:54.599569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.938 [2024-11-17 09:36:54.599602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.938 qpair failed and we were unable to recover it. 00:36:49.938 [2024-11-17 09:36:54.599763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.938 [2024-11-17 09:36:54.599800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.938 qpair failed and we were unable to recover it. 00:36:49.938 [2024-11-17 09:36:54.599950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.938 [2024-11-17 09:36:54.599989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.938 qpair failed and we were unable to recover it. 00:36:49.938 [2024-11-17 09:36:54.600144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.938 [2024-11-17 09:36:54.600181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.938 qpair failed and we were unable to recover it. 
00:36:49.938 [2024-11-17 09:36:54.600333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.938 [2024-11-17 09:36:54.600372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.938 qpair failed and we were unable to recover it. 00:36:49.938 [2024-11-17 09:36:54.600496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.938 [2024-11-17 09:36:54.600533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.938 qpair failed and we were unable to recover it. 00:36:49.938 [2024-11-17 09:36:54.600675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.938 [2024-11-17 09:36:54.600712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.938 qpair failed and we were unable to recover it. 00:36:49.938 [2024-11-17 09:36:54.600902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.938 [2024-11-17 09:36:54.600950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.938 qpair failed and we were unable to recover it. 00:36:49.938 [2024-11-17 09:36:54.601082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.938 [2024-11-17 09:36:54.601121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.938 qpair failed and we were unable to recover it. 00:36:49.938 [2024-11-17 09:36:54.601271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.938 [2024-11-17 09:36:54.601308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.938 qpair failed and we were unable to recover it. 00:36:49.938 [2024-11-17 09:36:54.601499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.938 [2024-11-17 09:36:54.601534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.938 qpair failed and we were unable to recover it. 00:36:49.938 [2024-11-17 09:36:54.601671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.938 [2024-11-17 09:36:54.601706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.938 qpair failed and we were unable to recover it. 00:36:49.938 [2024-11-17 09:36:54.601850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.938 [2024-11-17 09:36:54.601898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.938 qpair failed and we were unable to recover it. 00:36:49.938 [2024-11-17 09:36:54.602074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.938 [2024-11-17 09:36:54.602114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.938 qpair failed and we were unable to recover it. 
00:36:49.938 [2024-11-17 09:36:54.602228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.938 [2024-11-17 09:36:54.602280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.938 qpair failed and we were unable to recover it. 00:36:49.938 [2024-11-17 09:36:54.602452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.938 [2024-11-17 09:36:54.602496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.938 qpair failed and we were unable to recover it. 00:36:49.938 [2024-11-17 09:36:54.602622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.938 [2024-11-17 09:36:54.602657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.938 qpair failed and we were unable to recover it. 00:36:49.938 [2024-11-17 09:36:54.602829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.938 [2024-11-17 09:36:54.602867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.938 qpair failed and we were unable to recover it. 00:36:49.938 [2024-11-17 09:36:54.602992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.938 [2024-11-17 09:36:54.603029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.938 qpair failed and we were unable to recover it. 00:36:49.938 [2024-11-17 09:36:54.603176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.938 [2024-11-17 09:36:54.603226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.938 qpair failed and we were unable to recover it. 00:36:49.938 [2024-11-17 09:36:54.603332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.938 [2024-11-17 09:36:54.603372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.938 qpair failed and we were unable to recover it. 00:36:49.938 [2024-11-17 09:36:54.603513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.938 [2024-11-17 09:36:54.603545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.938 qpair failed and we were unable to recover it. 00:36:49.938 [2024-11-17 09:36:54.603699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.938 [2024-11-17 09:36:54.603735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.938 qpair failed and we were unable to recover it. 00:36:49.938 [2024-11-17 09:36:54.603937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.938 [2024-11-17 09:36:54.603974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.938 qpair failed and we were unable to recover it. 
00:36:49.938 [2024-11-17 09:36:54.604126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.938 [2024-11-17 09:36:54.604174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.938 qpair failed and we were unable to recover it. 00:36:49.938 [2024-11-17 09:36:54.604317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.938 [2024-11-17 09:36:54.604350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.938 qpair failed and we were unable to recover it. 00:36:49.938 [2024-11-17 09:36:54.604497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.938 [2024-11-17 09:36:54.604547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.938 qpair failed and we were unable to recover it. 00:36:49.938 [2024-11-17 09:36:54.604715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.938 [2024-11-17 09:36:54.604771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.938 qpair failed and we were unable to recover it. 00:36:49.938 [2024-11-17 09:36:54.604954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.938 [2024-11-17 09:36:54.605013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.938 qpair failed and we were unable to recover it. 00:36:49.938 [2024-11-17 09:36:54.605212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.938 [2024-11-17 09:36:54.605246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.938 qpair failed and we were unable to recover it. 00:36:49.938 [2024-11-17 09:36:54.605403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.938 [2024-11-17 09:36:54.605452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.938 qpair failed and we were unable to recover it. 00:36:49.938 [2024-11-17 09:36:54.605586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.938 [2024-11-17 09:36:54.605626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.938 qpair failed and we were unable to recover it. 00:36:49.938 [2024-11-17 09:36:54.605824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.938 [2024-11-17 09:36:54.605863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.938 qpair failed and we were unable to recover it. 00:36:49.938 [2024-11-17 09:36:54.606004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.939 [2024-11-17 09:36:54.606042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.939 qpair failed and we were unable to recover it. 
00:36:49.939 [2024-11-17 09:36:54.606204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.939 [2024-11-17 09:36:54.606241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.939 qpair failed and we were unable to recover it. 00:36:49.939 [2024-11-17 09:36:54.606427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.939 [2024-11-17 09:36:54.606463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.939 qpair failed and we were unable to recover it. 00:36:49.939 [2024-11-17 09:36:54.606597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.939 [2024-11-17 09:36:54.606635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.939 qpair failed and we were unable to recover it. 00:36:49.939 [2024-11-17 09:36:54.606821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.939 [2024-11-17 09:36:54.606902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.939 qpair failed and we were unable to recover it. 00:36:49.939 [2024-11-17 09:36:54.607065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.939 [2024-11-17 09:36:54.607123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.939 qpair failed and we were unable to recover it. 00:36:49.939 [2024-11-17 09:36:54.607318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.939 [2024-11-17 09:36:54.607352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.939 qpair failed and we were unable to recover it. 00:36:49.939 [2024-11-17 09:36:54.607473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.939 [2024-11-17 09:36:54.607506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.939 qpair failed and we were unable to recover it. 00:36:49.939 [2024-11-17 09:36:54.607616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.939 [2024-11-17 09:36:54.607653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.939 qpair failed and we were unable to recover it. 00:36:49.939 [2024-11-17 09:36:54.607853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.939 [2024-11-17 09:36:54.607907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.939 qpair failed and we were unable to recover it. 00:36:49.939 [2024-11-17 09:36:54.608088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.939 [2024-11-17 09:36:54.608149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.939 qpair failed and we were unable to recover it. 
00:36:49.939 [2024-11-17 09:36:54.608298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.939 [2024-11-17 09:36:54.608332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.939 qpair failed and we were unable to recover it. 00:36:49.939 [2024-11-17 09:36:54.608529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.939 [2024-11-17 09:36:54.608568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.939 qpair failed and we were unable to recover it. 00:36:49.939 [2024-11-17 09:36:54.608742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.939 [2024-11-17 09:36:54.608779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.939 qpair failed and we were unable to recover it. 00:36:49.939 [2024-11-17 09:36:54.608929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.939 [2024-11-17 09:36:54.608972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.939 qpair failed and we were unable to recover it. 00:36:49.939 [2024-11-17 09:36:54.609131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.939 [2024-11-17 09:36:54.609165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.939 qpair failed and we were unable to recover it. 00:36:49.939 [2024-11-17 09:36:54.609275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.939 [2024-11-17 09:36:54.609308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.939 qpair failed and we were unable to recover it. 00:36:49.939 [2024-11-17 09:36:54.609431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.939 [2024-11-17 09:36:54.609465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.939 qpair failed and we were unable to recover it. 00:36:49.939 [2024-11-17 09:36:54.609575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.939 [2024-11-17 09:36:54.609609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.939 qpair failed and we were unable to recover it. 00:36:49.939 [2024-11-17 09:36:54.609745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.939 [2024-11-17 09:36:54.609779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.939 qpair failed and we were unable to recover it. 00:36:49.939 [2024-11-17 09:36:54.609930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.939 [2024-11-17 09:36:54.609963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.939 qpair failed and we were unable to recover it. 
00:36:49.939 [2024-11-17 09:36:54.610149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.939 [2024-11-17 09:36:54.610203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.939 qpair failed and we were unable to recover it. 00:36:49.939 [2024-11-17 09:36:54.610350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.939 [2024-11-17 09:36:54.610402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.939 qpair failed and we were unable to recover it. 00:36:49.939 [2024-11-17 09:36:54.610528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.939 [2024-11-17 09:36:54.610577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.939 qpair failed and we were unable to recover it. 00:36:49.939 [2024-11-17 09:36:54.610758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.939 [2024-11-17 09:36:54.610794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.939 qpair failed and we were unable to recover it. 00:36:49.939 [2024-11-17 09:36:54.610994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.939 [2024-11-17 09:36:54.611031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.939 qpair failed and we were unable to recover it. 00:36:49.939 [2024-11-17 09:36:54.611210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.939 [2024-11-17 09:36:54.611248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.939 qpair failed and we were unable to recover it. 00:36:49.939 [2024-11-17 09:36:54.611423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.939 [2024-11-17 09:36:54.611458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.939 qpair failed and we were unable to recover it. 00:36:49.939 [2024-11-17 09:36:54.611607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.939 [2024-11-17 09:36:54.611645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.939 qpair failed and we were unable to recover it. 00:36:49.939 [2024-11-17 09:36:54.611904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.939 [2024-11-17 09:36:54.611982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.939 qpair failed and we were unable to recover it. 00:36:49.939 [2024-11-17 09:36:54.612141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.939 [2024-11-17 09:36:54.612240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.939 qpair failed and we were unable to recover it. 
00:36:49.939 [2024-11-17 09:36:54.612416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.939 [2024-11-17 09:36:54.612451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.939 qpair failed and we were unable to recover it. 00:36:49.939 [2024-11-17 09:36:54.612601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.939 [2024-11-17 09:36:54.612657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.939 qpair failed and we were unable to recover it. 00:36:49.939 [2024-11-17 09:36:54.612849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.939 [2024-11-17 09:36:54.612899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.939 qpair failed and we were unable to recover it. 00:36:49.939 [2024-11-17 09:36:54.613006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.939 [2024-11-17 09:36:54.613041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.939 qpair failed and we were unable to recover it. 00:36:49.939 [2024-11-17 09:36:54.613159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.939 [2024-11-17 09:36:54.613194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.939 qpair failed and we were unable to recover it. 00:36:49.939 [2024-11-17 09:36:54.613337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.939 [2024-11-17 09:36:54.613377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.939 qpair failed and we were unable to recover it. 00:36:49.939 [2024-11-17 09:36:54.613540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.939 [2024-11-17 09:36:54.613573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.939 qpair failed and we were unable to recover it. 00:36:49.940 [2024-11-17 09:36:54.613708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.940 [2024-11-17 09:36:54.613742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.940 qpair failed and we were unable to recover it. 00:36:49.940 [2024-11-17 09:36:54.613839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.940 [2024-11-17 09:36:54.613872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.940 qpair failed and we were unable to recover it. 00:36:49.940 [2024-11-17 09:36:54.613984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.940 [2024-11-17 09:36:54.614016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.940 qpair failed and we were unable to recover it. 
00:36:49.940 [2024-11-17 09:36:54.614175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.940 [2024-11-17 09:36:54.614209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.940 qpair failed and we were unable to recover it. 00:36:49.940 [2024-11-17 09:36:54.614313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.940 [2024-11-17 09:36:54.614347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.940 qpair failed and we were unable to recover it. 00:36:49.940 [2024-11-17 09:36:54.614523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.940 [2024-11-17 09:36:54.614560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.940 qpair failed and we were unable to recover it. 00:36:49.940 [2024-11-17 09:36:54.614730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.940 [2024-11-17 09:36:54.614785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.940 qpair failed and we were unable to recover it. 00:36:49.940 [2024-11-17 09:36:54.614933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.940 [2024-11-17 09:36:54.614973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.940 qpair failed and we were unable to recover it. 00:36:49.940 [2024-11-17 09:36:54.615148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.940 [2024-11-17 09:36:54.615205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.940 qpair failed and we were unable to recover it. 00:36:49.940 [2024-11-17 09:36:54.615343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.940 [2024-11-17 09:36:54.615385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.940 qpair failed and we were unable to recover it. 00:36:49.940 [2024-11-17 09:36:54.615515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.940 [2024-11-17 09:36:54.615552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.940 qpair failed and we were unable to recover it. 00:36:49.940 [2024-11-17 09:36:54.615703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.940 [2024-11-17 09:36:54.615740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.940 qpair failed and we were unable to recover it. 00:36:49.940 [2024-11-17 09:36:54.615905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.940 [2024-11-17 09:36:54.615942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.940 qpair failed and we were unable to recover it. 
00:36:49.940 [2024-11-17 09:36:54.616101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.940 [2024-11-17 09:36:54.616163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.940 qpair failed and we were unable to recover it. 00:36:49.940 [2024-11-17 09:36:54.616335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.940 [2024-11-17 09:36:54.616381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.940 qpair failed and we were unable to recover it. 00:36:49.940 [2024-11-17 09:36:54.616530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.940 [2024-11-17 09:36:54.616563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.940 qpair failed and we were unable to recover it. 00:36:49.940 [2024-11-17 09:36:54.616682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.940 [2024-11-17 09:36:54.616745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.940 qpair failed and we were unable to recover it. 00:36:49.940 [2024-11-17 09:36:54.616886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.940 [2024-11-17 09:36:54.616925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.940 qpair failed and we were unable to recover it. 00:36:49.940 [2024-11-17 09:36:54.617055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.940 [2024-11-17 09:36:54.617106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.940 qpair failed and we were unable to recover it. 00:36:49.940 [2024-11-17 09:36:54.617250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.940 [2024-11-17 09:36:54.617288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.940 qpair failed and we were unable to recover it. 00:36:49.940 [2024-11-17 09:36:54.617449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.940 [2024-11-17 09:36:54.617483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.940 qpair failed and we were unable to recover it. 00:36:49.940 [2024-11-17 09:36:54.617587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.940 [2024-11-17 09:36:54.617639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.940 qpair failed and we were unable to recover it. 00:36:49.940 [2024-11-17 09:36:54.617816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.940 [2024-11-17 09:36:54.617853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.940 qpair failed and we were unable to recover it. 
00:36:49.940 [2024-11-17 09:36:54.617962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.940 [2024-11-17 09:36:54.617999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.940 qpair failed and we were unable to recover it. 00:36:49.940 [2024-11-17 09:36:54.618146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.940 [2024-11-17 09:36:54.618188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.940 qpair failed and we were unable to recover it. 00:36:49.940 [2024-11-17 09:36:54.618335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.940 [2024-11-17 09:36:54.618380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.940 qpair failed and we were unable to recover it. 00:36:49.940 [2024-11-17 09:36:54.618498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.940 [2024-11-17 09:36:54.618532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.940 qpair failed and we were unable to recover it. 00:36:49.940 [2024-11-17 09:36:54.618690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.940 [2024-11-17 09:36:54.618730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.940 qpair failed and we were unable to recover it. 00:36:49.940 [2024-11-17 09:36:54.618979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.940 [2024-11-17 09:36:54.619036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.940 qpair failed and we were unable to recover it. 00:36:49.940 [2024-11-17 09:36:54.619148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.940 [2024-11-17 09:36:54.619185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.940 qpair failed and we were unable to recover it. 00:36:49.940 [2024-11-17 09:36:54.619385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.940 [2024-11-17 09:36:54.619434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.940 qpair failed and we were unable to recover it. 00:36:49.940 [2024-11-17 09:36:54.619611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.940 [2024-11-17 09:36:54.619648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.940 qpair failed and we were unable to recover it. 00:36:49.940 [2024-11-17 09:36:54.619785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.940 [2024-11-17 09:36:54.619821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.940 qpair failed and we were unable to recover it. 
00:36:49.940 [2024-11-17 09:36:54.619970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.940 [2024-11-17 09:36:54.620021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.940 qpair failed and we were unable to recover it. 00:36:49.940 [2024-11-17 09:36:54.620173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.940 [2024-11-17 09:36:54.620207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.940 qpair failed and we were unable to recover it. 00:36:49.940 [2024-11-17 09:36:54.620344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.940 [2024-11-17 09:36:54.620397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.940 qpair failed and we were unable to recover it. 00:36:49.940 [2024-11-17 09:36:54.620512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.940 [2024-11-17 09:36:54.620546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.940 qpair failed and we were unable to recover it. 00:36:49.940 [2024-11-17 09:36:54.620712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.941 [2024-11-17 09:36:54.620752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.941 qpair failed and we were unable to recover it. 00:36:49.941 [2024-11-17 09:36:54.620906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.941 [2024-11-17 09:36:54.620940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.941 qpair failed and we were unable to recover it. 00:36:49.941 [2024-11-17 09:36:54.621073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.941 [2024-11-17 09:36:54.621108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.941 qpair failed and we were unable to recover it. 00:36:49.941 [2024-11-17 09:36:54.621238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.941 [2024-11-17 09:36:54.621273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.941 qpair failed and we were unable to recover it. 00:36:49.941 [2024-11-17 09:36:54.621435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.941 [2024-11-17 09:36:54.621486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.941 qpair failed and we were unable to recover it. 00:36:49.941 [2024-11-17 09:36:54.621665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.941 [2024-11-17 09:36:54.621713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.941 qpair failed and we were unable to recover it. 
00:36:49.941 [2024-11-17 09:36:54.621887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.941 [2024-11-17 09:36:54.621923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.941 qpair failed and we were unable to recover it. 00:36:49.941 [2024-11-17 09:36:54.622060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.941 [2024-11-17 09:36:54.622138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.941 qpair failed and we were unable to recover it. 00:36:49.941 [2024-11-17 09:36:54.622288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.941 [2024-11-17 09:36:54.622322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.941 qpair failed and we were unable to recover it. 00:36:49.941 [2024-11-17 09:36:54.622464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.941 [2024-11-17 09:36:54.622500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.941 qpair failed and we were unable to recover it. 00:36:49.941 [2024-11-17 09:36:54.622660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.941 [2024-11-17 09:36:54.622698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.941 qpair failed and we were unable to recover it. 00:36:49.941 [2024-11-17 09:36:54.622910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.941 [2024-11-17 09:36:54.622966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.941 qpair failed and we were unable to recover it. 00:36:49.941 [2024-11-17 09:36:54.623274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.941 [2024-11-17 09:36:54.623320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.941 qpair failed and we were unable to recover it. 00:36:49.941 [2024-11-17 09:36:54.623514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.941 [2024-11-17 09:36:54.623549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.941 qpair failed and we were unable to recover it. 00:36:49.941 [2024-11-17 09:36:54.623705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.941 [2024-11-17 09:36:54.623742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.941 qpair failed and we were unable to recover it. 00:36:49.941 [2024-11-17 09:36:54.623914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.941 [2024-11-17 09:36:54.623951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.941 qpair failed and we were unable to recover it. 
00:36:49.941 [2024-11-17 09:36:54.624128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.941 [2024-11-17 09:36:54.624165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.941 qpair failed and we were unable to recover it. 00:36:49.941 [2024-11-17 09:36:54.624267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.941 [2024-11-17 09:36:54.624321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.941 qpair failed and we were unable to recover it. 00:36:49.941 [2024-11-17 09:36:54.624464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.941 [2024-11-17 09:36:54.624504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.941 qpair failed and we were unable to recover it. 00:36:49.941 [2024-11-17 09:36:54.624630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.941 [2024-11-17 09:36:54.624704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.941 qpair failed and we were unable to recover it. 00:36:49.941 [2024-11-17 09:36:54.624867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.941 [2024-11-17 09:36:54.624908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.941 qpair failed and we were unable to recover it. 00:36:49.941 [2024-11-17 09:36:54.625113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.941 [2024-11-17 09:36:54.625163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.941 qpair failed and we were unable to recover it. 00:36:49.941 [2024-11-17 09:36:54.625288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.941 [2024-11-17 09:36:54.625338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.941 qpair failed and we were unable to recover it. 00:36:49.941 [2024-11-17 09:36:54.625505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.941 [2024-11-17 09:36:54.625539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.941 qpair failed and we were unable to recover it. 00:36:49.941 [2024-11-17 09:36:54.625674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.941 [2024-11-17 09:36:54.625708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.941 qpair failed and we were unable to recover it. 00:36:49.941 [2024-11-17 09:36:54.625865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.941 [2024-11-17 09:36:54.625902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.941 qpair failed and we were unable to recover it. 
00:36:49.941 [2024-11-17 09:36:54.626059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.941 [2024-11-17 09:36:54.626097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.941 qpair failed and we were unable to recover it. 00:36:49.941 [2024-11-17 09:36:54.626239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.941 [2024-11-17 09:36:54.626283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.941 qpair failed and we were unable to recover it. 00:36:49.941 [2024-11-17 09:36:54.626434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.941 [2024-11-17 09:36:54.626469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.941 qpair failed and we were unable to recover it. 00:36:49.941 [2024-11-17 09:36:54.626607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.941 [2024-11-17 09:36:54.626658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.941 qpair failed and we were unable to recover it. 00:36:49.941 [2024-11-17 09:36:54.626837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.941 [2024-11-17 09:36:54.626878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.941 qpair failed and we were unable to recover it. 00:36:49.942 [2024-11-17 09:36:54.627051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.942 [2024-11-17 09:36:54.627088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.942 qpair failed and we were unable to recover it. 00:36:49.942 [2024-11-17 09:36:54.627214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.942 [2024-11-17 09:36:54.627253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.942 qpair failed and we were unable to recover it. 00:36:49.942 [2024-11-17 09:36:54.627409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.942 [2024-11-17 09:36:54.627443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.942 qpair failed and we were unable to recover it. 00:36:49.942 [2024-11-17 09:36:54.627571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.942 [2024-11-17 09:36:54.627605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.942 qpair failed and we were unable to recover it. 00:36:49.942 [2024-11-17 09:36:54.627725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.942 [2024-11-17 09:36:54.627763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.942 qpair failed and we were unable to recover it. 
00:36:49.942 [2024-11-17 09:36:54.627968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.942 [2024-11-17 09:36:54.628005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.942 qpair failed and we were unable to recover it. 00:36:49.942 [2024-11-17 09:36:54.628156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.942 [2024-11-17 09:36:54.628194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.942 qpair failed and we were unable to recover it. 00:36:49.942 [2024-11-17 09:36:54.628340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.942 [2024-11-17 09:36:54.628393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.942 qpair failed and we were unable to recover it. 00:36:49.942 [2024-11-17 09:36:54.628573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.942 [2024-11-17 09:36:54.628621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.942 qpair failed and we were unable to recover it. 00:36:49.942 [2024-11-17 09:36:54.628791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.942 [2024-11-17 09:36:54.628846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.942 qpair failed and we were unable to recover it. 00:36:49.942 [2024-11-17 09:36:54.629108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.942 [2024-11-17 09:36:54.629163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.942 qpair failed and we were unable to recover it. 00:36:49.942 [2024-11-17 09:36:54.629311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.942 [2024-11-17 09:36:54.629357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.942 qpair failed and we were unable to recover it. 00:36:49.942 [2024-11-17 09:36:54.629544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.942 [2024-11-17 09:36:54.629598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.942 qpair failed and we were unable to recover it. 00:36:49.942 [2024-11-17 09:36:54.629733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.942 [2024-11-17 09:36:54.629773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.942 qpair failed and we were unable to recover it. 00:36:49.942 [2024-11-17 09:36:54.629928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.942 [2024-11-17 09:36:54.629965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.942 qpair failed and we were unable to recover it. 
00:36:49.942 [2024-11-17 09:36:54.630084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.942 [2024-11-17 09:36:54.630122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.942 qpair failed and we were unable to recover it. 00:36:49.942 [2024-11-17 09:36:54.630316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.942 [2024-11-17 09:36:54.630364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.942 qpair failed and we were unable to recover it. 00:36:49.942 [2024-11-17 09:36:54.630504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.942 [2024-11-17 09:36:54.630538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.942 qpair failed and we were unable to recover it. 00:36:49.942 [2024-11-17 09:36:54.630728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.942 [2024-11-17 09:36:54.630765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.942 qpair failed and we were unable to recover it. 00:36:49.942 [2024-11-17 09:36:54.630881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.942 [2024-11-17 09:36:54.630917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.942 qpair failed and we were unable to recover it. 00:36:49.942 [2024-11-17 09:36:54.631084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.942 [2024-11-17 09:36:54.631121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.942 qpair failed and we were unable to recover it. 00:36:49.942 [2024-11-17 09:36:54.631254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.942 [2024-11-17 09:36:54.631291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.942 qpair failed and we were unable to recover it. 00:36:49.942 [2024-11-17 09:36:54.631432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.942 [2024-11-17 09:36:54.631466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.942 qpair failed and we were unable to recover it. 00:36:49.942 [2024-11-17 09:36:54.631627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.942 [2024-11-17 09:36:54.631683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.942 qpair failed and we were unable to recover it. 00:36:49.942 [2024-11-17 09:36:54.631880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.942 [2024-11-17 09:36:54.631933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.942 qpair failed and we were unable to recover it. 
00:36:49.942 [2024-11-17 09:36:54.632099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.942 [2024-11-17 09:36:54.632149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.942 qpair failed and we were unable to recover it. 00:36:49.942 [2024-11-17 09:36:54.632390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.942 [2024-11-17 09:36:54.632424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.942 qpair failed and we were unable to recover it. 00:36:49.942 [2024-11-17 09:36:54.632582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.942 [2024-11-17 09:36:54.632636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.942 qpair failed and we were unable to recover it. 00:36:49.942 [2024-11-17 09:36:54.632770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.942 [2024-11-17 09:36:54.632823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.942 qpair failed and we were unable to recover it. 00:36:49.942 [2024-11-17 09:36:54.632979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.942 [2024-11-17 09:36:54.633033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.942 qpair failed and we were unable to recover it. 00:36:49.942 [2024-11-17 09:36:54.633191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.942 [2024-11-17 09:36:54.633226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.942 qpair failed and we were unable to recover it. 00:36:49.942 [2024-11-17 09:36:54.633387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.942 [2024-11-17 09:36:54.633422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.942 qpair failed and we were unable to recover it. 00:36:49.942 [2024-11-17 09:36:54.633542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.942 [2024-11-17 09:36:54.633609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.942 qpair failed and we were unable to recover it. 00:36:49.942 [2024-11-17 09:36:54.633770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.942 [2024-11-17 09:36:54.633809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.942 qpair failed and we were unable to recover it. 00:36:49.942 [2024-11-17 09:36:54.633961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.942 [2024-11-17 09:36:54.633999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.942 qpair failed and we were unable to recover it. 
00:36:49.942 [2024-11-17 09:36:54.634178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.942 [2024-11-17 09:36:54.634216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.942 qpair failed and we were unable to recover it. 00:36:49.942 [2024-11-17 09:36:54.634452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.942 [2024-11-17 09:36:54.634498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.942 qpair failed and we were unable to recover it. 00:36:49.942 [2024-11-17 09:36:54.634681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.943 [2024-11-17 09:36:54.634723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.943 qpair failed and we were unable to recover it. 00:36:49.943 [2024-11-17 09:36:54.634875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.943 [2024-11-17 09:36:54.634911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.943 qpair failed and we were unable to recover it. 00:36:49.943 [2024-11-17 09:36:54.635063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.943 [2024-11-17 09:36:54.635097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.943 qpair failed and we were unable to recover it. 00:36:49.943 [2024-11-17 09:36:54.635252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.943 [2024-11-17 09:36:54.635302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.943 qpair failed and we were unable to recover it. 00:36:49.943 [2024-11-17 09:36:54.635476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.943 [2024-11-17 09:36:54.635525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.943 qpair failed and we were unable to recover it. 00:36:49.943 [2024-11-17 09:36:54.635686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.943 [2024-11-17 09:36:54.635725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.943 qpair failed and we were unable to recover it. 00:36:49.943 [2024-11-17 09:36:54.635881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.943 [2024-11-17 09:36:54.635919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.943 qpair failed and we were unable to recover it. 00:36:49.943 [2024-11-17 09:36:54.636038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.943 [2024-11-17 09:36:54.636074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.943 qpair failed and we were unable to recover it. 
[... the same three-message failure sequence repeats for every remaining connection attempt between [2024-11-17 09:36:54.636208] and [2024-11-17 09:36:54.677540] (console time 00:36:49.943–00:36:49.948): posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00, 0x6150001ffe80, 0x615000210000, or 0x61500021ff00 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. ...]
00:36:49.948 [2024-11-17 09:36:54.677698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.948 [2024-11-17 09:36:54.677749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.948 qpair failed and we were unable to recover it. 00:36:49.948 [2024-11-17 09:36:54.677933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.948 [2024-11-17 09:36:54.677986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.948 qpair failed and we were unable to recover it. 00:36:49.948 [2024-11-17 09:36:54.678119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.948 [2024-11-17 09:36:54.678153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.948 qpair failed and we were unable to recover it. 00:36:49.948 [2024-11-17 09:36:54.678292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.948 [2024-11-17 09:36:54.678328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.948 qpair failed and we were unable to recover it. 00:36:49.948 [2024-11-17 09:36:54.678491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.948 [2024-11-17 09:36:54.678525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.948 qpair failed and we were unable to recover it. 00:36:49.948 [2024-11-17 09:36:54.678635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.948 [2024-11-17 09:36:54.678675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.948 qpair failed and we were unable to recover it. 00:36:49.948 [2024-11-17 09:36:54.678818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.948 [2024-11-17 09:36:54.678855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.948 qpair failed and we were unable to recover it. 00:36:49.948 [2024-11-17 09:36:54.678967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.948 [2024-11-17 09:36:54.679005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.948 qpair failed and we were unable to recover it. 00:36:49.948 [2024-11-17 09:36:54.679143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.948 [2024-11-17 09:36:54.679217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.948 qpair failed and we were unable to recover it. 00:36:49.948 [2024-11-17 09:36:54.679408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.948 [2024-11-17 09:36:54.679445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.948 qpair failed and we were unable to recover it. 
00:36:49.948 [2024-11-17 09:36:54.679588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.948 [2024-11-17 09:36:54.679640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.948 qpair failed and we were unable to recover it. 00:36:49.948 [2024-11-17 09:36:54.679802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.948 [2024-11-17 09:36:54.679842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.948 qpair failed and we were unable to recover it. 00:36:49.948 [2024-11-17 09:36:54.679960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.948 [2024-11-17 09:36:54.679998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.948 qpair failed and we were unable to recover it. 00:36:49.949 [2024-11-17 09:36:54.680129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.949 [2024-11-17 09:36:54.680163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.949 qpair failed and we were unable to recover it. 00:36:49.949 [2024-11-17 09:36:54.680329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.949 [2024-11-17 09:36:54.680363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.949 qpair failed and we were unable to recover it. 00:36:49.949 [2024-11-17 09:36:54.680503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.949 [2024-11-17 09:36:54.680538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.949 qpair failed and we were unable to recover it. 00:36:49.949 [2024-11-17 09:36:54.680704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.949 [2024-11-17 09:36:54.680747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.949 qpair failed and we were unable to recover it. 00:36:49.949 [2024-11-17 09:36:54.680854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.949 [2024-11-17 09:36:54.680891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.949 qpair failed and we were unable to recover it. 00:36:49.949 [2024-11-17 09:36:54.681020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.949 [2024-11-17 09:36:54.681055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.949 qpair failed and we were unable to recover it. 00:36:49.949 [2024-11-17 09:36:54.681269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.949 [2024-11-17 09:36:54.681303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.949 qpair failed and we were unable to recover it. 
00:36:49.949 [2024-11-17 09:36:54.681435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.949 [2024-11-17 09:36:54.681489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.949 qpair failed and we were unable to recover it. 00:36:49.949 [2024-11-17 09:36:54.681715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.949 [2024-11-17 09:36:54.681770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.949 qpair failed and we were unable to recover it. 00:36:49.949 [2024-11-17 09:36:54.682008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.949 [2024-11-17 09:36:54.682063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.949 qpair failed and we were unable to recover it. 00:36:49.949 [2024-11-17 09:36:54.682199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.949 [2024-11-17 09:36:54.682232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.949 qpair failed and we were unable to recover it. 00:36:49.949 [2024-11-17 09:36:54.682378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.949 [2024-11-17 09:36:54.682413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.949 qpair failed and we were unable to recover it. 00:36:49.949 [2024-11-17 09:36:54.682591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.949 [2024-11-17 09:36:54.682644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.949 qpair failed and we were unable to recover it. 00:36:49.949 [2024-11-17 09:36:54.682805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.949 [2024-11-17 09:36:54.682853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.949 qpair failed and we were unable to recover it. 00:36:49.949 [2024-11-17 09:36:54.683011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.949 [2024-11-17 09:36:54.683090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.949 qpair failed and we were unable to recover it. 00:36:49.949 [2024-11-17 09:36:54.683225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.949 [2024-11-17 09:36:54.683258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.949 qpair failed and we were unable to recover it. 00:36:49.949 [2024-11-17 09:36:54.683373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.949 [2024-11-17 09:36:54.683407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.949 qpair failed and we were unable to recover it. 
00:36:49.949 [2024-11-17 09:36:54.683544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.949 [2024-11-17 09:36:54.683577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.949 qpair failed and we were unable to recover it. 00:36:49.949 [2024-11-17 09:36:54.683692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.949 [2024-11-17 09:36:54.683742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.949 qpair failed and we were unable to recover it. 00:36:49.949 [2024-11-17 09:36:54.683899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.949 [2024-11-17 09:36:54.683951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.949 qpair failed and we were unable to recover it. 00:36:49.949 [2024-11-17 09:36:54.684124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.949 [2024-11-17 09:36:54.684178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.949 qpair failed and we were unable to recover it. 00:36:49.949 [2024-11-17 09:36:54.684304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.949 [2024-11-17 09:36:54.684344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.949 qpair failed and we were unable to recover it. 00:36:49.949 [2024-11-17 09:36:54.684508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.949 [2024-11-17 09:36:54.684575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.949 qpair failed and we were unable to recover it. 00:36:49.949 [2024-11-17 09:36:54.684743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.949 [2024-11-17 09:36:54.684783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.949 qpair failed and we were unable to recover it. 00:36:49.949 [2024-11-17 09:36:54.684931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.949 [2024-11-17 09:36:54.684969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.949 qpair failed and we were unable to recover it. 00:36:49.949 [2024-11-17 09:36:54.685081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.949 [2024-11-17 09:36:54.685119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.949 qpair failed and we were unable to recover it. 00:36:49.949 [2024-11-17 09:36:54.685274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.949 [2024-11-17 09:36:54.685309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.949 qpair failed and we were unable to recover it. 
00:36:49.949 [2024-11-17 09:36:54.685423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.949 [2024-11-17 09:36:54.685457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.949 qpair failed and we were unable to recover it. 00:36:49.949 [2024-11-17 09:36:54.685582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.949 [2024-11-17 09:36:54.685622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.949 qpair failed and we were unable to recover it. 00:36:49.949 [2024-11-17 09:36:54.685740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.949 [2024-11-17 09:36:54.685777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.949 qpair failed and we were unable to recover it. 00:36:49.949 [2024-11-17 09:36:54.685926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.949 [2024-11-17 09:36:54.685964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.949 qpair failed and we were unable to recover it. 00:36:49.949 [2024-11-17 09:36:54.686113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.949 [2024-11-17 09:36:54.686150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.949 qpair failed and we were unable to recover it. 00:36:49.949 [2024-11-17 09:36:54.686307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.949 [2024-11-17 09:36:54.686345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.949 qpair failed and we were unable to recover it. 00:36:49.949 [2024-11-17 09:36:54.686519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.949 [2024-11-17 09:36:54.686556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.949 qpair failed and we were unable to recover it. 00:36:49.949 [2024-11-17 09:36:54.686714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.949 [2024-11-17 09:36:54.686766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.949 qpair failed and we were unable to recover it. 00:36:49.949 [2024-11-17 09:36:54.686969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.949 [2024-11-17 09:36:54.687007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.949 qpair failed and we were unable to recover it. 00:36:49.949 [2024-11-17 09:36:54.687221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.949 [2024-11-17 09:36:54.687280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.949 qpair failed and we were unable to recover it. 
00:36:49.949 [2024-11-17 09:36:54.687460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.949 [2024-11-17 09:36:54.687495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.950 qpair failed and we were unable to recover it. 00:36:49.950 [2024-11-17 09:36:54.687635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.950 [2024-11-17 09:36:54.687673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.950 qpair failed and we were unable to recover it. 00:36:49.950 [2024-11-17 09:36:54.687833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.950 [2024-11-17 09:36:54.687871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.950 qpair failed and we were unable to recover it. 00:36:49.950 [2024-11-17 09:36:54.688016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.950 [2024-11-17 09:36:54.688054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.950 qpair failed and we were unable to recover it. 00:36:49.950 [2024-11-17 09:36:54.688165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.950 [2024-11-17 09:36:54.688205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.950 qpair failed and we were unable to recover it. 00:36:49.950 [2024-11-17 09:36:54.688372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.950 [2024-11-17 09:36:54.688408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.950 qpair failed and we were unable to recover it. 00:36:49.950 [2024-11-17 09:36:54.688570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.950 [2024-11-17 09:36:54.688603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.950 qpair failed and we were unable to recover it. 00:36:49.950 [2024-11-17 09:36:54.688719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.950 [2024-11-17 09:36:54.688755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.950 qpair failed and we were unable to recover it. 00:36:49.950 [2024-11-17 09:36:54.688940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.950 [2024-11-17 09:36:54.688978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.950 qpair failed and we were unable to recover it. 00:36:49.950 [2024-11-17 09:36:54.689158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.950 [2024-11-17 09:36:54.689196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.950 qpair failed and we were unable to recover it. 
00:36:49.950 [2024-11-17 09:36:54.689375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.950 [2024-11-17 09:36:54.689428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.950 qpair failed and we were unable to recover it. 00:36:49.950 [2024-11-17 09:36:54.689554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.950 [2024-11-17 09:36:54.689592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.950 qpair failed and we were unable to recover it. 00:36:49.950 [2024-11-17 09:36:54.689777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.950 [2024-11-17 09:36:54.689816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.950 qpair failed and we were unable to recover it. 00:36:49.950 [2024-11-17 09:36:54.689935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.950 [2024-11-17 09:36:54.689973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.950 qpair failed and we were unable to recover it. 00:36:49.950 [2024-11-17 09:36:54.690151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.950 [2024-11-17 09:36:54.690189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.950 qpair failed and we were unable to recover it. 00:36:49.950 [2024-11-17 09:36:54.690320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.950 [2024-11-17 09:36:54.690362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.950 qpair failed and we were unable to recover it. 00:36:49.950 [2024-11-17 09:36:54.690496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.950 [2024-11-17 09:36:54.690531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.950 qpair failed and we were unable to recover it. 00:36:49.950 [2024-11-17 09:36:54.690691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.950 [2024-11-17 09:36:54.690743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.950 qpair failed and we were unable to recover it. 00:36:49.950 [2024-11-17 09:36:54.690895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.950 [2024-11-17 09:36:54.690947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.950 qpair failed and we were unable to recover it. 00:36:49.950 [2024-11-17 09:36:54.691100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.950 [2024-11-17 09:36:54.691150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.950 qpair failed and we were unable to recover it. 
00:36:49.950 [2024-11-17 09:36:54.691280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.950 [2024-11-17 09:36:54.691314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.950 qpair failed and we were unable to recover it. 00:36:49.950 [2024-11-17 09:36:54.691520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.950 [2024-11-17 09:36:54.691573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.950 qpair failed and we were unable to recover it. 00:36:49.950 [2024-11-17 09:36:54.691717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.950 [2024-11-17 09:36:54.691765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.950 qpair failed and we were unable to recover it. 00:36:49.950 [2024-11-17 09:36:54.691909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.950 [2024-11-17 09:36:54.691945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.950 qpair failed and we were unable to recover it. 00:36:49.950 [2024-11-17 09:36:54.692104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.950 [2024-11-17 09:36:54.692137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.950 qpair failed and we were unable to recover it. 00:36:49.950 [2024-11-17 09:36:54.692240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.950 [2024-11-17 09:36:54.692274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.950 qpair failed and we were unable to recover it. 00:36:49.950 [2024-11-17 09:36:54.692387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.950 [2024-11-17 09:36:54.692423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.950 qpair failed and we were unable to recover it. 00:36:49.950 [2024-11-17 09:36:54.692564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.950 [2024-11-17 09:36:54.692597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.950 qpair failed and we were unable to recover it. 00:36:49.950 [2024-11-17 09:36:54.692705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.950 [2024-11-17 09:36:54.692756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.950 qpair failed and we were unable to recover it. 00:36:49.950 [2024-11-17 09:36:54.692930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.950 [2024-11-17 09:36:54.692967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.950 qpair failed and we were unable to recover it. 
00:36:49.950 [2024-11-17 09:36:54.693136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.950 [2024-11-17 09:36:54.693172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.950 qpair failed and we were unable to recover it. 00:36:49.950 [2024-11-17 09:36:54.693288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.950 [2024-11-17 09:36:54.693325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.950 qpair failed and we were unable to recover it. 00:36:49.950 [2024-11-17 09:36:54.693516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.950 [2024-11-17 09:36:54.693564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.950 qpair failed and we were unable to recover it. 00:36:49.950 [2024-11-17 09:36:54.693781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.950 [2024-11-17 09:36:54.693848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.950 qpair failed and we were unable to recover it. 00:36:49.950 [2024-11-17 09:36:54.693984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.950 [2024-11-17 09:36:54.694038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.950 qpair failed and we were unable to recover it. 00:36:49.950 [2024-11-17 09:36:54.694204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.951 [2024-11-17 09:36:54.694242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.951 qpair failed and we were unable to recover it. 00:36:49.951 [2024-11-17 09:36:54.694438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.951 [2024-11-17 09:36:54.694472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.951 qpair failed and we were unable to recover it. 00:36:49.951 [2024-11-17 09:36:54.694593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.951 [2024-11-17 09:36:54.694641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.951 qpair failed and we were unable to recover it. 00:36:49.951 [2024-11-17 09:36:54.694797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.951 [2024-11-17 09:36:54.694853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.951 qpair failed and we were unable to recover it. 00:36:49.951 [2024-11-17 09:36:54.695021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.951 [2024-11-17 09:36:54.695075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.951 qpair failed and we were unable to recover it. 
00:36:49.951 [2024-11-17 09:36:54.695221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.951 [2024-11-17 09:36:54.695259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.951 qpair failed and we were unable to recover it. 00:36:49.951 [2024-11-17 09:36:54.695410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.951 [2024-11-17 09:36:54.695444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.951 qpair failed and we were unable to recover it. 00:36:49.951 [2024-11-17 09:36:54.695575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.951 [2024-11-17 09:36:54.695608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.951 qpair failed and we were unable to recover it. 00:36:49.951 [2024-11-17 09:36:54.695806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.951 [2024-11-17 09:36:54.695843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.951 qpair failed and we were unable to recover it. 00:36:49.951 [2024-11-17 09:36:54.696039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.951 [2024-11-17 09:36:54.696076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.951 qpair failed and we were unable to recover it. 00:36:49.951 [2024-11-17 09:36:54.696251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.951 [2024-11-17 09:36:54.696289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.951 qpair failed and we were unable to recover it. 00:36:49.951 [2024-11-17 09:36:54.696460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.951 [2024-11-17 09:36:54.696494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.951 qpair failed and we were unable to recover it. 00:36:49.951 [2024-11-17 09:36:54.696728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.951 [2024-11-17 09:36:54.696765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.951 qpair failed and we were unable to recover it. 00:36:49.951 [2024-11-17 09:36:54.697059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.951 [2024-11-17 09:36:54.697135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.951 qpair failed and we were unable to recover it. 00:36:49.951 [2024-11-17 09:36:54.697327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.951 [2024-11-17 09:36:54.697374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.951 qpair failed and we were unable to recover it. 
00:36:49.951 [2024-11-17 09:36:54.697550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.951 [2024-11-17 09:36:54.697599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.951 qpair failed and we were unable to recover it. 00:36:49.951 [2024-11-17 09:36:54.697804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.951 [2024-11-17 09:36:54.697859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.951 qpair failed and we were unable to recover it. 00:36:49.951 [2024-11-17 09:36:54.698076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.951 [2024-11-17 09:36:54.698111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.951 qpair failed and we were unable to recover it. 00:36:49.951 [2024-11-17 09:36:54.698257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.951 [2024-11-17 09:36:54.698292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.951 qpair failed and we were unable to recover it. 00:36:49.951 [2024-11-17 09:36:54.698454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.951 [2024-11-17 09:36:54.698502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.951 qpair failed and we were unable to recover it. 00:36:49.951 [2024-11-17 09:36:54.698659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.951 [2024-11-17 09:36:54.698706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.951 qpair failed and we were unable to recover it. 00:36:49.951 [2024-11-17 09:36:54.698854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.951 [2024-11-17 09:36:54.698890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.951 qpair failed and we were unable to recover it. 00:36:49.951 [2024-11-17 09:36:54.699051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.951 [2024-11-17 09:36:54.699090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.951 qpair failed and we were unable to recover it. 00:36:49.951 [2024-11-17 09:36:54.699239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.951 [2024-11-17 09:36:54.699277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.951 qpair failed and we were unable to recover it. 00:36:49.951 [2024-11-17 09:36:54.699429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.951 [2024-11-17 09:36:54.699463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.951 qpair failed and we were unable to recover it. 
00:36:49.951 [2024-11-17 09:36:54.699600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.951 [2024-11-17 09:36:54.699635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.951 qpair failed and we were unable to recover it. 00:36:49.951 [2024-11-17 09:36:54.699762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.951 [2024-11-17 09:36:54.699803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.951 qpair failed and we were unable to recover it. 00:36:49.951 [2024-11-17 09:36:54.699984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.951 [2024-11-17 09:36:54.700037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.951 qpair failed and we were unable to recover it. 00:36:49.951 [2024-11-17 09:36:54.700169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.951 [2024-11-17 09:36:54.700203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.951 qpair failed and we were unable to recover it. 00:36:49.951 [2024-11-17 09:36:54.700325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.951 [2024-11-17 09:36:54.700381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.951 qpair failed and we were unable to recover it. 00:36:49.951 [2024-11-17 09:36:54.700564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.951 [2024-11-17 09:36:54.700612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.951 qpair failed and we were unable to recover it. 00:36:49.951 [2024-11-17 09:36:54.700758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.951 [2024-11-17 09:36:54.700792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.951 qpair failed and we were unable to recover it. 00:36:49.951 [2024-11-17 09:36:54.700902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.951 [2024-11-17 09:36:54.700936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.951 qpair failed and we were unable to recover it. 00:36:49.951 [2024-11-17 09:36:54.701069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.951 [2024-11-17 09:36:54.701103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.951 qpair failed and we were unable to recover it. 00:36:49.951 [2024-11-17 09:36:54.701234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.951 [2024-11-17 09:36:54.701267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.951 qpair failed and we were unable to recover it. 
00:36:49.951 [2024-11-17 09:36:54.701383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.951 [2024-11-17 09:36:54.701418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.951 qpair failed and we were unable to recover it. 00:36:49.951 [2024-11-17 09:36:54.701574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.952 [2024-11-17 09:36:54.701622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.952 qpair failed and we were unable to recover it. 00:36:49.952 [2024-11-17 09:36:54.701838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.952 [2024-11-17 09:36:54.701895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.952 qpair failed and we were unable to recover it. 00:36:49.952 [2024-11-17 09:36:54.702129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.952 [2024-11-17 09:36:54.702167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.952 qpair failed and we were unable to recover it. 00:36:49.952 [2024-11-17 09:36:54.702340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.952 [2024-11-17 09:36:54.702388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.952 qpair failed and we were unable to recover it. 00:36:49.952 [2024-11-17 09:36:54.702538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.952 [2024-11-17 09:36:54.702571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.952 qpair failed and we were unable to recover it. 00:36:49.952 [2024-11-17 09:36:54.702727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.952 [2024-11-17 09:36:54.702764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.952 qpair failed and we were unable to recover it. 00:36:49.952 [2024-11-17 09:36:54.703024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.952 [2024-11-17 09:36:54.703061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.952 qpair failed and we were unable to recover it. 00:36:49.952 [2024-11-17 09:36:54.703235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.952 [2024-11-17 09:36:54.703287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.952 qpair failed and we were unable to recover it. 00:36:49.952 [2024-11-17 09:36:54.703436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.952 [2024-11-17 09:36:54.703472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.952 qpair failed and we were unable to recover it. 
00:36:49.952 [2024-11-17 09:36:54.703574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.952 [2024-11-17 09:36:54.703608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.952 qpair failed and we were unable to recover it. 00:36:49.952 [2024-11-17 09:36:54.703756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.952 [2024-11-17 09:36:54.703807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.952 qpair failed and we were unable to recover it. 00:36:49.952 [2024-11-17 09:36:54.704064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.952 [2024-11-17 09:36:54.704121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.952 qpair failed and we were unable to recover it. 00:36:49.952 [2024-11-17 09:36:54.704306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.952 [2024-11-17 09:36:54.704344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.952 qpair failed and we were unable to recover it. 00:36:49.952 [2024-11-17 09:36:54.704473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.952 [2024-11-17 09:36:54.704517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.952 qpair failed and we were unable to recover it. 00:36:49.952 [2024-11-17 09:36:54.704676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.952 [2024-11-17 09:36:54.704713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.952 qpair failed and we were unable to recover it. 00:36:49.952 [2024-11-17 09:36:54.704883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.952 [2024-11-17 09:36:54.704956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.952 qpair failed and we were unable to recover it. 00:36:49.952 [2024-11-17 09:36:54.705096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.952 [2024-11-17 09:36:54.705133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.952 qpair failed and we were unable to recover it. 00:36:49.952 [2024-11-17 09:36:54.705283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.952 [2024-11-17 09:36:54.705322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.952 qpair failed and we were unable to recover it. 00:36:49.952 [2024-11-17 09:36:54.705517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.952 [2024-11-17 09:36:54.705565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.952 qpair failed and we were unable to recover it. 
00:36:49.952 [2024-11-17 09:36:54.705722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.952 [2024-11-17 09:36:54.705790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.952 qpair failed and we were unable to recover it. 00:36:49.952 [2024-11-17 09:36:54.706009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.952 [2024-11-17 09:36:54.706069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.952 qpair failed and we were unable to recover it. 00:36:49.952 [2024-11-17 09:36:54.706225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.952 [2024-11-17 09:36:54.706265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.952 qpair failed and we were unable to recover it. 00:36:49.952 [2024-11-17 09:36:54.706426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.952 [2024-11-17 09:36:54.706462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.952 qpair failed and we were unable to recover it. 00:36:49.952 [2024-11-17 09:36:54.706595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.952 [2024-11-17 09:36:54.706629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.952 qpair failed and we were unable to recover it. 00:36:49.952 [2024-11-17 09:36:54.706766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.952 [2024-11-17 09:36:54.706820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.952 qpair failed and we were unable to recover it. 00:36:49.952 [2024-11-17 09:36:54.707112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.952 [2024-11-17 09:36:54.707170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.952 qpair failed and we were unable to recover it. 00:36:49.952 [2024-11-17 09:36:54.707310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.952 [2024-11-17 09:36:54.707345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.952 qpair failed and we were unable to recover it. 00:36:49.952 [2024-11-17 09:36:54.707577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.952 [2024-11-17 09:36:54.707611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.952 qpair failed and we were unable to recover it. 00:36:49.952 [2024-11-17 09:36:54.707822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.952 [2024-11-17 09:36:54.707875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.952 qpair failed and we were unable to recover it. 
00:36:49.952 [2024-11-17 09:36:54.708132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.952 [2024-11-17 09:36:54.708199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.952 qpair failed and we were unable to recover it. 00:36:49.952 [2024-11-17 09:36:54.708306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.952 [2024-11-17 09:36:54.708345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.952 qpair failed and we were unable to recover it. 00:36:49.952 [2024-11-17 09:36:54.708513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.952 [2024-11-17 09:36:54.708561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.952 qpair failed and we were unable to recover it. 00:36:49.952 [2024-11-17 09:36:54.708761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.952 [2024-11-17 09:36:54.708814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.952 qpair failed and we were unable to recover it. 00:36:49.952 [2024-11-17 09:36:54.708988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.952 [2024-11-17 09:36:54.709046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.952 qpair failed and we were unable to recover it. 00:36:49.952 [2024-11-17 09:36:54.709281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.952 [2024-11-17 09:36:54.709341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.952 qpair failed and we were unable to recover it. 00:36:49.952 [2024-11-17 09:36:54.709456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.952 [2024-11-17 09:36:54.709491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.952 qpair failed and we were unable to recover it. 00:36:49.952 [2024-11-17 09:36:54.709626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.952 [2024-11-17 09:36:54.709661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.952 qpair failed and we were unable to recover it. 00:36:49.952 [2024-11-17 09:36:54.709887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.952 [2024-11-17 09:36:54.709924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.952 qpair failed and we were unable to recover it. 00:36:49.952 [2024-11-17 09:36:54.710123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.953 [2024-11-17 09:36:54.710181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.953 qpair failed and we were unable to recover it. 
00:36:49.953 [2024-11-17 09:36:54.710331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.953 [2024-11-17 09:36:54.710381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.953 qpair failed and we were unable to recover it. 00:36:49.953 [2024-11-17 09:36:54.710556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.953 [2024-11-17 09:36:54.710593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.953 qpair failed and we were unable to recover it. 00:36:49.953 [2024-11-17 09:36:54.710738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.953 [2024-11-17 09:36:54.710776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.953 qpair failed and we were unable to recover it. 00:36:49.953 [2024-11-17 09:36:54.711033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.953 [2024-11-17 09:36:54.711087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.953 qpair failed and we were unable to recover it. 00:36:49.953 [2024-11-17 09:36:54.711265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.953 [2024-11-17 09:36:54.711314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.953 qpair failed and we were unable to recover it. 00:36:49.953 [2024-11-17 09:36:54.711510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.953 [2024-11-17 09:36:54.711547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.953 qpair failed and we were unable to recover it. 00:36:49.953 [2024-11-17 09:36:54.711648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.953 [2024-11-17 09:36:54.711682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.953 qpair failed and we were unable to recover it. 00:36:49.953 [2024-11-17 09:36:54.711792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.953 [2024-11-17 09:36:54.711846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.953 qpair failed and we were unable to recover it. 00:36:49.953 [2024-11-17 09:36:54.711982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.953 [2024-11-17 09:36:54.712015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.953 qpair failed and we were unable to recover it. 00:36:49.953 [2024-11-17 09:36:54.712177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.953 [2024-11-17 09:36:54.712212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.953 qpair failed and we were unable to recover it. 
00:36:49.953 [2024-11-17 09:36:54.712340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.953 [2024-11-17 09:36:54.712379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.953 qpair failed and we were unable to recover it. 00:36:49.953 [2024-11-17 09:36:54.712485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.953 [2024-11-17 09:36:54.712518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.953 qpair failed and we were unable to recover it. 00:36:49.953 [2024-11-17 09:36:54.712674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.953 [2024-11-17 09:36:54.712712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.953 qpair failed and we were unable to recover it. 00:36:49.953 [2024-11-17 09:36:54.712855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.953 [2024-11-17 09:36:54.712894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.953 qpair failed and we were unable to recover it. 00:36:49.953 [2024-11-17 09:36:54.713076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.953 [2024-11-17 09:36:54.713113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.953 qpair failed and we were unable to recover it. 00:36:49.953 [2024-11-17 09:36:54.713281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.953 [2024-11-17 09:36:54.713316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.953 qpair failed and we were unable to recover it. 00:36:49.953 [2024-11-17 09:36:54.713462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.953 [2024-11-17 09:36:54.713497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.953 qpair failed and we were unable to recover it. 00:36:49.953 [2024-11-17 09:36:54.713651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.953 [2024-11-17 09:36:54.713701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.953 qpair failed and we were unable to recover it. 00:36:49.953 [2024-11-17 09:36:54.713945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.953 [2024-11-17 09:36:54.714006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.953 qpair failed and we were unable to recover it. 00:36:49.953 [2024-11-17 09:36:54.714171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.953 [2024-11-17 09:36:54.714206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.953 qpair failed and we were unable to recover it. 
00:36:49.953 [2024-11-17 09:36:54.714357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.953 [2024-11-17 09:36:54.714416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.953 qpair failed and we were unable to recover it. 00:36:49.953 [2024-11-17 09:36:54.714560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.953 [2024-11-17 09:36:54.714596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.953 qpair failed and we were unable to recover it. 00:36:49.953 [2024-11-17 09:36:54.714772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.953 [2024-11-17 09:36:54.714806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.953 qpair failed and we were unable to recover it. 00:36:49.953 [2024-11-17 09:36:54.714983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.953 [2024-11-17 09:36:54.715080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.953 qpair failed and we were unable to recover it. 00:36:49.953 [2024-11-17 09:36:54.715268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.953 [2024-11-17 09:36:54.715303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.953 qpair failed and we were unable to recover it. 00:36:49.953 [2024-11-17 09:36:54.715412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.953 [2024-11-17 09:36:54.715447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.953 qpair failed and we were unable to recover it. 00:36:49.953 [2024-11-17 09:36:54.715591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.953 [2024-11-17 09:36:54.715625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.953 qpair failed and we were unable to recover it. 00:36:49.953 [2024-11-17 09:36:54.715731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.953 [2024-11-17 09:36:54.715764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.953 qpair failed and we were unable to recover it. 00:36:49.953 [2024-11-17 09:36:54.715928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.953 [2024-11-17 09:36:54.715962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.953 qpair failed and we were unable to recover it. 00:36:49.953 [2024-11-17 09:36:54.716183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.953 [2024-11-17 09:36:54.716243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.953 qpair failed and we were unable to recover it. 
00:36:49.953 [2024-11-17 09:36:54.716408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.953 [2024-11-17 09:36:54.716442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.953 qpair failed and we were unable to recover it. 00:36:49.953 [2024-11-17 09:36:54.716556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.953 [2024-11-17 09:36:54.716590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.953 qpair failed and we were unable to recover it. 00:36:49.953 [2024-11-17 09:36:54.716725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.953 [2024-11-17 09:36:54.716778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.953 qpair failed and we were unable to recover it. 00:36:49.953 [2024-11-17 09:36:54.717028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.953 [2024-11-17 09:36:54.717101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.953 qpair failed and we were unable to recover it. 00:36:49.953 [2024-11-17 09:36:54.717298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.953 [2024-11-17 09:36:54.717334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.953 qpair failed and we were unable to recover it. 00:36:49.953 [2024-11-17 09:36:54.717454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.953 [2024-11-17 09:36:54.717490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.953 qpair failed and we were unable to recover it. 00:36:49.953 [2024-11-17 09:36:54.717628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.953 [2024-11-17 09:36:54.717662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.954 qpair failed and we were unable to recover it. 00:36:49.954 [2024-11-17 09:36:54.717856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.954 [2024-11-17 09:36:54.717891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.954 qpair failed and we were unable to recover it. 00:36:49.954 [2024-11-17 09:36:54.718020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.954 [2024-11-17 09:36:54.718065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.954 qpair failed and we were unable to recover it. 00:36:49.954 [2024-11-17 09:36:54.718219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.954 [2024-11-17 09:36:54.718257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.954 qpair failed and we were unable to recover it. 
00:36:49.954 [2024-11-17 09:36:54.718421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.954 [2024-11-17 09:36:54.718456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.954 qpair failed and we were unable to recover it. 00:36:49.954 [2024-11-17 09:36:54.718621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.954 [2024-11-17 09:36:54.718655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.954 qpair failed and we were unable to recover it. 00:36:49.954 [2024-11-17 09:36:54.718786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.954 [2024-11-17 09:36:54.718840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.954 qpair failed and we were unable to recover it. 00:36:49.954 [2024-11-17 09:36:54.719024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.954 [2024-11-17 09:36:54.719076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.954 qpair failed and we were unable to recover it. 00:36:49.954 [2024-11-17 09:36:54.719235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.954 [2024-11-17 09:36:54.719276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.954 qpair failed and we were unable to recover it. 00:36:49.954 [2024-11-17 09:36:54.719448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.954 [2024-11-17 09:36:54.719483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.954 qpair failed and we were unable to recover it. 00:36:49.954 [2024-11-17 09:36:54.719645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.954 [2024-11-17 09:36:54.719698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.954 qpair failed and we were unable to recover it. 00:36:49.954 [2024-11-17 09:36:54.719824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.954 [2024-11-17 09:36:54.719876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.954 qpair failed and we were unable to recover it. 00:36:49.954 [2024-11-17 09:36:54.720028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.954 [2024-11-17 09:36:54.720068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.954 qpair failed and we were unable to recover it. 00:36:49.954 [2024-11-17 09:36:54.720194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.954 [2024-11-17 09:36:54.720231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.954 qpair failed and we were unable to recover it. 
00:36:49.954 [2024-11-17 09:36:54.720365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.954 [2024-11-17 09:36:54.720408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.954 qpair failed and we were unable to recover it. 00:36:49.954 [2024-11-17 09:36:54.720568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.954 [2024-11-17 09:36:54.720601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.954 qpair failed and we were unable to recover it. 00:36:49.954 [2024-11-17 09:36:54.720747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.954 [2024-11-17 09:36:54.720785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.954 qpair failed and we were unable to recover it. 00:36:49.954 [2024-11-17 09:36:54.721005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.954 [2024-11-17 09:36:54.721043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.954 qpair failed and we were unable to recover it. 00:36:49.954 [2024-11-17 09:36:54.721188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.954 [2024-11-17 09:36:54.721225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.954 qpair failed and we were unable to recover it. 00:36:49.954 [2024-11-17 09:36:54.721343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.954 [2024-11-17 09:36:54.721391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.954 qpair failed and we were unable to recover it. 00:36:49.954 [2024-11-17 09:36:54.721543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.954 [2024-11-17 09:36:54.721578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.954 qpair failed and we were unable to recover it. 00:36:49.954 [2024-11-17 09:36:54.721740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.954 [2024-11-17 09:36:54.721773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.954 qpair failed and we were unable to recover it. 00:36:49.954 [2024-11-17 09:36:54.721904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.954 [2024-11-17 09:36:54.721944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.954 qpair failed and we were unable to recover it. 00:36:49.954 [2024-11-17 09:36:54.722087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.954 [2024-11-17 09:36:54.722161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.954 qpair failed and we were unable to recover it. 
00:36:49.954 [2024-11-17 09:36:54.722309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.954 [2024-11-17 09:36:54.722346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.954 qpair failed and we were unable to recover it. 00:36:49.954 [2024-11-17 09:36:54.722585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.954 [2024-11-17 09:36:54.722638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.954 qpair failed and we were unable to recover it. 00:36:49.954 [2024-11-17 09:36:54.722784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.954 [2024-11-17 09:36:54.722836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.954 qpair failed and we were unable to recover it. 00:36:49.954 [2024-11-17 09:36:54.722989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.954 [2024-11-17 09:36:54.723054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.954 qpair failed and we were unable to recover it. 00:36:49.954 [2024-11-17 09:36:54.723201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.954 [2024-11-17 09:36:54.723235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.954 qpair failed and we were unable to recover it. 00:36:49.954 [2024-11-17 09:36:54.723337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.954 [2024-11-17 09:36:54.723382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.954 qpair failed and we were unable to recover it. 00:36:49.954 [2024-11-17 09:36:54.723549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.954 [2024-11-17 09:36:54.723597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.954 qpair failed and we were unable to recover it. 00:36:49.954 [2024-11-17 09:36:54.723794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.954 [2024-11-17 09:36:54.723834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.954 qpair failed and we were unable to recover it. 00:36:49.954 [2024-11-17 09:36:54.723982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.954 [2024-11-17 09:36:54.724020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.954 qpair failed and we were unable to recover it. 00:36:49.954 [2024-11-17 09:36:54.724170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.954 [2024-11-17 09:36:54.724208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.954 qpair failed and we were unable to recover it. 
00:36:49.954 [2024-11-17 09:36:54.724380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.954 [2024-11-17 09:36:54.724414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.954 qpair failed and we were unable to recover it. 00:36:49.954 [2024-11-17 09:36:54.724550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.954 [2024-11-17 09:36:54.724584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.954 qpair failed and we were unable to recover it. 00:36:49.954 [2024-11-17 09:36:54.724745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.954 [2024-11-17 09:36:54.724797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.954 qpair failed and we were unable to recover it. 00:36:49.954 [2024-11-17 09:36:54.724952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.954 [2024-11-17 09:36:54.725004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.954 qpair failed and we were unable to recover it. 00:36:49.954 [2024-11-17 09:36:54.725142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.955 [2024-11-17 09:36:54.725177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.955 qpair failed and we were unable to recover it. 00:36:49.955 [2024-11-17 09:36:54.725339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.955 [2024-11-17 09:36:54.725378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.955 qpair failed and we were unable to recover it. 00:36:49.955 [2024-11-17 09:36:54.725559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.955 [2024-11-17 09:36:54.725611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.955 qpair failed and we were unable to recover it. 00:36:49.955 [2024-11-17 09:36:54.725745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.955 [2024-11-17 09:36:54.725778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.955 qpair failed and we were unable to recover it. 00:36:49.955 [2024-11-17 09:36:54.725943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.955 [2024-11-17 09:36:54.725977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.955 qpair failed and we were unable to recover it. 00:36:49.955 [2024-11-17 09:36:54.726074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.955 [2024-11-17 09:36:54.726108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.955 qpair failed and we were unable to recover it. 
00:36:49.955 [2024-11-17 09:36:54.726248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.955 [2024-11-17 09:36:54.726282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.955 qpair failed and we were unable to recover it. 00:36:49.955 [2024-11-17 09:36:54.726438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.955 [2024-11-17 09:36:54.726478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.955 qpair failed and we were unable to recover it. 00:36:49.955 [2024-11-17 09:36:54.726653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.955 [2024-11-17 09:36:54.726690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.955 qpair failed and we were unable to recover it. 00:36:49.955 [2024-11-17 09:36:54.726806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.955 [2024-11-17 09:36:54.726843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.955 qpair failed and we were unable to recover it. 00:36:49.955 [2024-11-17 09:36:54.726983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.955 [2024-11-17 09:36:54.727020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.955 qpair failed and we were unable to recover it. 00:36:49.955 [2024-11-17 09:36:54.727161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.955 [2024-11-17 09:36:54.727215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.955 qpair failed and we were unable to recover it. 00:36:49.955 [2024-11-17 09:36:54.727388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.955 [2024-11-17 09:36:54.727453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.955 qpair failed and we were unable to recover it. 00:36:49.955 [2024-11-17 09:36:54.727622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.955 [2024-11-17 09:36:54.727663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.955 qpair failed and we were unable to recover it. 00:36:49.955 [2024-11-17 09:36:54.727781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.955 [2024-11-17 09:36:54.727821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.955 qpair failed and we were unable to recover it. 00:36:49.955 [2024-11-17 09:36:54.727938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.955 [2024-11-17 09:36:54.727976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.955 qpair failed and we were unable to recover it. 
00:36:49.955 [2024-11-17 09:36:54.728165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.955 [2024-11-17 09:36:54.728218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.955 qpair failed and we were unable to recover it. 00:36:49.955 [2024-11-17 09:36:54.728325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.955 [2024-11-17 09:36:54.728360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.955 qpair failed and we were unable to recover it. 00:36:49.955 [2024-11-17 09:36:54.728513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.955 [2024-11-17 09:36:54.728548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.955 qpair failed and we were unable to recover it. 00:36:49.955 [2024-11-17 09:36:54.728661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.955 [2024-11-17 09:36:54.728694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.955 qpair failed and we were unable to recover it. 00:36:49.955 [2024-11-17 09:36:54.728873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.955 [2024-11-17 09:36:54.728910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.955 qpair failed and we were unable to recover it. 00:36:49.955 [2024-11-17 09:36:54.729114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.955 [2024-11-17 09:36:54.729179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.955 qpair failed and we were unable to recover it. 00:36:49.955 [2024-11-17 09:36:54.729299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.955 [2024-11-17 09:36:54.729339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.955 qpair failed and we were unable to recover it. 00:36:49.955 [2024-11-17 09:36:54.729512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.955 [2024-11-17 09:36:54.729548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.955 qpair failed and we were unable to recover it. 00:36:49.955 [2024-11-17 09:36:54.729726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.955 [2024-11-17 09:36:54.729786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.955 qpair failed and we were unable to recover it. 00:36:49.955 [2024-11-17 09:36:54.729940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.955 [2024-11-17 09:36:54.730016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.955 qpair failed and we were unable to recover it. 
00:36:49.955 [2024-11-17 09:36:54.730174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.955 [2024-11-17 09:36:54.730231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.955 qpair failed and we were unable to recover it. 00:36:49.955 [2024-11-17 09:36:54.730394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.955 [2024-11-17 09:36:54.730428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.955 qpair failed and we were unable to recover it. 00:36:49.955 [2024-11-17 09:36:54.730554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.955 [2024-11-17 09:36:54.730602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.955 qpair failed and we were unable to recover it. 00:36:49.955 [2024-11-17 09:36:54.730770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.955 [2024-11-17 09:36:54.730809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.955 qpair failed and we were unable to recover it. 00:36:49.955 [2024-11-17 09:36:54.731007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.955 [2024-11-17 09:36:54.731045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.955 qpair failed and we were unable to recover it. 00:36:49.955 [2024-11-17 09:36:54.731199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.955 [2024-11-17 09:36:54.731236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.955 qpair failed and we were unable to recover it. 00:36:49.955 [2024-11-17 09:36:54.731388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.955 [2024-11-17 09:36:54.731439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.955 qpair failed and we were unable to recover it. 00:36:49.955 [2024-11-17 09:36:54.731568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.956 [2024-11-17 09:36:54.731602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.956 qpair failed and we were unable to recover it. 00:36:49.956 [2024-11-17 09:36:54.731706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.956 [2024-11-17 09:36:54.731758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.956 qpair failed and we were unable to recover it. 00:36:49.956 [2024-11-17 09:36:54.731935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.956 [2024-11-17 09:36:54.731972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.956 qpair failed and we were unable to recover it. 
00:36:49.956 [2024-11-17 09:36:54.732137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.956 [2024-11-17 09:36:54.732175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.956 qpair failed and we were unable to recover it. 00:36:49.956 [2024-11-17 09:36:54.732296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.956 [2024-11-17 09:36:54.732333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.956 qpair failed and we were unable to recover it. 00:36:49.956 [2024-11-17 09:36:54.732499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.956 [2024-11-17 09:36:54.732547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.956 qpair failed and we were unable to recover it. 00:36:49.956 [2024-11-17 09:36:54.732703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.956 [2024-11-17 09:36:54.732751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.956 qpair failed and we were unable to recover it. 00:36:49.956 [2024-11-17 09:36:54.732869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.956 [2024-11-17 09:36:54.732905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.956 qpair failed and we were unable to recover it. 00:36:49.956 [2024-11-17 09:36:54.733103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.956 [2024-11-17 09:36:54.733171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.956 qpair failed and we were unable to recover it. 00:36:49.956 [2024-11-17 09:36:54.733343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.956 [2024-11-17 09:36:54.733385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.956 qpair failed and we were unable to recover it. 00:36:49.956 [2024-11-17 09:36:54.733528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.956 [2024-11-17 09:36:54.733563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.956 qpair failed and we were unable to recover it. 00:36:49.956 [2024-11-17 09:36:54.733740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.956 [2024-11-17 09:36:54.733794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.956 qpair failed and we were unable to recover it. 00:36:49.956 [2024-11-17 09:36:54.734025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.956 [2024-11-17 09:36:54.734082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.956 qpair failed and we were unable to recover it. 
00:36:49.956 [2024-11-17 09:36:54.734214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.956 [2024-11-17 09:36:54.734265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.956 qpair failed and we were unable to recover it. 00:36:49.956 [2024-11-17 09:36:54.734403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.956 [2024-11-17 09:36:54.734438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.956 qpair failed and we were unable to recover it. 00:36:49.956 [2024-11-17 09:36:54.734558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.956 [2024-11-17 09:36:54.734606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.956 qpair failed and we were unable to recover it. 00:36:49.956 [2024-11-17 09:36:54.734728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.956 [2024-11-17 09:36:54.734765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.956 qpair failed and we were unable to recover it. 00:36:49.956 [2024-11-17 09:36:54.734971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.956 [2024-11-17 09:36:54.735042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.956 qpair failed and we were unable to recover it. 00:36:49.956 [2024-11-17 09:36:54.735204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.956 [2024-11-17 09:36:54.735242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.956 qpair failed and we were unable to recover it. 00:36:49.956 [2024-11-17 09:36:54.735408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.956 [2024-11-17 09:36:54.735456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.956 qpair failed and we were unable to recover it. 00:36:49.956 [2024-11-17 09:36:54.735609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.956 [2024-11-17 09:36:54.735656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.956 qpair failed and we were unable to recover it. 00:36:49.956 [2024-11-17 09:36:54.735818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.956 [2024-11-17 09:36:54.735885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.956 qpair failed and we were unable to recover it. 00:36:49.956 [2024-11-17 09:36:54.736090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.956 [2024-11-17 09:36:54.736149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.956 qpair failed and we were unable to recover it. 
00:36:49.956 [2024-11-17 09:36:54.736266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.956 [2024-11-17 09:36:54.736316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.956 qpair failed and we were unable to recover it. 00:36:49.956 [2024-11-17 09:36:54.736457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.956 [2024-11-17 09:36:54.736491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.956 qpair failed and we were unable to recover it. 00:36:49.956 [2024-11-17 09:36:54.736632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.956 [2024-11-17 09:36:54.736666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.956 qpair failed and we were unable to recover it. 00:36:49.956 [2024-11-17 09:36:54.736873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.956 [2024-11-17 09:36:54.736943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.956 qpair failed and we were unable to recover it. 00:36:49.956 [2024-11-17 09:36:54.737085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.956 [2024-11-17 09:36:54.737123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.956 qpair failed and we were unable to recover it. 00:36:49.956 [2024-11-17 09:36:54.737275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.956 [2024-11-17 09:36:54.737313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.956 qpair failed and we were unable to recover it. 00:36:49.956 [2024-11-17 09:36:54.737532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.956 [2024-11-17 09:36:54.737580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.956 qpair failed and we were unable to recover it. 00:36:49.956 [2024-11-17 09:36:54.737744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.956 [2024-11-17 09:36:54.737797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.956 qpair failed and we were unable to recover it. 00:36:49.956 [2024-11-17 09:36:54.737961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.956 [2024-11-17 09:36:54.738018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.956 qpair failed and we were unable to recover it. 00:36:49.956 [2024-11-17 09:36:54.738181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.956 [2024-11-17 09:36:54.738235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.956 qpair failed and we were unable to recover it. 
00:36:49.956 [2024-11-17 09:36:54.738387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.956 [2024-11-17 09:36:54.738436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.956 qpair failed and we were unable to recover it. 00:36:49.956 [2024-11-17 09:36:54.738623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.956 [2024-11-17 09:36:54.738663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.956 qpair failed and we were unable to recover it. 00:36:49.956 [2024-11-17 09:36:54.738811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.956 [2024-11-17 09:36:54.738849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.956 qpair failed and we were unable to recover it. 00:36:49.956 [2024-11-17 09:36:54.738997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.956 [2024-11-17 09:36:54.739035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.956 qpair failed and we were unable to recover it. 00:36:49.956 [2024-11-17 09:36:54.739195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.956 [2024-11-17 09:36:54.739279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.956 qpair failed and we were unable to recover it. 00:36:49.957 [2024-11-17 09:36:54.739480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.957 [2024-11-17 09:36:54.739528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.957 qpair failed and we were unable to recover it. 00:36:49.957 [2024-11-17 09:36:54.739693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.957 [2024-11-17 09:36:54.739745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.957 qpair failed and we were unable to recover it. 00:36:49.957 [2024-11-17 09:36:54.739925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.957 [2024-11-17 09:36:54.739980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.957 qpair failed and we were unable to recover it. 00:36:49.957 [2024-11-17 09:36:54.740117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.957 [2024-11-17 09:36:54.740169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.957 qpair failed and we were unable to recover it. 00:36:49.957 [2024-11-17 09:36:54.740303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.957 [2024-11-17 09:36:54.740338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.957 qpair failed and we were unable to recover it. 
00:36:49.957 [2024-11-17 09:36:54.740501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.957 [2024-11-17 09:36:54.740541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.957 qpair failed and we were unable to recover it. 00:36:49.957 [2024-11-17 09:36:54.740678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.957 [2024-11-17 09:36:54.740731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.957 qpair failed and we were unable to recover it. 00:36:49.957 [2024-11-17 09:36:54.741054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.957 [2024-11-17 09:36:54.741114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.957 qpair failed and we were unable to recover it. 00:36:49.957 [2024-11-17 09:36:54.741296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.957 [2024-11-17 09:36:54.741330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.957 qpair failed and we were unable to recover it. 00:36:49.957 [2024-11-17 09:36:54.741477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.957 [2024-11-17 09:36:54.741511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.957 qpair failed and we were unable to recover it. 00:36:49.957 [2024-11-17 09:36:54.741665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.957 [2024-11-17 09:36:54.741702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.957 qpair failed and we were unable to recover it. 00:36:49.957 [2024-11-17 09:36:54.741849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.957 [2024-11-17 09:36:54.741886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.957 qpair failed and we were unable to recover it. 00:36:49.957 [2024-11-17 09:36:54.742082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.957 [2024-11-17 09:36:54.742144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.957 qpair failed and we were unable to recover it. 00:36:49.957 [2024-11-17 09:36:54.742253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.957 [2024-11-17 09:36:54.742304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.957 qpair failed and we were unable to recover it. 00:36:49.957 [2024-11-17 09:36:54.742449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.957 [2024-11-17 09:36:54.742485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.957 qpair failed and we were unable to recover it. 
00:36:49.957 [2024-11-17 09:36:54.742600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.957 [2024-11-17 09:36:54.742633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.957 qpair failed and we were unable to recover it. 00:36:49.957 [2024-11-17 09:36:54.742767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.957 [2024-11-17 09:36:54.742819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.957 qpair failed and we were unable to recover it. 00:36:49.957 [2024-11-17 09:36:54.742971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.957 [2024-11-17 09:36:54.743022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.957 qpair failed and we were unable to recover it. 00:36:49.957 [2024-11-17 09:36:54.743292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.957 [2024-11-17 09:36:54.743340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.957 qpair failed and we were unable to recover it. 00:36:49.957 [2024-11-17 09:36:54.743473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.957 [2024-11-17 09:36:54.743509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.957 qpair failed and we were unable to recover it. 00:36:49.957 [2024-11-17 09:36:54.743673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.957 [2024-11-17 09:36:54.743712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.957 qpair failed and we were unable to recover it. 00:36:49.957 [2024-11-17 09:36:54.743849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.957 [2024-11-17 09:36:54.743902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.957 qpair failed and we were unable to recover it. 00:36:49.957 [2024-11-17 09:36:54.744169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.957 [2024-11-17 09:36:54.744227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.957 qpair failed and we were unable to recover it. 00:36:49.957 [2024-11-17 09:36:54.744382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.957 [2024-11-17 09:36:54.744433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.957 qpair failed and we were unable to recover it. 00:36:49.957 [2024-11-17 09:36:54.744567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.957 [2024-11-17 09:36:54.744600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.957 qpair failed and we were unable to recover it. 
00:36:49.957 [2024-11-17 09:36:54.744802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.957 [2024-11-17 09:36:54.744867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.957 qpair failed and we were unable to recover it. 00:36:49.957 [2024-11-17 09:36:54.745094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.957 [2024-11-17 09:36:54.745152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.957 qpair failed and we were unable to recover it. 00:36:49.957 [2024-11-17 09:36:54.745309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.957 [2024-11-17 09:36:54.745343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.957 qpair failed and we were unable to recover it. 00:36:49.957 [2024-11-17 09:36:54.745500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.957 [2024-11-17 09:36:54.745549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.957 qpair failed and we were unable to recover it. 00:36:49.957 [2024-11-17 09:36:54.745722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.957 [2024-11-17 09:36:54.745776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.957 qpair failed and we were unable to recover it. 00:36:49.957 [2024-11-17 09:36:54.745917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.957 [2024-11-17 09:36:54.745972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.957 qpair failed and we were unable to recover it. 00:36:49.957 [2024-11-17 09:36:54.746197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.957 [2024-11-17 09:36:54.746259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.957 qpair failed and we were unable to recover it. 00:36:49.957 [2024-11-17 09:36:54.746384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.957 [2024-11-17 09:36:54.746435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.957 qpair failed and we were unable to recover it. 00:36:49.957 [2024-11-17 09:36:54.746568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.957 [2024-11-17 09:36:54.746623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.957 qpair failed and we were unable to recover it. 00:36:49.957 [2024-11-17 09:36:54.746807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.957 [2024-11-17 09:36:54.746842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.957 qpair failed and we were unable to recover it. 
00:36:49.957 [2024-11-17 09:36:54.747067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.957 [2024-11-17 09:36:54.747127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.957 qpair failed and we were unable to recover it. 00:36:49.957 [2024-11-17 09:36:54.747275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.957 [2024-11-17 09:36:54.747312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.957 qpair failed and we were unable to recover it. 00:36:49.957 [2024-11-17 09:36:54.747477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.957 [2024-11-17 09:36:54.747511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.958 qpair failed and we were unable to recover it. 00:36:49.958 [2024-11-17 09:36:54.747651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.958 [2024-11-17 09:36:54.747689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.958 qpair failed and we were unable to recover it. 00:36:49.958 [2024-11-17 09:36:54.747794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.958 [2024-11-17 09:36:54.747848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.958 qpair failed and we were unable to recover it. 00:36:49.958 [2024-11-17 09:36:54.747960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.958 [2024-11-17 09:36:54.747998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.958 qpair failed and we were unable to recover it. 00:36:49.958 [2024-11-17 09:36:54.748173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.958 [2024-11-17 09:36:54.748210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.958 qpair failed and we were unable to recover it. 00:36:49.958 [2024-11-17 09:36:54.748386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.958 [2024-11-17 09:36:54.748452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.958 qpair failed and we were unable to recover it. 00:36:49.958 [2024-11-17 09:36:54.748571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.958 [2024-11-17 09:36:54.748610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.958 qpair failed and we were unable to recover it. 00:36:49.958 [2024-11-17 09:36:54.748777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.958 [2024-11-17 09:36:54.748829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.958 qpair failed and we were unable to recover it. 
00:36:49.958 [2024-11-17 09:36:54.748983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.958 [2024-11-17 09:36:54.749035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.958 qpair failed and we were unable to recover it. 00:36:49.958 [2024-11-17 09:36:54.749173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.958 [2024-11-17 09:36:54.749207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.958 qpair failed and we were unable to recover it. 00:36:49.958 [2024-11-17 09:36:54.749380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.958 [2024-11-17 09:36:54.749416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.958 qpair failed and we were unable to recover it. 00:36:49.958 [2024-11-17 09:36:54.749567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.958 [2024-11-17 09:36:54.749615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.958 qpair failed and we were unable to recover it. 00:36:49.958 [2024-11-17 09:36:54.749756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.958 [2024-11-17 09:36:54.749792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.958 qpair failed and we were unable to recover it. 00:36:49.958 [2024-11-17 09:36:54.750029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.958 [2024-11-17 09:36:54.750086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.958 qpair failed and we were unable to recover it. 00:36:49.958 [2024-11-17 09:36:54.750199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.958 [2024-11-17 09:36:54.750239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.958 qpair failed and we were unable to recover it. 00:36:49.958 [2024-11-17 09:36:54.750389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.958 [2024-11-17 09:36:54.750441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.958 qpair failed and we were unable to recover it. 00:36:49.958 [2024-11-17 09:36:54.750604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.958 [2024-11-17 09:36:54.750640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.958 qpair failed and we were unable to recover it. 00:36:49.958 [2024-11-17 09:36:54.750827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.958 [2024-11-17 09:36:54.750880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.958 qpair failed and we were unable to recover it. 
00:36:49.958 [2024-11-17 09:36:54.751006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.958 [2024-11-17 09:36:54.751058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.958 qpair failed and we were unable to recover it. 00:36:49.958 [2024-11-17 09:36:54.751184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.958 [2024-11-17 09:36:54.751222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.958 qpair failed and we were unable to recover it. 00:36:49.958 [2024-11-17 09:36:54.751352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.958 [2024-11-17 09:36:54.751394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.958 qpair failed and we were unable to recover it. 00:36:49.958 [2024-11-17 09:36:54.751504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.958 [2024-11-17 09:36:54.751537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.958 qpair failed and we were unable to recover it. 00:36:49.958 [2024-11-17 09:36:54.751662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.958 [2024-11-17 09:36:54.751699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.958 qpair failed and we were unable to recover it. 00:36:49.958 [2024-11-17 09:36:54.751833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.958 [2024-11-17 09:36:54.751885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.958 qpair failed and we were unable to recover it. 00:36:49.958 [2024-11-17 09:36:54.752019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.958 [2024-11-17 09:36:54.752073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.958 qpair failed and we were unable to recover it. 00:36:49.958 [2024-11-17 09:36:54.752238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.958 [2024-11-17 09:36:54.752279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.958 qpair failed and we were unable to recover it. 00:36:49.958 [2024-11-17 09:36:54.752432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.958 [2024-11-17 09:36:54.752466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.958 qpair failed and we were unable to recover it. 00:36:49.958 [2024-11-17 09:36:54.752598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.958 [2024-11-17 09:36:54.752632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.958 qpair failed and we were unable to recover it. 
00:36:49.958 [2024-11-17 09:36:54.752738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.958 [2024-11-17 09:36:54.752772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.958 qpair failed and we were unable to recover it. 00:36:49.958 [2024-11-17 09:36:54.752955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.958 [2024-11-17 09:36:54.753021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.958 qpair failed and we were unable to recover it. 00:36:49.958 [2024-11-17 09:36:54.753265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.958 [2024-11-17 09:36:54.753304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.958 qpair failed and we were unable to recover it. 00:36:49.958 [2024-11-17 09:36:54.753464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.958 [2024-11-17 09:36:54.753498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.958 qpair failed and we were unable to recover it. 00:36:49.958 [2024-11-17 09:36:54.753626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.958 [2024-11-17 09:36:54.753678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.958 qpair failed and we were unable to recover it. 00:36:49.958 [2024-11-17 09:36:54.753801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.958 [2024-11-17 09:36:54.753850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.958 qpair failed and we were unable to recover it. 00:36:49.958 [2024-11-17 09:36:54.754069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.958 [2024-11-17 09:36:54.754130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.958 qpair failed and we were unable to recover it. 00:36:49.958 [2024-11-17 09:36:54.754240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.958 [2024-11-17 09:36:54.754278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.958 qpair failed and we were unable to recover it. 00:36:49.958 [2024-11-17 09:36:54.754404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.958 [2024-11-17 09:36:54.754442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.958 qpair failed and we were unable to recover it. 00:36:49.958 [2024-11-17 09:36:54.754621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.958 [2024-11-17 09:36:54.754689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.958 qpair failed and we were unable to recover it. 
00:36:49.958 [2024-11-17 09:36:54.754852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.959 [2024-11-17 09:36:54.754892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.959 qpair failed and we were unable to recover it. 00:36:49.959 [2024-11-17 09:36:54.755072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.959 [2024-11-17 09:36:54.755110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.959 qpair failed and we were unable to recover it. 00:36:49.959 [2024-11-17 09:36:54.755251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.959 [2024-11-17 09:36:54.755289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.959 qpair failed and we were unable to recover it. 00:36:49.959 [2024-11-17 09:36:54.755402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.959 [2024-11-17 09:36:54.755453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.959 qpair failed and we were unable to recover it. 00:36:49.959 [2024-11-17 09:36:54.755587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.959 [2024-11-17 09:36:54.755620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.959 qpair failed and we were unable to recover it. 00:36:49.959 [2024-11-17 09:36:54.755794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.959 [2024-11-17 09:36:54.755831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.959 qpair failed and we were unable to recover it. 00:36:49.959 [2024-11-17 09:36:54.755977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.959 [2024-11-17 09:36:54.756017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.959 qpair failed and we were unable to recover it. 00:36:49.959 [2024-11-17 09:36:54.756227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.959 [2024-11-17 09:36:54.756265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.959 qpair failed and we were unable to recover it. 00:36:49.959 [2024-11-17 09:36:54.756400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.959 [2024-11-17 09:36:54.756435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.959 qpair failed and we were unable to recover it. 00:36:49.959 [2024-11-17 09:36:54.756545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.959 [2024-11-17 09:36:54.756579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.959 qpair failed and we were unable to recover it. 
00:36:49.959 [2024-11-17 09:36:54.756711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.959 [2024-11-17 09:36:54.756745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.959 qpair failed and we were unable to recover it. 00:36:49.959 [2024-11-17 09:36:54.756879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.959 [2024-11-17 09:36:54.756932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.959 qpair failed and we were unable to recover it. 00:36:49.959 [2024-11-17 09:36:54.757127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.959 [2024-11-17 09:36:54.757179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.959 qpair failed and we were unable to recover it. 00:36:49.959 [2024-11-17 09:36:54.757301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.959 [2024-11-17 09:36:54.757334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.959 qpair failed and we were unable to recover it. 00:36:49.959 [2024-11-17 09:36:54.757448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.959 [2024-11-17 09:36:54.757482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.959 qpair failed and we were unable to recover it. 00:36:49.959 [2024-11-17 09:36:54.757590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.959 [2024-11-17 09:36:54.757623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.959 qpair failed and we were unable to recover it. 00:36:49.959 [2024-11-17 09:36:54.757759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.959 [2024-11-17 09:36:54.757792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.959 qpair failed and we were unable to recover it. 00:36:49.959 [2024-11-17 09:36:54.757893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.959 [2024-11-17 09:36:54.757927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.959 qpair failed and we were unable to recover it. 00:36:49.959 [2024-11-17 09:36:54.758086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.959 [2024-11-17 09:36:54.758125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.959 qpair failed and we were unable to recover it. 00:36:49.959 [2024-11-17 09:36:54.758330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.959 [2024-11-17 09:36:54.758376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.959 qpair failed and we were unable to recover it. 
00:36:49.959 [2024-11-17 09:36:54.758528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.959 [2024-11-17 09:36:54.758583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.959 qpair failed and we were unable to recover it. 00:36:49.959 [2024-11-17 09:36:54.758724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.959 [2024-11-17 09:36:54.758780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.959 qpair failed and we were unable to recover it. 00:36:49.959 [2024-11-17 09:36:54.758984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.959 [2024-11-17 09:36:54.759044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.959 qpair failed and we were unable to recover it. 00:36:49.959 [2024-11-17 09:36:54.759195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.959 [2024-11-17 09:36:54.759233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.959 qpair failed and we were unable to recover it. 00:36:49.959 [2024-11-17 09:36:54.759355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.959 [2024-11-17 09:36:54.759416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.959 qpair failed and we were unable to recover it. 00:36:49.959 [2024-11-17 09:36:54.759575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.959 [2024-11-17 09:36:54.759623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.959 qpair failed and we were unable to recover it. 00:36:49.959 [2024-11-17 09:36:54.759733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.959 [2024-11-17 09:36:54.759769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.959 qpair failed and we were unable to recover it. 00:36:49.959 [2024-11-17 09:36:54.759920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.959 [2024-11-17 09:36:54.759973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.959 qpair failed and we were unable to recover it. 00:36:49.959 [2024-11-17 09:36:54.760160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.959 [2024-11-17 09:36:54.760226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.959 qpair failed and we were unable to recover it. 00:36:49.959 [2024-11-17 09:36:54.760364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.959 [2024-11-17 09:36:54.760416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.959 qpair failed and we were unable to recover it. 
00:36:49.959 [2024-11-17 09:36:54.760528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.959 [2024-11-17 09:36:54.760563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.959 qpair failed and we were unable to recover it. 00:36:49.959 [2024-11-17 09:36:54.760701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.959 [2024-11-17 09:36:54.760736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.959 qpair failed and we were unable to recover it. 00:36:49.959 [2024-11-17 09:36:54.760899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.959 [2024-11-17 09:36:54.760932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.959 qpair failed and we were unable to recover it. 00:36:49.959 [2024-11-17 09:36:54.761070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.959 [2024-11-17 09:36:54.761106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.959 qpair failed and we were unable to recover it. 00:36:49.959 [2024-11-17 09:36:54.761215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.959 [2024-11-17 09:36:54.761250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.959 qpair failed and we were unable to recover it. 00:36:49.959 [2024-11-17 09:36:54.761429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.959 [2024-11-17 09:36:54.761478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.959 qpair failed and we were unable to recover it. 00:36:49.959 [2024-11-17 09:36:54.761619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.959 [2024-11-17 09:36:54.761653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.959 qpair failed and we were unable to recover it. 00:36:49.959 [2024-11-17 09:36:54.761784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.959 [2024-11-17 09:36:54.761817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.959 qpair failed and we were unable to recover it. 00:36:49.960 [2024-11-17 09:36:54.761975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.960 [2024-11-17 09:36:54.762017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.960 qpair failed and we were unable to recover it. 00:36:49.960 [2024-11-17 09:36:54.762140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.960 [2024-11-17 09:36:54.762178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.960 qpair failed and we were unable to recover it. 
00:36:49.960 [2024-11-17 09:36:54.762319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.960 [2024-11-17 09:36:54.762357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.960 qpair failed and we were unable to recover it. 00:36:49.960 [2024-11-17 09:36:54.762524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.960 [2024-11-17 09:36:54.762557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.960 qpair failed and we were unable to recover it. 00:36:49.960 [2024-11-17 09:36:54.762710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.960 [2024-11-17 09:36:54.762747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.960 qpair failed and we were unable to recover it. 00:36:49.960 [2024-11-17 09:36:54.762885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.960 [2024-11-17 09:36:54.762922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.960 qpair failed and we were unable to recover it. 00:36:49.960 [2024-11-17 09:36:54.763092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.960 [2024-11-17 09:36:54.763129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.960 qpair failed and we were unable to recover it. 00:36:49.960 [2024-11-17 09:36:54.763274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.960 [2024-11-17 09:36:54.763310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.960 qpair failed and we were unable to recover it. 00:36:49.960 [2024-11-17 09:36:54.763495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.960 [2024-11-17 09:36:54.763544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.960 qpair failed and we were unable to recover it. 00:36:49.960 [2024-11-17 09:36:54.763759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.960 [2024-11-17 09:36:54.763824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.960 qpair failed and we were unable to recover it. 00:36:49.960 [2024-11-17 09:36:54.763981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.960 [2024-11-17 09:36:54.764021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.960 qpair failed and we were unable to recover it. 00:36:49.960 [2024-11-17 09:36:54.764221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.960 [2024-11-17 09:36:54.764274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.960 qpair failed and we were unable to recover it. 
00:36:49.960 [2024-11-17 09:36:54.764387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.960 [2024-11-17 09:36:54.764423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.960 qpair failed and we were unable to recover it. 00:36:49.960 [2024-11-17 09:36:54.764578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.960 [2024-11-17 09:36:54.764630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.960 qpair failed and we were unable to recover it. 00:36:49.960 [2024-11-17 09:36:54.764841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.960 [2024-11-17 09:36:54.764909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.960 qpair failed and we were unable to recover it. 00:36:49.960 [2024-11-17 09:36:54.765014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.960 [2024-11-17 09:36:54.765048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.960 qpair failed and we were unable to recover it. 00:36:49.960 [2024-11-17 09:36:54.765207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.960 [2024-11-17 09:36:54.765241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.960 qpair failed and we were unable to recover it. 00:36:49.960 [2024-11-17 09:36:54.765350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.960 [2024-11-17 09:36:54.765391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.960 qpair failed and we were unable to recover it. 00:36:49.960 [2024-11-17 09:36:54.765570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.960 [2024-11-17 09:36:54.765624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.960 qpair failed and we were unable to recover it. 00:36:49.960 [2024-11-17 09:36:54.765781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.960 [2024-11-17 09:36:54.765832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.960 qpair failed and we were unable to recover it. 00:36:49.960 [2024-11-17 09:36:54.766102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.960 [2024-11-17 09:36:54.766163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.960 qpair failed and we were unable to recover it. 00:36:49.960 [2024-11-17 09:36:54.766306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.960 [2024-11-17 09:36:54.766344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.960 qpair failed and we were unable to recover it. 
00:36:49.960 [2024-11-17 09:36:54.766486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.960 [2024-11-17 09:36:54.766522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.960 qpair failed and we were unable to recover it. 00:36:49.960 [2024-11-17 09:36:54.766651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.960 [2024-11-17 09:36:54.766688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.960 qpair failed and we were unable to recover it. 00:36:49.960 [2024-11-17 09:36:54.766826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.960 [2024-11-17 09:36:54.766864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.960 qpair failed and we were unable to recover it. 00:36:49.960 [2024-11-17 09:36:54.767034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.960 [2024-11-17 09:36:54.767072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.960 qpair failed and we were unable to recover it. 00:36:49.960 [2024-11-17 09:36:54.767220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.960 [2024-11-17 09:36:54.767254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.960 qpair failed and we were unable to recover it. 00:36:49.960 [2024-11-17 09:36:54.767414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.960 [2024-11-17 09:36:54.767462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.960 qpair failed and we were unable to recover it. 00:36:49.960 [2024-11-17 09:36:54.767624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.960 [2024-11-17 09:36:54.767689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.960 qpair failed and we were unable to recover it. 00:36:49.960 [2024-11-17 09:36:54.767875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.960 [2024-11-17 09:36:54.767915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.960 qpair failed and we were unable to recover it. 00:36:49.960 [2024-11-17 09:36:54.768060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.960 [2024-11-17 09:36:54.768098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.960 qpair failed and we were unable to recover it. 00:36:49.960 [2024-11-17 09:36:54.768234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.960 [2024-11-17 09:36:54.768272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.960 qpair failed and we were unable to recover it. 
00:36:49.960 [2024-11-17 09:36:54.768426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.960 [2024-11-17 09:36:54.768460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.960 qpair failed and we were unable to recover it. 00:36:49.961 [2024-11-17 09:36:54.768628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.961 [2024-11-17 09:36:54.768663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.961 qpair failed and we were unable to recover it. 00:36:49.961 [2024-11-17 09:36:54.768804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.961 [2024-11-17 09:36:54.768840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.961 qpair failed and we were unable to recover it. 00:36:49.961 [2024-11-17 09:36:54.769023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.961 [2024-11-17 09:36:54.769090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.961 qpair failed and we were unable to recover it. 00:36:49.961 [2024-11-17 09:36:54.769287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.961 [2024-11-17 09:36:54.769324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.961 qpair failed and we were unable to recover it. 00:36:49.961 [2024-11-17 09:36:54.769514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.961 [2024-11-17 09:36:54.769547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.961 qpair failed and we were unable to recover it. 00:36:49.961 [2024-11-17 09:36:54.769705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.961 [2024-11-17 09:36:54.769744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.961 qpair failed and we were unable to recover it. 00:36:49.961 [2024-11-17 09:36:54.769880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.961 [2024-11-17 09:36:54.769977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.961 qpair failed and we were unable to recover it. 00:36:49.961 [2024-11-17 09:36:54.770138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.961 [2024-11-17 09:36:54.770196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.961 qpair failed and we were unable to recover it. 00:36:49.961 [2024-11-17 09:36:54.770337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.961 [2024-11-17 09:36:54.770385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.961 qpair failed and we were unable to recover it. 
00:36:49.961 [2024-11-17 09:36:54.770535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.961 [2024-11-17 09:36:54.770580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.961 qpair failed and we were unable to recover it. 00:36:49.961 [2024-11-17 09:36:54.770730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.961 [2024-11-17 09:36:54.770767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.961 qpair failed and we were unable to recover it. 00:36:49.961 [2024-11-17 09:36:54.770972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.961 [2024-11-17 09:36:54.771028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.961 qpair failed and we were unable to recover it. 00:36:49.961 [2024-11-17 09:36:54.771202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.961 [2024-11-17 09:36:54.771239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.961 qpair failed and we were unable to recover it. 00:36:49.961 [2024-11-17 09:36:54.771356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.961 [2024-11-17 09:36:54.771420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.961 qpair failed and we were unable to recover it. 00:36:49.961 [2024-11-17 09:36:54.771524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.961 [2024-11-17 09:36:54.771558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.961 qpair failed and we were unable to recover it. 00:36:49.961 [2024-11-17 09:36:54.771714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.961 [2024-11-17 09:36:54.771751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.961 qpair failed and we were unable to recover it. 00:36:49.961 [2024-11-17 09:36:54.771865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.961 [2024-11-17 09:36:54.771902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.961 qpair failed and we were unable to recover it. 00:36:49.961 [2024-11-17 09:36:54.772059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.961 [2024-11-17 09:36:54.772097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.961 qpair failed and we were unable to recover it. 00:36:49.961 [2024-11-17 09:36:54.772270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.961 [2024-11-17 09:36:54.772318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.961 qpair failed and we were unable to recover it. 
00:36:49.961 [2024-11-17 09:36:54.772463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.961 [2024-11-17 09:36:54.772500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.961 qpair failed and we were unable to recover it. 00:36:49.961 [2024-11-17 09:36:54.772677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.961 [2024-11-17 09:36:54.772730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.961 qpair failed and we were unable to recover it. 00:36:49.961 [2024-11-17 09:36:54.772914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.961 [2024-11-17 09:36:54.772984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.961 qpair failed and we were unable to recover it. 00:36:49.961 [2024-11-17 09:36:54.773211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.961 [2024-11-17 09:36:54.773249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.961 qpair failed and we were unable to recover it. 00:36:49.961 [2024-11-17 09:36:54.773409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.961 [2024-11-17 09:36:54.773443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.961 qpair failed and we were unable to recover it. 00:36:49.961 [2024-11-17 09:36:54.773581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.961 [2024-11-17 09:36:54.773615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.961 qpair failed and we were unable to recover it. 00:36:49.961 [2024-11-17 09:36:54.773783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.961 [2024-11-17 09:36:54.773817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.961 qpair failed and we were unable to recover it. 00:36:49.961 [2024-11-17 09:36:54.774033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.961 [2024-11-17 09:36:54.774070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.961 qpair failed and we were unable to recover it. 00:36:49.961 [2024-11-17 09:36:54.774198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.961 [2024-11-17 09:36:54.774252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.961 qpair failed and we were unable to recover it. 00:36:49.961 [2024-11-17 09:36:54.774422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.961 [2024-11-17 09:36:54.774459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.961 qpair failed and we were unable to recover it. 
00:36:49.961 [2024-11-17 09:36:54.774590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.961 [2024-11-17 09:36:54.774625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.961 qpair failed and we were unable to recover it. 00:36:49.961 [2024-11-17 09:36:54.774750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.961 [2024-11-17 09:36:54.774803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.961 qpair failed and we were unable to recover it. 00:36:49.961 [2024-11-17 09:36:54.774939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.961 [2024-11-17 09:36:54.774977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.961 qpair failed and we were unable to recover it. 00:36:49.961 [2024-11-17 09:36:54.775166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.961 [2024-11-17 09:36:54.775232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.961 qpair failed and we were unable to recover it. 00:36:49.961 [2024-11-17 09:36:54.775418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.961 [2024-11-17 09:36:54.775472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.961 qpair failed and we were unable to recover it. 00:36:49.961 [2024-11-17 09:36:54.775619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.961 [2024-11-17 09:36:54.775672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.961 qpair failed and we were unable to recover it. 00:36:49.961 [2024-11-17 09:36:54.775875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.961 [2024-11-17 09:36:54.775940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.961 qpair failed and we were unable to recover it. 00:36:49.961 [2024-11-17 09:36:54.776199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.961 [2024-11-17 09:36:54.776257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.961 qpair failed and we were unable to recover it. 00:36:49.961 [2024-11-17 09:36:54.776418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.962 [2024-11-17 09:36:54.776453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.962 qpair failed and we were unable to recover it. 00:36:49.962 [2024-11-17 09:36:54.776584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.962 [2024-11-17 09:36:54.776618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.962 qpair failed and we were unable to recover it. 
00:36:49.962 [2024-11-17 09:36:54.776757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.962 [2024-11-17 09:36:54.776808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.962 qpair failed and we were unable to recover it. 00:36:49.962 [2024-11-17 09:36:54.776961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.962 [2024-11-17 09:36:54.777003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.962 qpair failed and we were unable to recover it. 00:36:49.962 [2024-11-17 09:36:54.777165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.962 [2024-11-17 09:36:54.777221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.962 qpair failed and we were unable to recover it. 00:36:49.962 [2024-11-17 09:36:54.777393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.962 [2024-11-17 09:36:54.777428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.962 qpair failed and we were unable to recover it. 00:36:49.962 [2024-11-17 09:36:54.777586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.962 [2024-11-17 09:36:54.777621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.962 qpair failed and we were unable to recover it. 00:36:49.962 [2024-11-17 09:36:54.777752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.962 [2024-11-17 09:36:54.777786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.962 qpair failed and we were unable to recover it. 00:36:49.962 [2024-11-17 09:36:54.777919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.962 [2024-11-17 09:36:54.777958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.962 qpair failed and we were unable to recover it. 00:36:49.962 [2024-11-17 09:36:54.778110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.962 [2024-11-17 09:36:54.778147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.962 qpair failed and we were unable to recover it. 00:36:49.962 [2024-11-17 09:36:54.778274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.962 [2024-11-17 09:36:54.778318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.962 qpair failed and we were unable to recover it. 00:36:49.962 [2024-11-17 09:36:54.778507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.962 [2024-11-17 09:36:54.778555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.962 qpair failed and we were unable to recover it. 
00:36:49.962 [2024-11-17 09:36:54.778671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.962 [2024-11-17 09:36:54.778708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.962 qpair failed and we were unable to recover it. 00:36:49.962 [2024-11-17 09:36:54.778914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.962 [2024-11-17 09:36:54.778949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.962 qpair failed and we were unable to recover it. 00:36:49.962 [2024-11-17 09:36:54.779052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.962 [2024-11-17 09:36:54.779086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.962 qpair failed and we were unable to recover it. 00:36:49.962 [2024-11-17 09:36:54.779230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.962 [2024-11-17 09:36:54.779265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.962 qpair failed and we were unable to recover it. 00:36:49.962 [2024-11-17 09:36:54.779412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.962 [2024-11-17 09:36:54.779461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.962 qpair failed and we were unable to recover it. 00:36:49.962 [2024-11-17 09:36:54.779569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.962 [2024-11-17 09:36:54.779604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.962 qpair failed and we were unable to recover it. 00:36:49.962 [2024-11-17 09:36:54.779755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.962 [2024-11-17 09:36:54.779792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.962 qpair failed and we were unable to recover it. 00:36:49.962 [2024-11-17 09:36:54.779930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.962 [2024-11-17 09:36:54.779978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.962 qpair failed and we were unable to recover it. 00:36:49.962 [2024-11-17 09:36:54.780195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.962 [2024-11-17 09:36:54.780229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.962 qpair failed and we were unable to recover it. 00:36:49.962 [2024-11-17 09:36:54.780387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.962 [2024-11-17 09:36:54.780422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.962 qpair failed and we were unable to recover it. 
00:36:49.962 [2024-11-17 09:36:54.780599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.962 [2024-11-17 09:36:54.780635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.962 qpair failed and we were unable to recover it. 00:36:49.962 [2024-11-17 09:36:54.780768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.962 [2024-11-17 09:36:54.780802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.962 qpair failed and we were unable to recover it. 00:36:49.962 [2024-11-17 09:36:54.780967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.962 [2024-11-17 09:36:54.781005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.962 qpair failed and we were unable to recover it. 00:36:49.962 [2024-11-17 09:36:54.781122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.962 [2024-11-17 09:36:54.781161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.962 qpair failed and we were unable to recover it. 00:36:49.962 [2024-11-17 09:36:54.781318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.962 [2024-11-17 09:36:54.781352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.962 qpair failed and we were unable to recover it. 00:36:49.962 [2024-11-17 09:36:54.781472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.962 [2024-11-17 09:36:54.781505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.962 qpair failed and we were unable to recover it. 00:36:49.962 [2024-11-17 09:36:54.781665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.962 [2024-11-17 09:36:54.781699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.962 qpair failed and we were unable to recover it. 00:36:49.962 [2024-11-17 09:36:54.781832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.962 [2024-11-17 09:36:54.781866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.962 qpair failed and we were unable to recover it. 00:36:49.962 [2024-11-17 09:36:54.782039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.962 [2024-11-17 09:36:54.782076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.962 qpair failed and we were unable to recover it. 00:36:49.962 [2024-11-17 09:36:54.782188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.962 [2024-11-17 09:36:54.782225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.962 qpair failed and we were unable to recover it. 
00:36:49.962 [2024-11-17 09:36:54.782389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.962 [2024-11-17 09:36:54.782455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.962 qpair failed and we were unable to recover it. 00:36:49.962 [2024-11-17 09:36:54.782594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.962 [2024-11-17 09:36:54.782642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.962 qpair failed and we were unable to recover it. 00:36:49.962 [2024-11-17 09:36:54.782765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.962 [2024-11-17 09:36:54.782803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.962 qpair failed and we were unable to recover it. 00:36:49.962 [2024-11-17 09:36:54.783062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.962 [2024-11-17 09:36:54.783123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.962 qpair failed and we were unable to recover it. 00:36:49.962 [2024-11-17 09:36:54.783258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.962 [2024-11-17 09:36:54.783291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.962 qpair failed and we were unable to recover it. 00:36:49.962 [2024-11-17 09:36:54.783455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.962 [2024-11-17 09:36:54.783495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.962 qpair failed and we were unable to recover it. 00:36:49.962 [2024-11-17 09:36:54.783661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.963 [2024-11-17 09:36:54.783725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.963 qpair failed and we were unable to recover it. 00:36:49.963 [2024-11-17 09:36:54.783865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.963 [2024-11-17 09:36:54.783918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.963 qpair failed and we were unable to recover it. 00:36:49.963 [2024-11-17 09:36:54.784115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.963 [2024-11-17 09:36:54.784185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.963 qpair failed and we were unable to recover it. 00:36:49.963 [2024-11-17 09:36:54.784317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.963 [2024-11-17 09:36:54.784352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.963 qpair failed and we were unable to recover it. 
00:36:49.963 [2024-11-17 09:36:54.784525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.963 [2024-11-17 09:36:54.784563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.963 qpair failed and we were unable to recover it. 00:36:49.963 [2024-11-17 09:36:54.784686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.963 [2024-11-17 09:36:54.784726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.963 qpair failed and we were unable to recover it. 00:36:49.963 [2024-11-17 09:36:54.785026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.963 [2024-11-17 09:36:54.785084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.963 qpair failed and we were unable to recover it. 00:36:49.963 [2024-11-17 09:36:54.785220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.963 [2024-11-17 09:36:54.785255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.963 qpair failed and we were unable to recover it. 00:36:49.963 [2024-11-17 09:36:54.785404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.963 [2024-11-17 09:36:54.785438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.963 qpair failed and we were unable to recover it. 00:36:49.963 [2024-11-17 09:36:54.785540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.963 [2024-11-17 09:36:54.785574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.963 qpair failed and we were unable to recover it. 00:36:49.963 [2024-11-17 09:36:54.785745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.963 [2024-11-17 09:36:54.785799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.963 qpair failed and we were unable to recover it. 00:36:49.963 [2024-11-17 09:36:54.786025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.963 [2024-11-17 09:36:54.786083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.963 qpair failed and we were unable to recover it. 00:36:49.963 [2024-11-17 09:36:54.786231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.963 [2024-11-17 09:36:54.786274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.963 qpair failed and we were unable to recover it. 00:36:49.963 [2024-11-17 09:36:54.786440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.963 [2024-11-17 09:36:54.786477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.963 qpair failed and we were unable to recover it. 
00:36:49.963 [2024-11-17 09:36:54.786592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.963 [2024-11-17 09:36:54.786626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.963 qpair failed and we were unable to recover it. 00:36:49.963 [2024-11-17 09:36:54.786950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.963 [2024-11-17 09:36:54.787010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.963 qpair failed and we were unable to recover it. 00:36:49.963 [2024-11-17 09:36:54.787171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.963 [2024-11-17 09:36:54.787229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.963 qpair failed and we were unable to recover it. 00:36:49.963 [2024-11-17 09:36:54.787354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.963 [2024-11-17 09:36:54.787416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.963 qpair failed and we were unable to recover it. 00:36:49.963 [2024-11-17 09:36:54.787554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.963 [2024-11-17 09:36:54.787587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.963 qpair failed and we were unable to recover it. 00:36:49.963 [2024-11-17 09:36:54.787694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.963 [2024-11-17 09:36:54.787745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.963 qpair failed and we were unable to recover it. 00:36:49.963 [2024-11-17 09:36:54.787854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.963 [2024-11-17 09:36:54.787890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.963 qpair failed and we were unable to recover it. 00:36:49.963 [2024-11-17 09:36:54.788065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.963 [2024-11-17 09:36:54.788102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.963 qpair failed and we were unable to recover it. 00:36:49.963 [2024-11-17 09:36:54.788212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.963 [2024-11-17 09:36:54.788249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.963 qpair failed and we were unable to recover it. 00:36:49.963 [2024-11-17 09:36:54.788373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.963 [2024-11-17 09:36:54.788426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.963 qpair failed and we were unable to recover it. 
00:36:49.963 [2024-11-17 09:36:54.788611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.963 [2024-11-17 09:36:54.788649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.963 qpair failed and we were unable to recover it. 00:36:49.963 [2024-11-17 09:36:54.788774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.963 [2024-11-17 09:36:54.788811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.963 qpair failed and we were unable to recover it. 00:36:49.963 [2024-11-17 09:36:54.789004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.963 [2024-11-17 09:36:54.789041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.963 qpair failed and we were unable to recover it. 00:36:49.963 [2024-11-17 09:36:54.789164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.963 [2024-11-17 09:36:54.789200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.963 qpair failed and we were unable to recover it. 00:36:49.963 [2024-11-17 09:36:54.789362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.963 [2024-11-17 09:36:54.789402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.963 qpair failed and we were unable to recover it. 00:36:49.963 [2024-11-17 09:36:54.789507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.963 [2024-11-17 09:36:54.789541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.963 qpair failed and we were unable to recover it. 00:36:49.963 [2024-11-17 09:36:54.789769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.963 [2024-11-17 09:36:54.789835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.963 qpair failed and we were unable to recover it. 00:36:49.963 [2024-11-17 09:36:54.789996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.963 [2024-11-17 09:36:54.790052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.963 qpair failed and we were unable to recover it. 00:36:49.963 [2024-11-17 09:36:54.790222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.963 [2024-11-17 09:36:54.790276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.963 qpair failed and we were unable to recover it. 00:36:49.963 [2024-11-17 09:36:54.790418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.963 [2024-11-17 09:36:54.790453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.963 qpair failed and we were unable to recover it. 
00:36:49.963 [2024-11-17 09:36:54.790599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.963 [2024-11-17 09:36:54.790651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.963 qpair failed and we were unable to recover it. 00:36:49.963 [2024-11-17 09:36:54.790786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.963 [2024-11-17 09:36:54.790820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.963 qpair failed and we were unable to recover it. 00:36:49.963 [2024-11-17 09:36:54.791036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.963 [2024-11-17 09:36:54.791104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.963 qpair failed and we were unable to recover it. 00:36:49.963 [2024-11-17 09:36:54.791257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.963 [2024-11-17 09:36:54.791290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.964 qpair failed and we were unable to recover it. 00:36:49.964 [2024-11-17 09:36:54.791409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.964 [2024-11-17 09:36:54.791458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.964 qpair failed and we were unable to recover it. 00:36:49.964 [2024-11-17 09:36:54.791586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.964 [2024-11-17 09:36:54.791622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.964 qpair failed and we were unable to recover it. 00:36:49.964 [2024-11-17 09:36:54.791757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.964 [2024-11-17 09:36:54.791791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.964 qpair failed and we were unable to recover it. 00:36:49.964 [2024-11-17 09:36:54.791894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.964 [2024-11-17 09:36:54.791927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.964 qpair failed and we were unable to recover it. 00:36:49.964 [2024-11-17 09:36:54.792035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.964 [2024-11-17 09:36:54.792069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.964 qpair failed and we were unable to recover it. 00:36:49.964 [2024-11-17 09:36:54.792208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.964 [2024-11-17 09:36:54.792243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.964 qpair failed and we were unable to recover it. 
00:36:49.964 [2024-11-17 09:36:54.792376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.964 [2024-11-17 09:36:54.792410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.964 qpair failed and we were unable to recover it. 00:36:49.964 [2024-11-17 09:36:54.792562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.964 [2024-11-17 09:36:54.792599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.964 qpair failed and we were unable to recover it. 00:36:49.964 [2024-11-17 09:36:54.792764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.964 [2024-11-17 09:36:54.792798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.964 qpair failed and we were unable to recover it. 00:36:49.964 [2024-11-17 09:36:54.792942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.964 [2024-11-17 09:36:54.792978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.964 qpair failed and we were unable to recover it. 00:36:49.964 [2024-11-17 09:36:54.793107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.964 [2024-11-17 09:36:54.793150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.964 qpair failed and we were unable to recover it. 00:36:49.964 [2024-11-17 09:36:54.793252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.964 [2024-11-17 09:36:54.793285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.964 qpair failed and we were unable to recover it. 00:36:49.964 [2024-11-17 09:36:54.793426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.964 [2024-11-17 09:36:54.793462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.964 qpair failed and we were unable to recover it. 00:36:49.964 [2024-11-17 09:36:54.793613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.964 [2024-11-17 09:36:54.793650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.964 qpair failed and we were unable to recover it. 00:36:49.964 [2024-11-17 09:36:54.793791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.964 [2024-11-17 09:36:54.793833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.964 qpair failed and we were unable to recover it. 00:36:49.964 [2024-11-17 09:36:54.793954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.964 [2024-11-17 09:36:54.793991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.964 qpair failed and we were unable to recover it. 
00:36:49.964 [2024-11-17 09:36:54.794117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.964 [2024-11-17 09:36:54.794153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.964 qpair failed and we were unable to recover it. 00:36:49.964 [2024-11-17 09:36:54.794261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.964 [2024-11-17 09:36:54.794296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.964 qpair failed and we were unable to recover it. 00:36:49.964 [2024-11-17 09:36:54.794490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.964 [2024-11-17 09:36:54.794530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.964 qpair failed and we were unable to recover it. 00:36:49.964 [2024-11-17 09:36:54.794740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.964 [2024-11-17 09:36:54.794802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.964 qpair failed and we were unable to recover it. 00:36:49.964 [2024-11-17 09:36:54.794999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.964 [2024-11-17 09:36:54.795060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.964 qpair failed and we were unable to recover it. 00:36:49.964 [2024-11-17 09:36:54.795173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.964 [2024-11-17 09:36:54.795211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.964 qpair failed and we were unable to recover it. 00:36:49.964 [2024-11-17 09:36:54.795393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.964 [2024-11-17 09:36:54.795444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.964 qpair failed and we were unable to recover it. 00:36:49.964 [2024-11-17 09:36:54.795582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.964 [2024-11-17 09:36:54.795630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.964 qpair failed and we were unable to recover it. 00:36:49.964 [2024-11-17 09:36:54.795776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.964 [2024-11-17 09:36:54.795831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.964 qpair failed and we were unable to recover it. 00:36:49.964 [2024-11-17 09:36:54.795944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.964 [2024-11-17 09:36:54.795978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.964 qpair failed and we were unable to recover it. 
00:36:49.964 [2024-11-17 09:36:54.796122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.964 [2024-11-17 09:36:54.796180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.964 qpair failed and we were unable to recover it. 00:36:49.964 [2024-11-17 09:36:54.796285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.964 [2024-11-17 09:36:54.796320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.964 qpair failed and we were unable to recover it. 00:36:49.964 [2024-11-17 09:36:54.796516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.964 [2024-11-17 09:36:54.796570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.964 qpair failed and we were unable to recover it. 00:36:49.965 [2024-11-17 09:36:54.796822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.965 [2024-11-17 09:36:54.796896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.965 qpair failed and we were unable to recover it. 00:36:49.965 [2024-11-17 09:36:54.797053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.965 [2024-11-17 09:36:54.797118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.965 qpair failed and we were unable to recover it. 00:36:49.965 [2024-11-17 09:36:54.797278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.965 [2024-11-17 09:36:54.797312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.965 qpair failed and we were unable to recover it. 00:36:49.965 [2024-11-17 09:36:54.797461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.965 [2024-11-17 09:36:54.797496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.965 qpair failed and we were unable to recover it. 00:36:49.965 [2024-11-17 09:36:54.797633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.965 [2024-11-17 09:36:54.797686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.965 qpair failed and we were unable to recover it. 00:36:49.965 [2024-11-17 09:36:54.797816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.965 [2024-11-17 09:36:54.797868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.965 qpair failed and we were unable to recover it. 00:36:49.965 [2024-11-17 09:36:54.798089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.965 [2024-11-17 09:36:54.798144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.965 qpair failed and we were unable to recover it. 
00:36:49.965 [2024-11-17 09:36:54.798286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.965 [2024-11-17 09:36:54.798323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.965 qpair failed and we were unable to recover it. 00:36:49.965 [2024-11-17 09:36:54.798524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.965 [2024-11-17 09:36:54.798559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.965 qpair failed and we were unable to recover it. 00:36:49.965 [2024-11-17 09:36:54.798756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.965 [2024-11-17 09:36:54.798810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.965 qpair failed and we were unable to recover it. 00:36:49.965 [2024-11-17 09:36:54.798970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.965 [2024-11-17 09:36:54.799010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.965 qpair failed and we were unable to recover it. 00:36:49.965 [2024-11-17 09:36:54.799220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.965 [2024-11-17 09:36:54.799259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.965 qpair failed and we were unable to recover it. 00:36:49.965 [2024-11-17 09:36:54.799383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.965 [2024-11-17 09:36:54.799436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.965 qpair failed and we were unable to recover it. 00:36:49.965 [2024-11-17 09:36:54.799571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.965 [2024-11-17 09:36:54.799604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.965 qpair failed and we were unable to recover it. 00:36:49.965 [2024-11-17 09:36:54.799818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.965 [2024-11-17 09:36:54.799851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.965 qpair failed and we were unable to recover it. 00:36:49.965 [2024-11-17 09:36:54.800064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.965 [2024-11-17 09:36:54.800100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.965 qpair failed and we were unable to recover it. 00:36:49.965 [2024-11-17 09:36:54.800239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.965 [2024-11-17 09:36:54.800276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.965 qpair failed and we were unable to recover it. 
00:36:49.965 [2024-11-17 09:36:54.800407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.965 [2024-11-17 09:36:54.800441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.965 qpair failed and we were unable to recover it. 00:36:49.965 [2024-11-17 09:36:54.800591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.965 [2024-11-17 09:36:54.800639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.965 qpair failed and we were unable to recover it. 00:36:49.965 [2024-11-17 09:36:54.800753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.965 [2024-11-17 09:36:54.800789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.965 qpair failed and we were unable to recover it. 00:36:49.965 [2024-11-17 09:36:54.800933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.965 [2024-11-17 09:36:54.800971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.965 qpair failed and we were unable to recover it. 00:36:49.965 [2024-11-17 09:36:54.801115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.965 [2024-11-17 09:36:54.801152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.965 qpair failed and we were unable to recover it. 00:36:49.965 [2024-11-17 09:36:54.801300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.965 [2024-11-17 09:36:54.801337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.965 qpair failed and we were unable to recover it. 00:36:49.965 [2024-11-17 09:36:54.801503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.965 [2024-11-17 09:36:54.801551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.965 qpair failed and we were unable to recover it. 00:36:49.965 [2024-11-17 09:36:54.801690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.965 [2024-11-17 09:36:54.801726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.965 qpair failed and we were unable to recover it. 00:36:49.965 [2024-11-17 09:36:54.801838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.965 [2024-11-17 09:36:54.801896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.965 qpair failed and we were unable to recover it. 00:36:49.965 [2024-11-17 09:36:54.802080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.965 [2024-11-17 09:36:54.802118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.965 qpair failed and we were unable to recover it. 
00:36:49.965 [2024-11-17 09:36:54.802265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.965 [2024-11-17 09:36:54.802302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.965 qpair failed and we were unable to recover it. 00:36:49.965 [2024-11-17 09:36:54.802469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.965 [2024-11-17 09:36:54.802503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.965 qpair failed and we were unable to recover it. 00:36:49.965 [2024-11-17 09:36:54.802637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.965 [2024-11-17 09:36:54.802689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.965 qpair failed and we were unable to recover it. 00:36:49.965 [2024-11-17 09:36:54.802820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.965 [2024-11-17 09:36:54.802871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.965 qpair failed and we were unable to recover it. 00:36:49.965 [2024-11-17 09:36:54.803025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.965 [2024-11-17 09:36:54.803063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.965 qpair failed and we were unable to recover it. 00:36:49.965 [2024-11-17 09:36:54.803178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.965 [2024-11-17 09:36:54.803216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.965 qpair failed and we were unable to recover it. 00:36:49.965 [2024-11-17 09:36:54.803376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.965 [2024-11-17 09:36:54.803410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.965 qpair failed and we were unable to recover it. 00:36:49.965 [2024-11-17 09:36:54.803510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.965 [2024-11-17 09:36:54.803543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.965 qpair failed and we were unable to recover it. 00:36:49.965 [2024-11-17 09:36:54.803667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.965 [2024-11-17 09:36:54.803703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.965 qpair failed and we were unable to recover it. 00:36:49.965 [2024-11-17 09:36:54.803898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.965 [2024-11-17 09:36:54.803935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.965 qpair failed and we were unable to recover it. 
00:36:49.965 [2024-11-17 09:36:54.804076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.966 [2024-11-17 09:36:54.804114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.966 qpair failed and we were unable to recover it. 00:36:49.966 [2024-11-17 09:36:54.804290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.966 [2024-11-17 09:36:54.804329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.966 qpair failed and we were unable to recover it. 00:36:49.966 [2024-11-17 09:36:54.804472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.966 [2024-11-17 09:36:54.804506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.966 qpair failed and we were unable to recover it. 00:36:49.966 [2024-11-17 09:36:54.804610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.966 [2024-11-17 09:36:54.804644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.966 qpair failed and we were unable to recover it. 00:36:49.966 [2024-11-17 09:36:54.804873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.966 [2024-11-17 09:36:54.804910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.966 qpair failed and we were unable to recover it. 00:36:49.966 [2024-11-17 09:36:54.805033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.966 [2024-11-17 09:36:54.805084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.966 qpair failed and we were unable to recover it. 00:36:49.966 [2024-11-17 09:36:54.805270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.966 [2024-11-17 09:36:54.805319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.966 qpair failed and we were unable to recover it. 00:36:49.966 [2024-11-17 09:36:54.805448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.966 [2024-11-17 09:36:54.805485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.966 qpair failed and we were unable to recover it. 00:36:49.966 [2024-11-17 09:36:54.805620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.966 [2024-11-17 09:36:54.805655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.966 qpair failed and we were unable to recover it. 00:36:49.966 [2024-11-17 09:36:54.805762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.966 [2024-11-17 09:36:54.805798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.966 qpair failed and we were unable to recover it. 
00:36:49.966 [2024-11-17 09:36:54.805943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.966 [2024-11-17 09:36:54.805978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.966 qpair failed and we were unable to recover it. 00:36:49.966 [2024-11-17 09:36:54.806174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.966 [2024-11-17 09:36:54.806240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.966 qpair failed and we were unable to recover it. 00:36:49.966 [2024-11-17 09:36:54.806397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.966 [2024-11-17 09:36:54.806446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.966 qpair failed and we were unable to recover it. 00:36:49.966 [2024-11-17 09:36:54.806588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.966 [2024-11-17 09:36:54.806623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.966 qpair failed and we were unable to recover it. 00:36:49.966 [2024-11-17 09:36:54.806791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.966 [2024-11-17 09:36:54.806851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.966 qpair failed and we were unable to recover it. 00:36:49.966 [2024-11-17 09:36:54.807067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.966 [2024-11-17 09:36:54.807165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.966 qpair failed and we were unable to recover it. 00:36:49.966 [2024-11-17 09:36:54.807276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.966 [2024-11-17 09:36:54.807314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.966 qpair failed and we were unable to recover it. 00:36:49.966 [2024-11-17 09:36:54.807499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.966 [2024-11-17 09:36:54.807535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.966 qpair failed and we were unable to recover it. 00:36:49.966 [2024-11-17 09:36:54.807663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.966 [2024-11-17 09:36:54.807697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.966 qpair failed and we were unable to recover it. 00:36:49.966 [2024-11-17 09:36:54.807820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.966 [2024-11-17 09:36:54.807874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.966 qpair failed and we were unable to recover it. 
00:36:49.966 [2024-11-17 09:36:54.808017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.966 [2024-11-17 09:36:54.808054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.966 qpair failed and we were unable to recover it. 00:36:49.966 [2024-11-17 09:36:54.808229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.966 [2024-11-17 09:36:54.808266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.966 qpair failed and we were unable to recover it. 00:36:49.966 [2024-11-17 09:36:54.808399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.966 [2024-11-17 09:36:54.808466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.966 qpair failed and we were unable to recover it. 00:36:49.966 [2024-11-17 09:36:54.808616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.966 [2024-11-17 09:36:54.808672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.966 qpair failed and we were unable to recover it. 00:36:49.966 [2024-11-17 09:36:54.808858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.966 [2024-11-17 09:36:54.808895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.966 qpair failed and we were unable to recover it. 00:36:49.966 [2024-11-17 09:36:54.809040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.966 [2024-11-17 09:36:54.809076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.966 qpair failed and we were unable to recover it. 00:36:49.966 [2024-11-17 09:36:54.809223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.966 [2024-11-17 09:36:54.809260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.966 qpair failed and we were unable to recover it. 00:36:49.966 [2024-11-17 09:36:54.809411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.966 [2024-11-17 09:36:54.809444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.966 qpair failed and we were unable to recover it. 00:36:49.966 [2024-11-17 09:36:54.809557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.966 [2024-11-17 09:36:54.809594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.966 qpair failed and we were unable to recover it. 00:36:49.966 [2024-11-17 09:36:54.809749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.966 [2024-11-17 09:36:54.809786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.966 qpair failed and we were unable to recover it. 
00:36:49.966 [2024-11-17 09:36:54.809962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.966 [2024-11-17 09:36:54.809999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.966 qpair failed and we were unable to recover it. 00:36:49.966 [2024-11-17 09:36:54.810141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.966 [2024-11-17 09:36:54.810178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.966 qpair failed and we were unable to recover it. 00:36:49.966 [2024-11-17 09:36:54.810314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.966 [2024-11-17 09:36:54.810374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.966 qpair failed and we were unable to recover it. 00:36:49.966 [2024-11-17 09:36:54.810635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.966 [2024-11-17 09:36:54.810682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.966 qpair failed and we were unable to recover it. 00:36:49.966 [2024-11-17 09:36:54.810840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.966 [2024-11-17 09:36:54.810893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.966 qpair failed and we were unable to recover it. 00:36:49.966 [2024-11-17 09:36:54.811027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.966 [2024-11-17 09:36:54.811082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.966 qpair failed and we were unable to recover it. 00:36:49.966 [2024-11-17 09:36:54.811210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.966 [2024-11-17 09:36:54.811244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.966 qpair failed and we were unable to recover it. 00:36:49.966 [2024-11-17 09:36:54.811381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.966 [2024-11-17 09:36:54.811415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.966 qpair failed and we were unable to recover it. 00:36:49.967 [2024-11-17 09:36:54.811552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.967 [2024-11-17 09:36:54.811587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.967 qpair failed and we were unable to recover it. 00:36:49.967 [2024-11-17 09:36:54.811754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.967 [2024-11-17 09:36:54.811792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.967 qpair failed and we were unable to recover it. 
00:36:49.967 [2024-11-17 09:36:54.811907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.967 [2024-11-17 09:36:54.811940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.967 qpair failed and we were unable to recover it. 00:36:49.967 [2024-11-17 09:36:54.812078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.967 [2024-11-17 09:36:54.812112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.967 qpair failed and we were unable to recover it. 00:36:49.967 [2024-11-17 09:36:54.812258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.967 [2024-11-17 09:36:54.812291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.967 qpair failed and we were unable to recover it. 00:36:49.967 [2024-11-17 09:36:54.812403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.967 [2024-11-17 09:36:54.812437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.967 qpair failed and we were unable to recover it. 00:36:49.967 [2024-11-17 09:36:54.812575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.967 [2024-11-17 09:36:54.812609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.967 qpair failed and we were unable to recover it. 00:36:49.967 [2024-11-17 09:36:54.812773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.967 [2024-11-17 09:36:54.812826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.967 qpair failed and we were unable to recover it. 00:36:49.967 [2024-11-17 09:36:54.812980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.967 [2024-11-17 09:36:54.813032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.967 qpair failed and we were unable to recover it. 00:36:49.967 [2024-11-17 09:36:54.813246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.967 [2024-11-17 09:36:54.813285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.967 qpair failed and we were unable to recover it. 00:36:49.967 [2024-11-17 09:36:54.813408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.967 [2024-11-17 09:36:54.813461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.967 qpair failed and we were unable to recover it. 00:36:49.967 [2024-11-17 09:36:54.813611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.967 [2024-11-17 09:36:54.813660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.967 qpair failed and we were unable to recover it. 
00:36:49.967 [2024-11-17 09:36:54.813849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.967 [2024-11-17 09:36:54.813901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.967 qpair failed and we were unable to recover it. 00:36:49.967 [2024-11-17 09:36:54.814064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.967 [2024-11-17 09:36:54.814121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.967 qpair failed and we were unable to recover it. 00:36:49.967 [2024-11-17 09:36:54.814270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.967 [2024-11-17 09:36:54.814304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.967 qpair failed and we were unable to recover it. 00:36:49.967 [2024-11-17 09:36:54.814419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.967 [2024-11-17 09:36:54.814455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.967 qpair failed and we were unable to recover it. 00:36:49.967 [2024-11-17 09:36:54.814633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.967 [2024-11-17 09:36:54.814686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.967 qpair failed and we were unable to recover it. 00:36:49.967 [2024-11-17 09:36:54.814845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.967 [2024-11-17 09:36:54.814886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.967 qpair failed and we were unable to recover it. 00:36:49.967 [2024-11-17 09:36:54.815148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.967 [2024-11-17 09:36:54.815208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.967 qpair failed and we were unable to recover it. 00:36:49.967 [2024-11-17 09:36:54.815331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.967 [2024-11-17 09:36:54.815364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.967 qpair failed and we were unable to recover it. 00:36:49.967 [2024-11-17 09:36:54.815531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.967 [2024-11-17 09:36:54.815579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.967 qpair failed and we were unable to recover it. 00:36:49.967 [2024-11-17 09:36:54.815787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.967 [2024-11-17 09:36:54.815842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.967 qpair failed and we were unable to recover it. 
00:36:49.967 [2024-11-17 09:36:54.816134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.967 [2024-11-17 09:36:54.816174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.967 qpair failed and we were unable to recover it. 00:36:49.967 [2024-11-17 09:36:54.816320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.967 [2024-11-17 09:36:54.816359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.967 qpair failed and we were unable to recover it. 00:36:49.967 [2024-11-17 09:36:54.816552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.967 [2024-11-17 09:36:54.816587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.967 qpair failed and we were unable to recover it. 00:36:49.967 [2024-11-17 09:36:54.816733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.967 [2024-11-17 09:36:54.816772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.967 qpair failed and we were unable to recover it. 00:36:49.967 [2024-11-17 09:36:54.816943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.967 [2024-11-17 09:36:54.816981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.967 qpair failed and we were unable to recover it. 00:36:49.967 [2024-11-17 09:36:54.817154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.967 [2024-11-17 09:36:54.817192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.967 qpair failed and we were unable to recover it. 00:36:49.967 [2024-11-17 09:36:54.817334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.967 [2024-11-17 09:36:54.817389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.967 qpair failed and we were unable to recover it. 00:36:49.967 [2024-11-17 09:36:54.817552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.967 [2024-11-17 09:36:54.817599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.967 qpair failed and we were unable to recover it. 00:36:49.967 [2024-11-17 09:36:54.817739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.967 [2024-11-17 09:36:54.817774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.967 qpair failed and we were unable to recover it. 00:36:49.967 [2024-11-17 09:36:54.817923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.967 [2024-11-17 09:36:54.817957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.967 qpair failed and we were unable to recover it. 
00:36:49.967 [2024-11-17 09:36:54.818079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.967 [2024-11-17 09:36:54.818118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.967 qpair failed and we were unable to recover it. 00:36:49.967 [2024-11-17 09:36:54.818289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.967 [2024-11-17 09:36:54.818323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.967 qpair failed and we were unable to recover it. 00:36:49.967 [2024-11-17 09:36:54.818467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.967 [2024-11-17 09:36:54.818501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.967 qpair failed and we were unable to recover it. 00:36:49.967 [2024-11-17 09:36:54.818758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.967 [2024-11-17 09:36:54.818796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.967 qpair failed and we were unable to recover it. 00:36:49.967 [2024-11-17 09:36:54.818974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.967 [2024-11-17 09:36:54.819043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.967 qpair failed and we were unable to recover it. 00:36:49.967 [2024-11-17 09:36:54.819188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.967 [2024-11-17 09:36:54.819225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.968 qpair failed and we were unable to recover it. 00:36:49.968 [2024-11-17 09:36:54.819357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.968 [2024-11-17 09:36:54.819401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.968 qpair failed and we were unable to recover it. 00:36:49.968 [2024-11-17 09:36:54.819557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.968 [2024-11-17 09:36:54.819592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.968 qpair failed and we were unable to recover it. 00:36:49.968 [2024-11-17 09:36:54.819743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.968 [2024-11-17 09:36:54.819781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.968 qpair failed and we were unable to recover it. 00:36:49.968 [2024-11-17 09:36:54.819989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.968 [2024-11-17 09:36:54.820024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.968 qpair failed and we were unable to recover it. 
00:36:49.968 [2024-11-17 09:36:54.820144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.968 [2024-11-17 09:36:54.820192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.968 qpair failed and we were unable to recover it. 00:36:49.968 [2024-11-17 09:36:54.820333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.968 [2024-11-17 09:36:54.820379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.968 qpair failed and we were unable to recover it. 00:36:49.968 [2024-11-17 09:36:54.820531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.968 [2024-11-17 09:36:54.820565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.968 qpair failed and we were unable to recover it. 00:36:49.968 [2024-11-17 09:36:54.820702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.968 [2024-11-17 09:36:54.820736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.968 qpair failed and we were unable to recover it. 00:36:49.968 [2024-11-17 09:36:54.820868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.968 [2024-11-17 09:36:54.820902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.968 qpair failed and we were unable to recover it. 00:36:49.968 [2024-11-17 09:36:54.821069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.968 [2024-11-17 09:36:54.821102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.968 qpair failed and we were unable to recover it. 00:36:49.968 [2024-11-17 09:36:54.821319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.968 [2024-11-17 09:36:54.821354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.968 qpair failed and we were unable to recover it. 00:36:49.968 [2024-11-17 09:36:54.821520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.968 [2024-11-17 09:36:54.821554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.968 qpair failed and we were unable to recover it. 00:36:49.968 [2024-11-17 09:36:54.821662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.968 [2024-11-17 09:36:54.821695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.968 qpair failed and we were unable to recover it. 00:36:49.968 [2024-11-17 09:36:54.821853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.968 [2024-11-17 09:36:54.821905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.968 qpair failed and we were unable to recover it. 
00:36:49.968 [2024-11-17 09:36:54.822096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.968 [2024-11-17 09:36:54.822155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.968 qpair failed and we were unable to recover it. 00:36:49.968 [2024-11-17 09:36:54.822330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.968 [2024-11-17 09:36:54.822363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.968 qpair failed and we were unable to recover it. 00:36:49.968 [2024-11-17 09:36:54.822586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.968 [2024-11-17 09:36:54.822630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.968 qpair failed and we were unable to recover it. 00:36:49.968 [2024-11-17 09:36:54.822785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.968 [2024-11-17 09:36:54.822837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.968 qpair failed and we were unable to recover it. 00:36:49.968 [2024-11-17 09:36:54.823033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.968 [2024-11-17 09:36:54.823066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.968 qpair failed and we were unable to recover it. 00:36:49.968 [2024-11-17 09:36:54.823233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.968 [2024-11-17 09:36:54.823272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.968 qpair failed and we were unable to recover it. 00:36:49.968 [2024-11-17 09:36:54.823426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.968 [2024-11-17 09:36:54.823465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.968 qpair failed and we were unable to recover it. 00:36:49.968 [2024-11-17 09:36:54.823625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.968 [2024-11-17 09:36:54.823677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.968 qpair failed and we were unable to recover it. 00:36:49.968 [2024-11-17 09:36:54.823806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.968 [2024-11-17 09:36:54.823839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.968 qpair failed and we were unable to recover it. 00:36:49.968 [2024-11-17 09:36:54.823999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.968 [2024-11-17 09:36:54.824033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.968 qpair failed and we were unable to recover it. 
00:36:49.968 [2024-11-17 09:36:54.824189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.968 [2024-11-17 09:36:54.824223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.968 qpair failed and we were unable to recover it. 00:36:49.968 [2024-11-17 09:36:54.824409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.968 [2024-11-17 09:36:54.824449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.968 qpair failed and we were unable to recover it. 00:36:49.968 [2024-11-17 09:36:54.824614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.968 [2024-11-17 09:36:54.824667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.968 qpair failed and we were unable to recover it. 00:36:49.968 [2024-11-17 09:36:54.824799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.968 [2024-11-17 09:36:54.824839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.968 qpair failed and we were unable to recover it. 00:36:49.968 [2024-11-17 09:36:54.825051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.968 [2024-11-17 09:36:54.825089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.968 qpair failed and we were unable to recover it. 00:36:49.968 [2024-11-17 09:36:54.825263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.968 [2024-11-17 09:36:54.825300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.968 qpair failed and we were unable to recover it. 00:36:49.968 [2024-11-17 09:36:54.825472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.968 [2024-11-17 09:36:54.825505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.968 qpair failed and we were unable to recover it. 00:36:49.968 [2024-11-17 09:36:54.825653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.968 [2024-11-17 09:36:54.825705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.968 qpair failed and we were unable to recover it. 00:36:49.968 [2024-11-17 09:36:54.825965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.968 [2024-11-17 09:36:54.826026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.968 qpair failed and we were unable to recover it. 00:36:49.968 [2024-11-17 09:36:54.826159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.968 [2024-11-17 09:36:54.826197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.968 qpair failed and we were unable to recover it. 
00:36:49.968 [2024-11-17 09:36:54.826326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.968 [2024-11-17 09:36:54.826359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.968 qpair failed and we were unable to recover it. 00:36:49.968 [2024-11-17 09:36:54.826501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.968 [2024-11-17 09:36:54.826535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.968 qpair failed and we were unable to recover it. 00:36:49.968 [2024-11-17 09:36:54.826662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.968 [2024-11-17 09:36:54.826711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.968 qpair failed and we were unable to recover it. 00:36:49.968 [2024-11-17 09:36:54.826870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.969 [2024-11-17 09:36:54.826924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.969 qpair failed and we were unable to recover it. 00:36:49.969 [2024-11-17 09:36:54.827179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.969 [2024-11-17 09:36:54.827237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.969 qpair failed and we were unable to recover it. 00:36:49.969 [2024-11-17 09:36:54.827424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.969 [2024-11-17 09:36:54.827458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.969 qpair failed and we were unable to recover it. 00:36:49.969 [2024-11-17 09:36:54.827593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.969 [2024-11-17 09:36:54.827626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.969 qpair failed and we were unable to recover it. 00:36:49.969 [2024-11-17 09:36:54.827840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.969 [2024-11-17 09:36:54.827877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.969 qpair failed and we were unable to recover it. 00:36:49.969 [2024-11-17 09:36:54.827989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.969 [2024-11-17 09:36:54.828028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.969 qpair failed and we were unable to recover it. 00:36:49.969 [2024-11-17 09:36:54.828147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.969 [2024-11-17 09:36:54.828196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.969 qpair failed and we were unable to recover it. 
00:36:49.969 [2024-11-17 09:36:54.828387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.969 [2024-11-17 09:36:54.828420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.969 qpair failed and we were unable to recover it. 00:36:49.969 [2024-11-17 09:36:54.828544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.969 [2024-11-17 09:36:54.828592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.969 qpair failed and we were unable to recover it. 00:36:49.969 [2024-11-17 09:36:54.828778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.969 [2024-11-17 09:36:54.828832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.969 qpair failed and we were unable to recover it. 00:36:49.969 [2024-11-17 09:36:54.829070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.969 [2024-11-17 09:36:54.829110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.969 qpair failed and we were unable to recover it. 00:36:49.969 [2024-11-17 09:36:54.829270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.969 [2024-11-17 09:36:54.829310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.969 qpair failed and we were unable to recover it. 00:36:49.969 [2024-11-17 09:36:54.829447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.969 [2024-11-17 09:36:54.829492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.969 qpair failed and we were unable to recover it. 00:36:49.969 [2024-11-17 09:36:54.829627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.969 [2024-11-17 09:36:54.829661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.969 qpair failed and we were unable to recover it. 00:36:49.969 [2024-11-17 09:36:54.829820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.969 [2024-11-17 09:36:54.829854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.969 qpair failed and we were unable to recover it. 00:36:49.969 [2024-11-17 09:36:54.830014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.969 [2024-11-17 09:36:54.830048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.969 qpair failed and we were unable to recover it. 00:36:49.969 [2024-11-17 09:36:54.830212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.969 [2024-11-17 09:36:54.830277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.969 qpair failed and we were unable to recover it. 
00:36:49.969 [2024-11-17 09:36:54.830436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.969 [2024-11-17 09:36:54.830484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.969 qpair failed and we were unable to recover it. 00:36:49.969 [2024-11-17 09:36:54.830658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.969 [2024-11-17 09:36:54.830711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.969 qpair failed and we were unable to recover it. 00:36:49.969 [2024-11-17 09:36:54.830886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.969 [2024-11-17 09:36:54.830938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.969 qpair failed and we were unable to recover it. 00:36:49.969 [2024-11-17 09:36:54.831083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.969 [2024-11-17 09:36:54.831122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.969 qpair failed and we were unable to recover it. 00:36:49.969 [2024-11-17 09:36:54.831297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.969 [2024-11-17 09:36:54.831334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.969 qpair failed and we were unable to recover it. 00:36:49.969 [2024-11-17 09:36:54.831503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.969 [2024-11-17 09:36:54.831542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.969 qpair failed and we were unable to recover it. 00:36:49.969 [2024-11-17 09:36:54.831708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.969 [2024-11-17 09:36:54.831742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.969 qpair failed and we were unable to recover it. 00:36:49.969 [2024-11-17 09:36:54.831859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.969 [2024-11-17 09:36:54.831901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.969 qpair failed and we were unable to recover it. 00:36:49.969 [2024-11-17 09:36:54.832092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.969 [2024-11-17 09:36:54.832133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.969 qpair failed and we were unable to recover it. 00:36:49.969 [2024-11-17 09:36:54.832303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.969 [2024-11-17 09:36:54.832352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.969 qpair failed and we were unable to recover it. 
00:36:49.969 [2024-11-17 09:36:54.832530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.969 [2024-11-17 09:36:54.832578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.969 qpair failed and we were unable to recover it. 00:36:49.969 [2024-11-17 09:36:54.832767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.969 [2024-11-17 09:36:54.832806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.969 qpair failed and we were unable to recover it. 00:36:49.969 [2024-11-17 09:36:54.832944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.969 [2024-11-17 09:36:54.832979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.969 qpair failed and we were unable to recover it. 00:36:49.969 [2024-11-17 09:36:54.833201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.969 [2024-11-17 09:36:54.833240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.969 qpair failed and we were unable to recover it. 00:36:49.969 [2024-11-17 09:36:54.833431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.969 [2024-11-17 09:36:54.833465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.969 qpair failed and we were unable to recover it. 00:36:49.969 [2024-11-17 09:36:54.833566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.969 [2024-11-17 09:36:54.833600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.969 qpair failed and we were unable to recover it. 00:36:49.969 [2024-11-17 09:36:54.833765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.970 [2024-11-17 09:36:54.833801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.970 qpair failed and we were unable to recover it. 00:36:49.970 [2024-11-17 09:36:54.833935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.970 [2024-11-17 09:36:54.833968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.970 qpair failed and we were unable to recover it. 00:36:49.970 [2024-11-17 09:36:54.834142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.970 [2024-11-17 09:36:54.834178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.970 qpair failed and we were unable to recover it. 00:36:49.970 [2024-11-17 09:36:54.834341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.970 [2024-11-17 09:36:54.834384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.970 qpair failed and we were unable to recover it. 
00:36:49.970 [2024-11-17 09:36:54.834546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.970 [2024-11-17 09:36:54.834579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.970 qpair failed and we were unable to recover it. 00:36:49.970 [2024-11-17 09:36:54.834703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.970 [2024-11-17 09:36:54.834743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.970 qpair failed and we were unable to recover it. 00:36:49.970 [2024-11-17 09:36:54.834930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.970 [2024-11-17 09:36:54.834969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.970 qpair failed and we were unable to recover it. 00:36:49.970 [2024-11-17 09:36:54.835105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.970 [2024-11-17 09:36:54.835138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.970 qpair failed and we were unable to recover it. 00:36:49.970 [2024-11-17 09:36:54.835300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.970 [2024-11-17 09:36:54.835350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.970 qpair failed and we were unable to recover it. 00:36:49.970 [2024-11-17 09:36:54.835508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.970 [2024-11-17 09:36:54.835542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.970 qpair failed and we were unable to recover it. 00:36:49.970 [2024-11-17 09:36:54.835689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.970 [2024-11-17 09:36:54.835743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.970 qpair failed and we were unable to recover it. 00:36:49.970 [2024-11-17 09:36:54.835895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.970 [2024-11-17 09:36:54.835934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.970 qpair failed and we were unable to recover it. 00:36:49.970 [2024-11-17 09:36:54.836065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.970 [2024-11-17 09:36:54.836106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.970 qpair failed and we were unable to recover it. 00:36:49.970 [2024-11-17 09:36:54.836358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.970 [2024-11-17 09:36:54.836424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.970 qpair failed and we were unable to recover it. 
00:36:49.970 [2024-11-17 09:36:54.836563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.970 [2024-11-17 09:36:54.836596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.970 qpair failed and we were unable to recover it. 00:36:49.970 [2024-11-17 09:36:54.836878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.970 [2024-11-17 09:36:54.836953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.970 qpair failed and we were unable to recover it. 00:36:49.970 [2024-11-17 09:36:54.837202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.970 [2024-11-17 09:36:54.837257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.970 qpair failed and we were unable to recover it. 00:36:49.970 [2024-11-17 09:36:54.837377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.970 [2024-11-17 09:36:54.837412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.970 qpair failed and we were unable to recover it. 00:36:49.970 [2024-11-17 09:36:54.837567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.970 [2024-11-17 09:36:54.837619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.970 qpair failed and we were unable to recover it. 00:36:49.970 [2024-11-17 09:36:54.837808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.970 [2024-11-17 09:36:54.837865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.970 qpair failed and we were unable to recover it. 00:36:49.970 [2024-11-17 09:36:54.838014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.970 [2024-11-17 09:36:54.838111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.970 qpair failed and we were unable to recover it. 00:36:49.970 [2024-11-17 09:36:54.838249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.970 [2024-11-17 09:36:54.838282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.970 qpair failed and we were unable to recover it. 00:36:49.970 [2024-11-17 09:36:54.838465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.970 [2024-11-17 09:36:54.838514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.970 qpair failed and we were unable to recover it. 00:36:49.970 [2024-11-17 09:36:54.838659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.970 [2024-11-17 09:36:54.838695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.970 qpair failed and we were unable to recover it. 
00:36:49.970 [2024-11-17 09:36:54.838858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.970 [2024-11-17 09:36:54.838893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.970 qpair failed and we were unable to recover it. 00:36:49.970 [2024-11-17 09:36:54.839021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.970 [2024-11-17 09:36:54.839055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.970 qpair failed and we were unable to recover it. 00:36:49.970 [2024-11-17 09:36:54.839193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.970 [2024-11-17 09:36:54.839226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.970 qpair failed and we were unable to recover it. 00:36:49.970 [2024-11-17 09:36:54.839374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.970 [2024-11-17 09:36:54.839423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.970 qpair failed and we were unable to recover it. 00:36:49.970 [2024-11-17 09:36:54.839539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.970 [2024-11-17 09:36:54.839574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.970 qpair failed and we were unable to recover it. 00:36:49.970 [2024-11-17 09:36:54.839714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.970 [2024-11-17 09:36:54.839773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.970 qpair failed and we were unable to recover it. 00:36:49.970 [2024-11-17 09:36:54.839892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.970 [2024-11-17 09:36:54.839941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.970 qpair failed and we were unable to recover it. 00:36:49.970 [2024-11-17 09:36:54.840078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.970 [2024-11-17 09:36:54.840129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.970 qpair failed and we were unable to recover it. 00:36:49.970 [2024-11-17 09:36:54.840304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.970 [2024-11-17 09:36:54.840341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.970 qpair failed and we were unable to recover it. 00:36:49.970 [2024-11-17 09:36:54.840499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.970 [2024-11-17 09:36:54.840534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.970 qpair failed and we were unable to recover it. 
00:36:49.970 [2024-11-17 09:36:54.840686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.970 [2024-11-17 09:36:54.840724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:36:49.970 qpair failed and we were unable to recover it.
00:36:49.970 [... the same three-line failure sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for every connect attempt from 09:36:54.840 through 09:36:54.883, cycling over tqpair=0x6150001ffe80, 0x6150001f2f00, 0x615000210000 and 0x61500021ff00 ...]
00:36:49.976 [2024-11-17 09:36:54.883153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.976 [2024-11-17 09:36:54.883201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:36:49.976 qpair failed and we were unable to recover it.
00:36:49.976 [2024-11-17 09:36:54.883314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.976 [2024-11-17 09:36:54.883350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.976 qpair failed and we were unable to recover it. 00:36:49.976 [2024-11-17 09:36:54.883510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.976 [2024-11-17 09:36:54.883548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.976 qpair failed and we were unable to recover it. 00:36:49.976 [2024-11-17 09:36:54.883822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.976 [2024-11-17 09:36:54.883881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.976 qpair failed and we were unable to recover it. 00:36:49.976 [2024-11-17 09:36:54.884194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.976 [2024-11-17 09:36:54.884254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.976 qpair failed and we were unable to recover it. 00:36:49.976 [2024-11-17 09:36:54.884400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.976 [2024-11-17 09:36:54.884454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.976 qpair failed and we were unable to recover it. 00:36:49.976 [2024-11-17 09:36:54.884573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.976 [2024-11-17 09:36:54.884618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.976 qpair failed and we were unable to recover it. 00:36:49.976 [2024-11-17 09:36:54.884831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.976 [2024-11-17 09:36:54.884897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.976 qpair failed and we were unable to recover it. 00:36:49.976 [2024-11-17 09:36:54.885146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.976 [2024-11-17 09:36:54.885203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.976 qpair failed and we were unable to recover it. 00:36:49.976 [2024-11-17 09:36:54.885347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.976 [2024-11-17 09:36:54.885391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.976 qpair failed and we were unable to recover it. 00:36:49.976 [2024-11-17 09:36:54.885510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.976 [2024-11-17 09:36:54.885544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.976 qpair failed and we were unable to recover it. 
00:36:49.976 [2024-11-17 09:36:54.885672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.976 [2024-11-17 09:36:54.885722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.976 qpair failed and we were unable to recover it. 00:36:49.976 [2024-11-17 09:36:54.885846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.976 [2024-11-17 09:36:54.885882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.976 qpair failed and we were unable to recover it. 00:36:49.976 [2024-11-17 09:36:54.886071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.976 [2024-11-17 09:36:54.886109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.976 qpair failed and we were unable to recover it. 00:36:49.976 [2024-11-17 09:36:54.886239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.976 [2024-11-17 09:36:54.886272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.976 qpair failed and we were unable to recover it. 00:36:49.976 [2024-11-17 09:36:54.886412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.976 [2024-11-17 09:36:54.886448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.976 qpair failed and we were unable to recover it. 00:36:49.977 [2024-11-17 09:36:54.886589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.977 [2024-11-17 09:36:54.886622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.977 qpair failed and we were unable to recover it. 00:36:49.977 [2024-11-17 09:36:54.886771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.977 [2024-11-17 09:36:54.886809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.977 qpair failed and we were unable to recover it. 00:36:49.977 [2024-11-17 09:36:54.886956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.977 [2024-11-17 09:36:54.886994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.977 qpair failed and we were unable to recover it. 00:36:49.977 [2024-11-17 09:36:54.887124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.977 [2024-11-17 09:36:54.887177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.977 qpair failed and we were unable to recover it. 00:36:49.977 [2024-11-17 09:36:54.887352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.977 [2024-11-17 09:36:54.887397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.977 qpair failed and we were unable to recover it. 
00:36:49.977 [2024-11-17 09:36:54.887552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.977 [2024-11-17 09:36:54.887585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.977 qpair failed and we were unable to recover it. 00:36:49.977 [2024-11-17 09:36:54.887724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.977 [2024-11-17 09:36:54.887776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.977 qpair failed and we were unable to recover it. 00:36:49.977 [2024-11-17 09:36:54.887919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.977 [2024-11-17 09:36:54.887956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.977 qpair failed and we were unable to recover it. 00:36:49.977 [2024-11-17 09:36:54.888094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.977 [2024-11-17 09:36:54.888131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.977 qpair failed and we were unable to recover it. 00:36:49.977 [2024-11-17 09:36:54.888339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.977 [2024-11-17 09:36:54.888394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.977 qpair failed and we were unable to recover it. 00:36:49.977 [2024-11-17 09:36:54.888554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.977 [2024-11-17 09:36:54.888607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.977 qpair failed and we were unable to recover it. 00:36:49.977 [2024-11-17 09:36:54.888719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.977 [2024-11-17 09:36:54.888774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.977 qpair failed and we were unable to recover it. 00:36:49.977 [2024-11-17 09:36:54.888887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.977 [2024-11-17 09:36:54.888924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.977 qpair failed and we were unable to recover it. 00:36:49.977 [2024-11-17 09:36:54.889069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.977 [2024-11-17 09:36:54.889106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.977 qpair failed and we were unable to recover it. 00:36:49.977 [2024-11-17 09:36:54.889249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.977 [2024-11-17 09:36:54.889298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.977 qpair failed and we were unable to recover it. 
00:36:49.977 [2024-11-17 09:36:54.889434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.977 [2024-11-17 09:36:54.889468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.977 qpair failed and we were unable to recover it. 00:36:49.977 [2024-11-17 09:36:54.889615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.977 [2024-11-17 09:36:54.889653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.977 qpair failed and we were unable to recover it. 00:36:49.977 [2024-11-17 09:36:54.889877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.977 [2024-11-17 09:36:54.889937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.977 qpair failed and we were unable to recover it. 00:36:49.977 [2024-11-17 09:36:54.890137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.977 [2024-11-17 09:36:54.890199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.977 qpair failed and we were unable to recover it. 00:36:49.977 [2024-11-17 09:36:54.890339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.977 [2024-11-17 09:36:54.890378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.977 qpair failed and we were unable to recover it. 00:36:49.977 [2024-11-17 09:36:54.890529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.977 [2024-11-17 09:36:54.890562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.977 qpair failed and we were unable to recover it. 00:36:49.977 [2024-11-17 09:36:54.890686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.977 [2024-11-17 09:36:54.890734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.977 qpair failed and we were unable to recover it. 00:36:49.977 [2024-11-17 09:36:54.890903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.977 [2024-11-17 09:36:54.890943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.977 qpair failed and we were unable to recover it. 00:36:49.977 [2024-11-17 09:36:54.891080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.977 [2024-11-17 09:36:54.891119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.977 qpair failed and we were unable to recover it. 00:36:49.977 [2024-11-17 09:36:54.891300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.977 [2024-11-17 09:36:54.891350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.977 qpair failed and we were unable to recover it. 
00:36:49.977 [2024-11-17 09:36:54.891531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.977 [2024-11-17 09:36:54.891566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.977 qpair failed and we were unable to recover it. 00:36:49.977 [2024-11-17 09:36:54.891724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.977 [2024-11-17 09:36:54.891776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.977 qpair failed and we were unable to recover it. 00:36:49.977 [2024-11-17 09:36:54.891890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.977 [2024-11-17 09:36:54.891925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.977 qpair failed and we were unable to recover it. 00:36:49.977 [2024-11-17 09:36:54.892082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.977 [2024-11-17 09:36:54.892133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.977 qpair failed and we were unable to recover it. 00:36:49.977 [2024-11-17 09:36:54.892257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.977 [2024-11-17 09:36:54.892305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.977 qpair failed and we were unable to recover it. 00:36:49.977 [2024-11-17 09:36:54.892473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.977 [2024-11-17 09:36:54.892522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.977 qpair failed and we were unable to recover it. 00:36:49.977 [2024-11-17 09:36:54.892668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.977 [2024-11-17 09:36:54.892704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.977 qpair failed and we were unable to recover it. 00:36:49.977 [2024-11-17 09:36:54.892894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.977 [2024-11-17 09:36:54.892953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.977 qpair failed and we were unable to recover it. 00:36:49.977 [2024-11-17 09:36:54.893149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.977 [2024-11-17 09:36:54.893207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.977 qpair failed and we were unable to recover it. 00:36:49.977 [2024-11-17 09:36:54.893325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.977 [2024-11-17 09:36:54.893363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.977 qpair failed and we were unable to recover it. 
00:36:49.977 [2024-11-17 09:36:54.893620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.977 [2024-11-17 09:36:54.893655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.977 qpair failed and we were unable to recover it. 00:36:49.977 [2024-11-17 09:36:54.893841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.977 [2024-11-17 09:36:54.893892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.977 qpair failed and we were unable to recover it. 00:36:49.977 [2024-11-17 09:36:54.894060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.978 [2024-11-17 09:36:54.894112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.978 qpair failed and we were unable to recover it. 00:36:49.978 [2024-11-17 09:36:54.894270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.978 [2024-11-17 09:36:54.894303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.978 qpair failed and we were unable to recover it. 00:36:49.978 [2024-11-17 09:36:54.894468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.978 [2024-11-17 09:36:54.894511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.978 qpair failed and we were unable to recover it. 00:36:49.978 [2024-11-17 09:36:54.894689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.978 [2024-11-17 09:36:54.894727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.978 qpair failed and we were unable to recover it. 00:36:49.978 [2024-11-17 09:36:54.894865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.978 [2024-11-17 09:36:54.894903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.978 qpair failed and we were unable to recover it. 00:36:49.978 [2024-11-17 09:36:54.895041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.978 [2024-11-17 09:36:54.895078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.978 qpair failed and we were unable to recover it. 00:36:49.978 [2024-11-17 09:36:54.895207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.978 [2024-11-17 09:36:54.895260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.978 qpair failed and we were unable to recover it. 00:36:49.978 [2024-11-17 09:36:54.895451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.978 [2024-11-17 09:36:54.895500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.978 qpair failed and we were unable to recover it. 
00:36:49.978 [2024-11-17 09:36:54.895668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.978 [2024-11-17 09:36:54.895720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.978 qpair failed and we were unable to recover it. 00:36:49.978 [2024-11-17 09:36:54.895828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.978 [2024-11-17 09:36:54.895861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.978 qpair failed and we were unable to recover it. 00:36:49.978 [2024-11-17 09:36:54.896118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.978 [2024-11-17 09:36:54.896177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.978 qpair failed and we were unable to recover it. 00:36:49.978 [2024-11-17 09:36:54.896279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.978 [2024-11-17 09:36:54.896314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.978 qpair failed and we were unable to recover it. 00:36:49.978 [2024-11-17 09:36:54.896448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.978 [2024-11-17 09:36:54.896487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.978 qpair failed and we were unable to recover it. 00:36:49.978 [2024-11-17 09:36:54.896633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.978 [2024-11-17 09:36:54.896692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.978 qpair failed and we were unable to recover it. 00:36:49.978 [2024-11-17 09:36:54.896933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.978 [2024-11-17 09:36:54.896974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.978 qpair failed and we were unable to recover it. 00:36:49.978 [2024-11-17 09:36:54.897209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.978 [2024-11-17 09:36:54.897268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.978 qpair failed and we were unable to recover it. 00:36:49.978 [2024-11-17 09:36:54.897413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.978 [2024-11-17 09:36:54.897451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.978 qpair failed and we were unable to recover it. 00:36:49.978 [2024-11-17 09:36:54.897595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.978 [2024-11-17 09:36:54.897632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.978 qpair failed and we were unable to recover it. 
00:36:49.978 [2024-11-17 09:36:54.897748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.978 [2024-11-17 09:36:54.897787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.978 qpair failed and we were unable to recover it. 00:36:49.978 [2024-11-17 09:36:54.897996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.978 [2024-11-17 09:36:54.898029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.978 qpair failed and we were unable to recover it. 00:36:49.978 [2024-11-17 09:36:54.898178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.978 [2024-11-17 09:36:54.898232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.978 qpair failed and we were unable to recover it. 00:36:49.978 [2024-11-17 09:36:54.898401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.978 [2024-11-17 09:36:54.898437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.978 qpair failed and we were unable to recover it. 00:36:49.978 [2024-11-17 09:36:54.898642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.978 [2024-11-17 09:36:54.898695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.978 qpair failed and we were unable to recover it. 00:36:49.978 [2024-11-17 09:36:54.898943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.978 [2024-11-17 09:36:54.898983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.978 qpair failed and we were unable to recover it. 00:36:49.978 [2024-11-17 09:36:54.899105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.978 [2024-11-17 09:36:54.899143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.978 qpair failed and we were unable to recover it. 00:36:49.978 [2024-11-17 09:36:54.899339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.978 [2024-11-17 09:36:54.899381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.978 qpair failed and we were unable to recover it. 00:36:49.978 [2024-11-17 09:36:54.899528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.978 [2024-11-17 09:36:54.899563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.978 qpair failed and we were unable to recover it. 00:36:49.978 [2024-11-17 09:36:54.899672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.978 [2024-11-17 09:36:54.899706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.978 qpair failed and we were unable to recover it. 
00:36:49.978 [2024-11-17 09:36:54.899934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.978 [2024-11-17 09:36:54.899998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.978 qpair failed and we were unable to recover it. 00:36:49.978 [2024-11-17 09:36:54.900108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.978 [2024-11-17 09:36:54.900146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.978 qpair failed and we were unable to recover it. 00:36:49.978 [2024-11-17 09:36:54.900269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.978 [2024-11-17 09:36:54.900306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.978 qpair failed and we were unable to recover it. 00:36:49.978 [2024-11-17 09:36:54.900470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.978 [2024-11-17 09:36:54.900503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.978 qpair failed and we were unable to recover it. 00:36:49.978 [2024-11-17 09:36:54.900653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.978 [2024-11-17 09:36:54.900714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.978 qpair failed and we were unable to recover it. 00:36:49.978 [2024-11-17 09:36:54.900901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.978 [2024-11-17 09:36:54.900941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.978 qpair failed and we were unable to recover it. 00:36:49.979 [2024-11-17 09:36:54.901078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.979 [2024-11-17 09:36:54.901114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.979 qpair failed and we were unable to recover it. 00:36:49.979 [2024-11-17 09:36:54.901245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.979 [2024-11-17 09:36:54.901279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.979 qpair failed and we were unable to recover it. 00:36:49.979 [2024-11-17 09:36:54.901396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.979 [2024-11-17 09:36:54.901431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.979 qpair failed and we were unable to recover it. 00:36:49.979 [2024-11-17 09:36:54.901593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.979 [2024-11-17 09:36:54.901627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.979 qpair failed and we were unable to recover it. 
00:36:49.979 [2024-11-17 09:36:54.901736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.979 [2024-11-17 09:36:54.901770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.979 qpair failed and we were unable to recover it. 00:36:49.979 [2024-11-17 09:36:54.901882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.979 [2024-11-17 09:36:54.901916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.979 qpair failed and we were unable to recover it. 00:36:49.979 [2024-11-17 09:36:54.902040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.979 [2024-11-17 09:36:54.902076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.979 qpair failed and we were unable to recover it. 00:36:49.979 [2024-11-17 09:36:54.902181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.979 [2024-11-17 09:36:54.902215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.979 qpair failed and we were unable to recover it. 00:36:49.979 [2024-11-17 09:36:54.902395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.979 [2024-11-17 09:36:54.902444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.979 qpair failed and we were unable to recover it. 00:36:49.979 [2024-11-17 09:36:54.902607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.979 [2024-11-17 09:36:54.902660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.979 qpair failed and we were unable to recover it. 00:36:49.979 [2024-11-17 09:36:54.902770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.979 [2024-11-17 09:36:54.902804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.979 qpair failed and we were unable to recover it. 00:36:49.979 [2024-11-17 09:36:54.902987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.979 [2024-11-17 09:36:54.903046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.979 qpair failed and we were unable to recover it. 00:36:49.979 [2024-11-17 09:36:54.903163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.979 [2024-11-17 09:36:54.903197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.979 qpair failed and we were unable to recover it. 00:36:49.979 [2024-11-17 09:36:54.903334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.979 [2024-11-17 09:36:54.903376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.979 qpair failed and we were unable to recover it. 
00:36:49.979 [2024-11-17 09:36:54.903504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.979 [2024-11-17 09:36:54.903544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.979 qpair failed and we were unable to recover it. 00:36:49.979 [2024-11-17 09:36:54.903693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.979 [2024-11-17 09:36:54.903730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.979 qpair failed and we were unable to recover it. 00:36:49.979 [2024-11-17 09:36:54.903927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.979 [2024-11-17 09:36:54.903990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.979 qpair failed and we were unable to recover it. 00:36:49.979 [2024-11-17 09:36:54.904135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.979 [2024-11-17 09:36:54.904173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.979 qpair failed and we were unable to recover it. 00:36:49.979 [2024-11-17 09:36:54.904297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.979 [2024-11-17 09:36:54.904335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.979 qpair failed and we were unable to recover it. 00:36:49.979 [2024-11-17 09:36:54.904529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.979 [2024-11-17 09:36:54.904582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.979 qpair failed and we were unable to recover it. 00:36:49.979 [2024-11-17 09:36:54.904757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.979 [2024-11-17 09:36:54.904811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.979 qpair failed and we were unable to recover it. 00:36:49.979 [2024-11-17 09:36:54.904972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.979 [2024-11-17 09:36:54.905024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.979 qpair failed and we were unable to recover it. 00:36:49.979 [2024-11-17 09:36:54.905185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.979 [2024-11-17 09:36:54.905219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.979 qpair failed and we were unable to recover it. 00:36:49.979 [2024-11-17 09:36:54.905351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.979 [2024-11-17 09:36:54.905394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.979 qpair failed and we were unable to recover it. 
00:36:49.979 [2024-11-17 09:36:54.905560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.979 [2024-11-17 09:36:54.905614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.979 qpair failed and we were unable to recover it. 00:36:49.979 [2024-11-17 09:36:54.905743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.979 [2024-11-17 09:36:54.905782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.979 qpair failed and we were unable to recover it. 00:36:49.979 [2024-11-17 09:36:54.905893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.979 [2024-11-17 09:36:54.905931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.979 qpair failed and we were unable to recover it. 00:36:49.979 [2024-11-17 09:36:54.906165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.979 [2024-11-17 09:36:54.906198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.979 qpair failed and we were unable to recover it. 00:36:49.979 [2024-11-17 09:36:54.906298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.979 [2024-11-17 09:36:54.906331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.979 qpair failed and we were unable to recover it. 00:36:49.979 [2024-11-17 09:36:54.906516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.979 [2024-11-17 09:36:54.906569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.979 qpair failed and we were unable to recover it. 00:36:49.979 [2024-11-17 09:36:54.906739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.979 [2024-11-17 09:36:54.906794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.979 qpair failed and we were unable to recover it. 00:36:49.979 [2024-11-17 09:36:54.906983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.979 [2024-11-17 09:36:54.907048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.979 qpair failed and we were unable to recover it. 00:36:49.979 [2024-11-17 09:36:54.907147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.979 [2024-11-17 09:36:54.907181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.979 qpair failed and we were unable to recover it. 00:36:49.979 [2024-11-17 09:36:54.907323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.979 [2024-11-17 09:36:54.907357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.979 qpair failed and we were unable to recover it. 
00:36:49.979 [2024-11-17 09:36:54.907525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.979 [2024-11-17 09:36:54.907577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.979 qpair failed and we were unable to recover it. 00:36:49.979 [2024-11-17 09:36:54.907752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.979 [2024-11-17 09:36:54.907804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.979 qpair failed and we were unable to recover it. 00:36:49.979 [2024-11-17 09:36:54.907936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.979 [2024-11-17 09:36:54.907970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.979 qpair failed and we were unable to recover it. 00:36:49.980 [2024-11-17 09:36:54.908103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.980 [2024-11-17 09:36:54.908137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.980 qpair failed and we were unable to recover it. 00:36:49.980 [2024-11-17 09:36:54.908279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.980 [2024-11-17 09:36:54.908314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.980 qpair failed and we were unable to recover it. 00:36:49.980 [2024-11-17 09:36:54.908444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.980 [2024-11-17 09:36:54.908493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.980 qpair failed and we were unable to recover it. 00:36:49.980 [2024-11-17 09:36:54.908652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.980 [2024-11-17 09:36:54.908700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.980 qpair failed and we were unable to recover it. 00:36:49.980 [2024-11-17 09:36:54.908844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.980 [2024-11-17 09:36:54.908879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.980 qpair failed and we were unable to recover it. 00:36:49.980 [2024-11-17 09:36:54.909015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.980 [2024-11-17 09:36:54.909049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.980 qpair failed and we were unable to recover it. 00:36:49.980 [2024-11-17 09:36:54.909175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.980 [2024-11-17 09:36:54.909209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.980 qpair failed and we were unable to recover it. 
00:36:49.980 [2024-11-17 09:36:54.909345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.980 [2024-11-17 09:36:54.909384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.980 qpair failed and we were unable to recover it. 00:36:49.980 [2024-11-17 09:36:54.909535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.980 [2024-11-17 09:36:54.909582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.980 qpair failed and we were unable to recover it. 00:36:49.980 [2024-11-17 09:36:54.909711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.980 [2024-11-17 09:36:54.909760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.980 qpair failed and we were unable to recover it. 00:36:49.980 [2024-11-17 09:36:54.909873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.980 [2024-11-17 09:36:54.909908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.980 qpair failed and we were unable to recover it. 00:36:49.980 [2024-11-17 09:36:54.910065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.980 [2024-11-17 09:36:54.910099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.980 qpair failed and we were unable to recover it. 00:36:49.980 [2024-11-17 09:36:54.910208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.980 [2024-11-17 09:36:54.910242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.980 qpair failed and we were unable to recover it. 00:36:49.980 [2024-11-17 09:36:54.910351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.980 [2024-11-17 09:36:54.910391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.980 qpair failed and we were unable to recover it. 00:36:49.980 [2024-11-17 09:36:54.910525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.980 [2024-11-17 09:36:54.910562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.980 qpair failed and we were unable to recover it. 00:36:49.980 [2024-11-17 09:36:54.910684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.980 [2024-11-17 09:36:54.910721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.980 qpair failed and we were unable to recover it. 00:36:49.980 [2024-11-17 09:36:54.910851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.980 [2024-11-17 09:36:54.910888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.980 qpair failed and we were unable to recover it. 
00:36:49.980 [2024-11-17 09:36:54.911107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.980 [2024-11-17 09:36:54.911160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.980 qpair failed and we were unable to recover it. 00:36:49.980 [2024-11-17 09:36:54.911317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.980 [2024-11-17 09:36:54.911351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.980 qpair failed and we were unable to recover it. 00:36:49.980 [2024-11-17 09:36:54.911485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.980 [2024-11-17 09:36:54.911533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.980 qpair failed and we were unable to recover it. 00:36:49.980 [2024-11-17 09:36:54.911677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.980 [2024-11-17 09:36:54.911711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.980 qpair failed and we were unable to recover it. 00:36:49.980 [2024-11-17 09:36:54.911873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.980 [2024-11-17 09:36:54.911911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.980 qpair failed and we were unable to recover it. 00:36:49.980 [2024-11-17 09:36:54.912117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.980 [2024-11-17 09:36:54.912159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.980 qpair failed and we were unable to recover it. 00:36:49.980 [2024-11-17 09:36:54.912303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.980 [2024-11-17 09:36:54.912340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.980 qpair failed and we were unable to recover it. 00:36:49.980 [2024-11-17 09:36:54.912494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.980 [2024-11-17 09:36:54.912532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.980 qpair failed and we were unable to recover it. 00:36:49.980 [2024-11-17 09:36:54.912697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.980 [2024-11-17 09:36:54.912736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.980 qpair failed and we were unable to recover it. 00:36:49.980 [2024-11-17 09:36:54.912861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.980 [2024-11-17 09:36:54.912900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.980 qpair failed and we were unable to recover it. 
00:36:49.980 [2024-11-17 09:36:54.913014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.980 [2024-11-17 09:36:54.913051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.980 qpair failed and we were unable to recover it. 00:36:49.980 [2024-11-17 09:36:54.913191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.980 [2024-11-17 09:36:54.913238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.980 qpair failed and we were unable to recover it. 00:36:49.980 [2024-11-17 09:36:54.913379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.980 [2024-11-17 09:36:54.913416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.980 qpair failed and we were unable to recover it. 00:36:49.980 [2024-11-17 09:36:54.913556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.980 [2024-11-17 09:36:54.913591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.980 qpair failed and we were unable to recover it. 00:36:49.980 [2024-11-17 09:36:54.913741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.980 [2024-11-17 09:36:54.913778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.980 qpair failed and we were unable to recover it. 00:36:49.980 [2024-11-17 09:36:54.914032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.980 [2024-11-17 09:36:54.914090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.980 qpair failed and we were unable to recover it. 00:36:49.980 [2024-11-17 09:36:54.914217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.980 [2024-11-17 09:36:54.914250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.980 qpair failed and we were unable to recover it. 00:36:49.980 [2024-11-17 09:36:54.914385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.980 [2024-11-17 09:36:54.914420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.980 qpair failed and we were unable to recover it. 00:36:49.980 [2024-11-17 09:36:54.914547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.980 [2024-11-17 09:36:54.914595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.980 qpair failed and we were unable to recover it. 00:36:49.980 [2024-11-17 09:36:54.914828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.980 [2024-11-17 09:36:54.914890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.980 qpair failed and we were unable to recover it. 
00:36:49.980 [2024-11-17 09:36:54.915145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.980 [2024-11-17 09:36:54.915204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.980 qpair failed and we were unable to recover it. 00:36:49.980 [2024-11-17 09:36:54.915354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.980 [2024-11-17 09:36:54.915401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.980 qpair failed and we were unable to recover it. 00:36:49.980 [2024-11-17 09:36:54.915584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.980 [2024-11-17 09:36:54.915631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.980 qpair failed and we were unable to recover it. 00:36:49.980 [2024-11-17 09:36:54.915862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.981 [2024-11-17 09:36:54.915932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.981 qpair failed and we were unable to recover it. 00:36:49.981 [2024-11-17 09:36:54.916150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.981 [2024-11-17 09:36:54.916204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.981 qpair failed and we were unable to recover it. 00:36:49.981 [2024-11-17 09:36:54.916314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.981 [2024-11-17 09:36:54.916347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.981 qpair failed and we were unable to recover it. 00:36:49.981 [2024-11-17 09:36:54.916513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.981 [2024-11-17 09:36:54.916561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.981 qpair failed and we were unable to recover it. 00:36:49.981 [2024-11-17 09:36:54.916736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.981 [2024-11-17 09:36:54.916789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.981 qpair failed and we were unable to recover it. 00:36:49.981 [2024-11-17 09:36:54.916986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.981 [2024-11-17 09:36:54.917046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.981 qpair failed and we were unable to recover it. 00:36:49.981 [2024-11-17 09:36:54.917192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.981 [2024-11-17 09:36:54.917231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.981 qpair failed and we were unable to recover it. 
00:36:49.981 [2024-11-17 09:36:54.917382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.981 [2024-11-17 09:36:54.917435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.981 qpair failed and we were unable to recover it. 00:36:49.981 [2024-11-17 09:36:54.917594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.981 [2024-11-17 09:36:54.917628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.981 qpair failed and we were unable to recover it. 00:36:49.981 [2024-11-17 09:36:54.917758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.981 [2024-11-17 09:36:54.917807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.981 qpair failed and we were unable to recover it. 00:36:49.981 [2024-11-17 09:36:54.917968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.981 [2024-11-17 09:36:54.918023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.981 qpair failed and we were unable to recover it. 00:36:49.981 [2024-11-17 09:36:54.918136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.981 [2024-11-17 09:36:54.918170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:49.981 qpair failed and we were unable to recover it. 00:36:49.981 [2024-11-17 09:36:54.918331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.981 [2024-11-17 09:36:54.918372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.981 qpair failed and we were unable to recover it. 00:36:49.981 [2024-11-17 09:36:54.918523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.981 [2024-11-17 09:36:54.918571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.981 qpair failed and we were unable to recover it. 00:36:49.981 [2024-11-17 09:36:54.918756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.981 [2024-11-17 09:36:54.918811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.981 qpair failed and we were unable to recover it. 00:36:49.981 [2024-11-17 09:36:54.918976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.981 [2024-11-17 09:36:54.919044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.981 qpair failed and we were unable to recover it. 00:36:49.981 [2024-11-17 09:36:54.919242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.981 [2024-11-17 09:36:54.919276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.981 qpair failed and we were unable to recover it. 
00:36:49.981 [2024-11-17 09:36:54.919388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.981 [2024-11-17 09:36:54.919422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.981 qpair failed and we were unable to recover it. 00:36:49.981 [2024-11-17 09:36:54.919561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.981 [2024-11-17 09:36:54.919594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.981 qpair failed and we were unable to recover it. 00:36:49.981 [2024-11-17 09:36:54.919726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.981 [2024-11-17 09:36:54.919780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.981 qpair failed and we were unable to recover it. 00:36:49.981 [2024-11-17 09:36:54.919900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.981 [2024-11-17 09:36:54.919942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.981 qpair failed and we were unable to recover it. 00:36:49.981 [2024-11-17 09:36:54.920143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.981 [2024-11-17 09:36:54.920243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.981 qpair failed and we were unable to recover it. 00:36:49.981 [2024-11-17 09:36:54.920430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.981 [2024-11-17 09:36:54.920481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.981 qpair failed and we were unable to recover it. 00:36:49.981 [2024-11-17 09:36:54.920629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.981 [2024-11-17 09:36:54.920666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.981 qpair failed and we were unable to recover it. 00:36:49.981 [2024-11-17 09:36:54.920820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.981 [2024-11-17 09:36:54.920861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.981 qpair failed and we were unable to recover it. 00:36:49.981 [2024-11-17 09:36:54.920990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.981 [2024-11-17 09:36:54.921027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.981 qpair failed and we were unable to recover it. 00:36:49.981 [2024-11-17 09:36:54.921168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.981 [2024-11-17 09:36:54.921203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.981 qpair failed and we were unable to recover it. 
00:36:49.981 [2024-11-17 09:36:54.921383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.981 [2024-11-17 09:36:54.921418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.981 qpair failed and we were unable to recover it. 00:36:49.981 [2024-11-17 09:36:54.921550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.981 [2024-11-17 09:36:54.921584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.981 qpair failed and we were unable to recover it. 00:36:49.981 [2024-11-17 09:36:54.921746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.981 [2024-11-17 09:36:54.921779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.981 qpair failed and we were unable to recover it. 00:36:49.981 [2024-11-17 09:36:54.921907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.981 [2024-11-17 09:36:54.921944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:49.981 qpair failed and we were unable to recover it. 00:36:49.981 [2024-11-17 09:36:54.922173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.981 [2024-11-17 09:36:54.922245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.981 qpair failed and we were unable to recover it. 00:36:49.981 [2024-11-17 09:36:54.922422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.981 [2024-11-17 09:36:54.922459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.981 qpair failed and we were unable to recover it. 00:36:49.981 [2024-11-17 09:36:54.922595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.981 [2024-11-17 09:36:54.922629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.981 qpair failed and we were unable to recover it. 00:36:49.981 [2024-11-17 09:36:54.922763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.981 [2024-11-17 09:36:54.922798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.981 qpair failed and we were unable to recover it. 00:36:49.981 [2024-11-17 09:36:54.922932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.981 [2024-11-17 09:36:54.922966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.981 qpair failed and we were unable to recover it. 00:36:49.981 [2024-11-17 09:36:54.923093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.981 [2024-11-17 09:36:54.923142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.981 qpair failed and we were unable to recover it. 
00:36:49.981 [2024-11-17 09:36:54.923306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.981 [2024-11-17 09:36:54.923343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:49.981 qpair failed and we were unable to recover it. 00:36:49.981 [2024-11-17 09:36:54.923494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.981 [2024-11-17 09:36:54.923543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.981 qpair failed and we were unable to recover it. 00:36:49.981 [2024-11-17 09:36:54.923712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.981 [2024-11-17 09:36:54.923753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.981 qpair failed and we were unable to recover it. 00:36:49.981 [2024-11-17 09:36:54.923927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.981 [2024-11-17 09:36:54.923969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.981 qpair failed and we were unable to recover it. 00:36:49.981 [2024-11-17 09:36:54.924096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.982 [2024-11-17 09:36:54.924134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.982 qpair failed and we were unable to recover it. 00:36:49.982 [2024-11-17 09:36:54.924288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.982 [2024-11-17 09:36:54.924321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.982 qpair failed and we were unable to recover it. 00:36:49.982 [2024-11-17 09:36:54.924430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.982 [2024-11-17 09:36:54.924463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.982 qpair failed and we were unable to recover it. 00:36:49.982 [2024-11-17 09:36:54.924593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.982 [2024-11-17 09:36:54.924645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.982 qpair failed and we were unable to recover it. 00:36:49.982 [2024-11-17 09:36:54.924783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.982 [2024-11-17 09:36:54.924820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:49.982 qpair failed and we were unable to recover it. 00:36:50.262 [2024-11-17 09:36:54.924954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.262 [2024-11-17 09:36:54.924991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.262 qpair failed and we were unable to recover it. 
00:36:50.262 [2024-11-17 09:36:54.925109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.262 [2024-11-17 09:36:54.925146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.262 qpair failed and we were unable to recover it. 00:36:50.262 [2024-11-17 09:36:54.925307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.262 [2024-11-17 09:36:54.925341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.262 qpair failed and we were unable to recover it. 00:36:50.262 [2024-11-17 09:36:54.925502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.262 [2024-11-17 09:36:54.925555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.262 qpair failed and we were unable to recover it. 00:36:50.262 [2024-11-17 09:36:54.925720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.262 [2024-11-17 09:36:54.925755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.262 qpair failed and we were unable to recover it. 00:36:50.262 [2024-11-17 09:36:54.925893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.262 [2024-11-17 09:36:54.925961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.262 qpair failed and we were unable to recover it. 00:36:50.262 [2024-11-17 09:36:54.926111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.262 [2024-11-17 09:36:54.926166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.262 qpair failed and we were unable to recover it. 00:36:50.262 [2024-11-17 09:36:54.926317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.262 [2024-11-17 09:36:54.926355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.262 qpair failed and we were unable to recover it. 00:36:50.262 [2024-11-17 09:36:54.926482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.262 [2024-11-17 09:36:54.926516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.262 qpair failed and we were unable to recover it. 00:36:50.262 [2024-11-17 09:36:54.926654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.262 [2024-11-17 09:36:54.926689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.262 qpair failed and we were unable to recover it. 00:36:50.262 [2024-11-17 09:36:54.926926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.262 [2024-11-17 09:36:54.926980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.262 qpair failed and we were unable to recover it. 
00:36:50.262 [2024-11-17 09:36:54.927110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.263 [2024-11-17 09:36:54.927144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.263 qpair failed and we were unable to recover it. 00:36:50.263 [2024-11-17 09:36:54.927280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.263 [2024-11-17 09:36:54.927318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.263 qpair failed and we were unable to recover it. 00:36:50.263 [2024-11-17 09:36:54.927509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.263 [2024-11-17 09:36:54.927543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.263 qpair failed and we were unable to recover it. 00:36:50.263 [2024-11-17 09:36:54.927665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.263 [2024-11-17 09:36:54.927702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.263 qpair failed and we were unable to recover it. 00:36:50.263 [2024-11-17 09:36:54.927821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.263 [2024-11-17 09:36:54.927858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.263 qpair failed and we were unable to recover it. 00:36:50.263 [2024-11-17 09:36:54.928002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.263 [2024-11-17 09:36:54.928039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.263 qpair failed and we were unable to recover it. 00:36:50.263 [2024-11-17 09:36:54.928170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.263 [2024-11-17 09:36:54.928206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.263 qpair failed and we were unable to recover it. 00:36:50.263 [2024-11-17 09:36:54.928375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.263 [2024-11-17 09:36:54.928409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.263 qpair failed and we were unable to recover it. 00:36:50.263 [2024-11-17 09:36:54.928510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.263 [2024-11-17 09:36:54.928543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.263 qpair failed and we were unable to recover it. 00:36:50.263 [2024-11-17 09:36:54.928689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.263 [2024-11-17 09:36:54.928737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.263 qpair failed and we were unable to recover it. 
00:36:50.263 [2024-11-17 09:36:54.928910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.263 [2024-11-17 09:36:54.928947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.263 qpair failed and we were unable to recover it. 00:36:50.263 [2024-11-17 09:36:54.929086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.263 [2024-11-17 09:36:54.929126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.263 qpair failed and we were unable to recover it. 00:36:50.263 [2024-11-17 09:36:54.929300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.263 [2024-11-17 09:36:54.929339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.263 qpair failed and we were unable to recover it. 00:36:50.263 [2024-11-17 09:36:54.929473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.263 [2024-11-17 09:36:54.929508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.263 qpair failed and we were unable to recover it. 00:36:50.263 [2024-11-17 09:36:54.929647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.263 [2024-11-17 09:36:54.929681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.263 qpair failed and we were unable to recover it. 00:36:50.263 [2024-11-17 09:36:54.929813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.263 [2024-11-17 09:36:54.929866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.263 qpair failed and we were unable to recover it. 00:36:50.263 [2024-11-17 09:36:54.930088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.263 [2024-11-17 09:36:54.930145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.263 qpair failed and we were unable to recover it. 00:36:50.263 [2024-11-17 09:36:54.930293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.263 [2024-11-17 09:36:54.930326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.263 qpair failed and we were unable to recover it. 00:36:50.263 [2024-11-17 09:36:54.930471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.263 [2024-11-17 09:36:54.930505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.263 qpair failed and we were unable to recover it. 00:36:50.263 [2024-11-17 09:36:54.930641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.263 [2024-11-17 09:36:54.930675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.263 qpair failed and we were unable to recover it. 
00:36:50.263 [2024-11-17 09:36:54.930883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.263 [2024-11-17 09:36:54.930945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.263 qpair failed and we were unable to recover it. 00:36:50.263 [2024-11-17 09:36:54.931084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.263 [2024-11-17 09:36:54.931136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.263 qpair failed and we were unable to recover it. 00:36:50.263 [2024-11-17 09:36:54.931289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.263 [2024-11-17 09:36:54.931327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.263 qpair failed and we were unable to recover it. 00:36:50.263 [2024-11-17 09:36:54.931496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.263 [2024-11-17 09:36:54.931531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.263 qpair failed and we were unable to recover it. 00:36:50.263 [2024-11-17 09:36:54.931638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.263 [2024-11-17 09:36:54.931672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.263 qpair failed and we were unable to recover it. 00:36:50.263 [2024-11-17 09:36:54.931857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.263 [2024-11-17 09:36:54.931894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.263 qpair failed and we were unable to recover it. 00:36:50.263 [2024-11-17 09:36:54.932065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.263 [2024-11-17 09:36:54.932102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.263 qpair failed and we were unable to recover it. 00:36:50.263 [2024-11-17 09:36:54.932250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.263 [2024-11-17 09:36:54.932288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.263 qpair failed and we were unable to recover it. 00:36:50.263 [2024-11-17 09:36:54.932474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.263 [2024-11-17 09:36:54.932509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.263 qpair failed and we were unable to recover it. 00:36:50.263 [2024-11-17 09:36:54.932661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.263 [2024-11-17 09:36:54.932709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.263 qpair failed and we were unable to recover it. 
00:36:50.263 [2024-11-17 09:36:54.932837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.263 [2024-11-17 09:36:54.932891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.263 qpair failed and we were unable to recover it. 00:36:50.263 [2024-11-17 09:36:54.933038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.263 [2024-11-17 09:36:54.933135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.263 qpair failed and we were unable to recover it. 00:36:50.263 [2024-11-17 09:36:54.933246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.263 [2024-11-17 09:36:54.933287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.263 qpair failed and we were unable to recover it. 00:36:50.263 [2024-11-17 09:36:54.933471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.263 [2024-11-17 09:36:54.933524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.263 qpair failed and we were unable to recover it. 00:36:50.263 [2024-11-17 09:36:54.933693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.263 [2024-11-17 09:36:54.933745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.263 qpair failed and we were unable to recover it. 00:36:50.263 [2024-11-17 09:36:54.933916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.263 [2024-11-17 09:36:54.933955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.263 qpair failed and we were unable to recover it. 00:36:50.263 [2024-11-17 09:36:54.934111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.263 [2024-11-17 09:36:54.934149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.263 qpair failed and we were unable to recover it. 00:36:50.263 [2024-11-17 09:36:54.934296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.263 [2024-11-17 09:36:54.934334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.263 qpair failed and we were unable to recover it. 00:36:50.264 [2024-11-17 09:36:54.934514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.264 [2024-11-17 09:36:54.934563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.264 qpair failed and we were unable to recover it. 00:36:50.264 [2024-11-17 09:36:54.934726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.264 [2024-11-17 09:36:54.934780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.264 qpair failed and we were unable to recover it. 
00:36:50.264 [2024-11-17 09:36:54.934929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.264 [2024-11-17 09:36:54.934982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.264 qpair failed and we were unable to recover it. 00:36:50.264 [2024-11-17 09:36:54.935087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.264 [2024-11-17 09:36:54.935121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.264 qpair failed and we were unable to recover it. 00:36:50.264 [2024-11-17 09:36:54.935238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.264 [2024-11-17 09:36:54.935273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.264 qpair failed and we were unable to recover it. 00:36:50.264 [2024-11-17 09:36:54.935445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.264 [2024-11-17 09:36:54.935499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.264 qpair failed and we were unable to recover it. 00:36:50.264 [2024-11-17 09:36:54.935654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.264 [2024-11-17 09:36:54.935693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.264 qpair failed and we were unable to recover it. 00:36:50.264 [2024-11-17 09:36:54.935842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.264 [2024-11-17 09:36:54.935879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.264 qpair failed and we were unable to recover it. 00:36:50.264 [2024-11-17 09:36:54.936037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.264 [2024-11-17 09:36:54.936076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.264 qpair failed and we were unable to recover it. 00:36:50.264 [2024-11-17 09:36:54.936231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.264 [2024-11-17 09:36:54.936268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.264 qpair failed and we were unable to recover it. 00:36:50.264 [2024-11-17 09:36:54.936435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.264 [2024-11-17 09:36:54.936469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.264 qpair failed and we were unable to recover it. 00:36:50.264 [2024-11-17 09:36:54.936577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.264 [2024-11-17 09:36:54.936612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.264 qpair failed and we were unable to recover it. 
00:36:50.264 [2024-11-17 09:36:54.936797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.264 [2024-11-17 09:36:54.936852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.264 qpair failed and we were unable to recover it. 00:36:50.264 [2024-11-17 09:36:54.936951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.264 [2024-11-17 09:36:54.936985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.264 qpair failed and we were unable to recover it. 00:36:50.264 [2024-11-17 09:36:54.937098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.264 [2024-11-17 09:36:54.937133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.264 qpair failed and we were unable to recover it. 00:36:50.264 [2024-11-17 09:36:54.937301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.264 [2024-11-17 09:36:54.937335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.264 qpair failed and we were unable to recover it. 00:36:50.264 [2024-11-17 09:36:54.937462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.264 [2024-11-17 09:36:54.937510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.264 qpair failed and we were unable to recover it. 00:36:50.264 [2024-11-17 09:36:54.937652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.264 [2024-11-17 09:36:54.937688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.264 qpair failed and we were unable to recover it. 00:36:50.264 [2024-11-17 09:36:54.937841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.264 [2024-11-17 09:36:54.937890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.264 qpair failed and we were unable to recover it. 00:36:50.264 [2024-11-17 09:36:54.938080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.264 [2024-11-17 09:36:54.938142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.264 qpair failed and we were unable to recover it. 00:36:50.264 [2024-11-17 09:36:54.938292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.264 [2024-11-17 09:36:54.938341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.264 qpair failed and we were unable to recover it. 00:36:50.264 [2024-11-17 09:36:54.938461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.264 [2024-11-17 09:36:54.938495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.264 qpair failed and we were unable to recover it. 
00:36:50.264 [2024-11-17 09:36:54.938662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.264 [2024-11-17 09:36:54.938715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.264 qpair failed and we were unable to recover it. 00:36:50.264 [2024-11-17 09:36:54.938904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.264 [2024-11-17 09:36:54.938957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.264 qpair failed and we were unable to recover it. 00:36:50.264 [2024-11-17 09:36:54.939191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.264 [2024-11-17 09:36:54.939230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.264 qpair failed and we were unable to recover it. 00:36:50.264 [2024-11-17 09:36:54.939343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.264 [2024-11-17 09:36:54.939390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.264 qpair failed and we were unable to recover it. 00:36:50.264 [2024-11-17 09:36:54.939538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.264 [2024-11-17 09:36:54.939571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.264 qpair failed and we were unable to recover it. 00:36:50.264 [2024-11-17 09:36:54.939695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.264 [2024-11-17 09:36:54.939732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.264 qpair failed and we were unable to recover it. 00:36:50.264 [2024-11-17 09:36:54.939903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.264 [2024-11-17 09:36:54.939940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.264 qpair failed and we were unable to recover it. 00:36:50.264 [2024-11-17 09:36:54.940063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.264 [2024-11-17 09:36:54.940100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.264 qpair failed and we were unable to recover it. 00:36:50.264 [2024-11-17 09:36:54.940288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.264 [2024-11-17 09:36:54.940323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.264 qpair failed and we were unable to recover it. 00:36:50.264 [2024-11-17 09:36:54.940465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.264 [2024-11-17 09:36:54.940501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.264 qpair failed and we were unable to recover it. 
00:36:50.264 [2024-11-17 09:36:54.940690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.264 [2024-11-17 09:36:54.940742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.264 qpair failed and we were unable to recover it. 00:36:50.264 [2024-11-17 09:36:54.940845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.264 [2024-11-17 09:36:54.940878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.264 qpair failed and we were unable to recover it. 00:36:50.264 [2024-11-17 09:36:54.941078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.264 [2024-11-17 09:36:54.941148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.264 qpair failed and we were unable to recover it. 00:36:50.264 [2024-11-17 09:36:54.941261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.264 [2024-11-17 09:36:54.941295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.264 qpair failed and we were unable to recover it. 00:36:50.264 [2024-11-17 09:36:54.941436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.264 [2024-11-17 09:36:54.941490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.264 qpair failed and we were unable to recover it. 00:36:50.264 [2024-11-17 09:36:54.941623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.265 [2024-11-17 09:36:54.941657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.265 qpair failed and we were unable to recover it. 00:36:50.265 [2024-11-17 09:36:54.941799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.265 [2024-11-17 09:36:54.941833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.265 qpair failed and we were unable to recover it. 00:36:50.265 [2024-11-17 09:36:54.941979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.265 [2024-11-17 09:36:54.942014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.265 qpair failed and we were unable to recover it. 00:36:50.265 [2024-11-17 09:36:54.942177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.265 [2024-11-17 09:36:54.942210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.265 qpair failed and we were unable to recover it. 00:36:50.265 [2024-11-17 09:36:54.942302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.265 [2024-11-17 09:36:54.942335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.265 qpair failed and we were unable to recover it. 
00:36:50.265 [2024-11-17 09:36:54.942464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.265 [2024-11-17 09:36:54.942501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.265 qpair failed and we were unable to recover it. 00:36:50.265 [2024-11-17 09:36:54.942648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.265 [2024-11-17 09:36:54.942686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.265 qpair failed and we were unable to recover it. 00:36:50.265 [2024-11-17 09:36:54.942825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.265 [2024-11-17 09:36:54.942862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.265 qpair failed and we were unable to recover it. 00:36:50.265 [2024-11-17 09:36:54.943068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.265 [2024-11-17 09:36:54.943120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.265 qpair failed and we were unable to recover it. 00:36:50.265 [2024-11-17 09:36:54.943283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.265 [2024-11-17 09:36:54.943316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.265 qpair failed and we were unable to recover it. 00:36:50.265 [2024-11-17 09:36:54.943436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.265 [2024-11-17 09:36:54.943474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.265 qpair failed and we were unable to recover it. 00:36:50.265 [2024-11-17 09:36:54.943745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.265 [2024-11-17 09:36:54.943802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.265 qpair failed and we were unable to recover it. 00:36:50.265 [2024-11-17 09:36:54.943904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.265 [2024-11-17 09:36:54.943938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.265 qpair failed and we were unable to recover it. 00:36:50.265 [2024-11-17 09:36:54.944076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.265 [2024-11-17 09:36:54.944109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.265 qpair failed and we were unable to recover it. 00:36:50.265 [2024-11-17 09:36:54.944251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.265 [2024-11-17 09:36:54.944286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.265 qpair failed and we were unable to recover it. 
00:36:50.265 [2024-11-17 09:36:54.944418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.265 [2024-11-17 09:36:54.944452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.265 qpair failed and we were unable to recover it. 00:36:50.265 [2024-11-17 09:36:54.944579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.265 [2024-11-17 09:36:54.944612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.265 qpair failed and we were unable to recover it. 00:36:50.265 [2024-11-17 09:36:54.944830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.265 [2024-11-17 09:36:54.944866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.265 qpair failed and we were unable to recover it. 00:36:50.265 [2024-11-17 09:36:54.945065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.265 [2024-11-17 09:36:54.945125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.265 qpair failed and we were unable to recover it. 00:36:50.265 [2024-11-17 09:36:54.945270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.265 [2024-11-17 09:36:54.945307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.265 qpair failed and we were unable to recover it. 00:36:50.265 [2024-11-17 09:36:54.945491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.265 [2024-11-17 09:36:54.945525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.265 qpair failed and we were unable to recover it. 00:36:50.265 [2024-11-17 09:36:54.945709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.265 [2024-11-17 09:36:54.945762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.265 qpair failed and we were unable to recover it. 00:36:50.265 [2024-11-17 09:36:54.945896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.265 [2024-11-17 09:36:54.945935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.265 qpair failed and we were unable to recover it. 00:36:50.265 [2024-11-17 09:36:54.946115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.265 [2024-11-17 09:36:54.946153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.265 qpair failed and we were unable to recover it. 00:36:50.265 [2024-11-17 09:36:54.946269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.265 [2024-11-17 09:36:54.946319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.265 qpair failed and we were unable to recover it. 
00:36:50.265 [2024-11-17 09:36:54.946505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.265 [2024-11-17 09:36:54.946539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.265 qpair failed and we were unable to recover it. 00:36:50.265 [2024-11-17 09:36:54.946640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.265 [2024-11-17 09:36:54.946673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.265 qpair failed and we were unable to recover it. 00:36:50.265 [2024-11-17 09:36:54.946856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.265 [2024-11-17 09:36:54.946913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.265 qpair failed and we were unable to recover it. 00:36:50.265 [2024-11-17 09:36:54.947024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.265 [2024-11-17 09:36:54.947061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.265 qpair failed and we were unable to recover it. 00:36:50.265 [2024-11-17 09:36:54.947179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.265 [2024-11-17 09:36:54.947217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.265 qpair failed and we were unable to recover it. 00:36:50.265 [2024-11-17 09:36:54.947365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.265 [2024-11-17 09:36:54.947430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.265 qpair failed and we were unable to recover it. 00:36:50.265 [2024-11-17 09:36:54.947569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.265 [2024-11-17 09:36:54.947601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.265 qpair failed and we were unable to recover it. 00:36:50.265 [2024-11-17 09:36:54.947744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.265 [2024-11-17 09:36:54.947822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.265 qpair failed and we were unable to recover it. 00:36:50.265 [2024-11-17 09:36:54.948013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.265 [2024-11-17 09:36:54.948075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.265 qpair failed and we were unable to recover it. 00:36:50.265 [2024-11-17 09:36:54.948196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.265 [2024-11-17 09:36:54.948232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.265 qpair failed and we were unable to recover it. 
00:36:50.265 [2024-11-17 09:36:54.948402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.265 [2024-11-17 09:36:54.948450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.265 qpair failed and we were unable to recover it. 00:36:50.265 [2024-11-17 09:36:54.948595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.265 [2024-11-17 09:36:54.948629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.265 qpair failed and we were unable to recover it. 00:36:50.265 [2024-11-17 09:36:54.948809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.265 [2024-11-17 09:36:54.948852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.265 qpair failed and we were unable to recover it. 00:36:50.266 [2024-11-17 09:36:54.949044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.266 [2024-11-17 09:36:54.949103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.266 qpair failed and we were unable to recover it. 00:36:50.266 [2024-11-17 09:36:54.949276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.266 [2024-11-17 09:36:54.949309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.266 qpair failed and we were unable to recover it. 00:36:50.266 [2024-11-17 09:36:54.949454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.266 [2024-11-17 09:36:54.949488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.266 qpair failed and we were unable to recover it. 00:36:50.266 [2024-11-17 09:36:54.949643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.266 [2024-11-17 09:36:54.949697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.266 qpair failed and we were unable to recover it. 00:36:50.266 [2024-11-17 09:36:54.949871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.266 [2024-11-17 09:36:54.949907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.266 qpair failed and we were unable to recover it. 00:36:50.266 [2024-11-17 09:36:54.950078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.266 [2024-11-17 09:36:54.950115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.266 qpair failed and we were unable to recover it. 00:36:50.266 [2024-11-17 09:36:54.950226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.266 [2024-11-17 09:36:54.950262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.266 qpair failed and we were unable to recover it. 
00:36:50.266 [2024-11-17 09:36:54.950448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.266 [2024-11-17 09:36:54.950497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.266 qpair failed and we were unable to recover it. 00:36:50.266 [2024-11-17 09:36:54.950647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.266 [2024-11-17 09:36:54.950684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.266 qpair failed and we were unable to recover it. 00:36:50.266 [2024-11-17 09:36:54.950839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.266 [2024-11-17 09:36:54.950897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.266 qpair failed and we were unable to recover it. 00:36:50.266 [2024-11-17 09:36:54.951056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.266 [2024-11-17 09:36:54.951094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.266 qpair failed and we were unable to recover it. 00:36:50.266 [2024-11-17 09:36:54.951202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.266 [2024-11-17 09:36:54.951238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.266 qpair failed and we were unable to recover it. 00:36:50.266 [2024-11-17 09:36:54.951356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.266 [2024-11-17 09:36:54.951415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.266 qpair failed and we were unable to recover it. 00:36:50.266 [2024-11-17 09:36:54.951553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.266 [2024-11-17 09:36:54.951586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.266 qpair failed and we were unable to recover it. 00:36:50.266 [2024-11-17 09:36:54.951816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.266 [2024-11-17 09:36:54.951850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.266 qpair failed and we were unable to recover it. 00:36:50.266 [2024-11-17 09:36:54.951949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.266 [2024-11-17 09:36:54.952000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.266 qpair failed and we were unable to recover it. 00:36:50.266 [2024-11-17 09:36:54.952143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.266 [2024-11-17 09:36:54.952180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.266 qpair failed and we were unable to recover it. 
00:36:50.266 [2024-11-17 09:36:54.952297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.266 [2024-11-17 09:36:54.952333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.266 qpair failed and we were unable to recover it. 00:36:50.266 [2024-11-17 09:36:54.952538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.266 [2024-11-17 09:36:54.952586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.266 qpair failed and we were unable to recover it. 00:36:50.266 [2024-11-17 09:36:54.952778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.266 [2024-11-17 09:36:54.952818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.266 qpair failed and we were unable to recover it. 00:36:50.266 [2024-11-17 09:36:54.952938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.266 [2024-11-17 09:36:54.952988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.266 qpair failed and we were unable to recover it. 00:36:50.266 [2024-11-17 09:36:54.953148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.266 [2024-11-17 09:36:54.953186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.266 qpair failed and we were unable to recover it. 00:36:50.266 [2024-11-17 09:36:54.953346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.266 [2024-11-17 09:36:54.953391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.266 qpair failed and we were unable to recover it. 00:36:50.266 [2024-11-17 09:36:54.953510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.266 [2024-11-17 09:36:54.953544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.266 qpair failed and we were unable to recover it. 00:36:50.266 [2024-11-17 09:36:54.953738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.266 [2024-11-17 09:36:54.953771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.266 qpair failed and we were unable to recover it. 00:36:50.266 [2024-11-17 09:36:54.953912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.266 [2024-11-17 09:36:54.953944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.266 qpair failed and we were unable to recover it. 00:36:50.266 [2024-11-17 09:36:54.954074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.266 [2024-11-17 09:36:54.954111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.266 qpair failed and we were unable to recover it. 
00:36:50.266 [2024-11-17 09:36:54.954234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.266 [2024-11-17 09:36:54.954271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.266 qpair failed and we were unable to recover it. 00:36:50.266 [2024-11-17 09:36:54.954405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.266 [2024-11-17 09:36:54.954439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.266 qpair failed and we were unable to recover it. 00:36:50.266 [2024-11-17 09:36:54.954566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.266 [2024-11-17 09:36:54.954599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.266 qpair failed and we were unable to recover it. 00:36:50.266 [2024-11-17 09:36:54.954756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.266 [2024-11-17 09:36:54.954793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.266 qpair failed and we were unable to recover it. 00:36:50.266 [2024-11-17 09:36:54.954969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.266 [2024-11-17 09:36:54.955005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.266 qpair failed and we were unable to recover it. 00:36:50.266 [2024-11-17 09:36:54.955146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.266 [2024-11-17 09:36:54.955183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.266 qpair failed and we were unable to recover it. 00:36:50.266 [2024-11-17 09:36:54.955299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.266 [2024-11-17 09:36:54.955337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.266 qpair failed and we were unable to recover it. 00:36:50.266 [2024-11-17 09:36:54.955497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.266 [2024-11-17 09:36:54.955530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.266 qpair failed and we were unable to recover it. 00:36:50.266 [2024-11-17 09:36:54.955663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.266 [2024-11-17 09:36:54.955695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.266 qpair failed and we were unable to recover it. 00:36:50.266 [2024-11-17 09:36:54.955808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.266 [2024-11-17 09:36:54.955841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.266 qpair failed and we were unable to recover it. 
00:36:50.266 [2024-11-17 09:36:54.955991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.266 [2024-11-17 09:36:54.956027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.267 qpair failed and we were unable to recover it. 00:36:50.267 [2024-11-17 09:36:54.956177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.267 [2024-11-17 09:36:54.956214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.267 qpair failed and we were unable to recover it. 00:36:50.267 [2024-11-17 09:36:54.956353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.267 [2024-11-17 09:36:54.956419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.267 qpair failed and we were unable to recover it. 00:36:50.267 [2024-11-17 09:36:54.956614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.267 [2024-11-17 09:36:54.956681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.267 qpair failed and we were unable to recover it. 00:36:50.267 [2024-11-17 09:36:54.956811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.267 [2024-11-17 09:36:54.956848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.267 qpair failed and we were unable to recover it. 00:36:50.267 [2024-11-17 09:36:54.957007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.267 [2024-11-17 09:36:54.957057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.267 qpair failed and we were unable to recover it. 00:36:50.267 [2024-11-17 09:36:54.957221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.267 [2024-11-17 09:36:54.957255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.267 qpair failed and we were unable to recover it. 00:36:50.267 [2024-11-17 09:36:54.957365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.267 [2024-11-17 09:36:54.957404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.267 qpair failed and we were unable to recover it. 00:36:50.267 [2024-11-17 09:36:54.957541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.267 [2024-11-17 09:36:54.957595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.267 qpair failed and we were unable to recover it. 00:36:50.267 [2024-11-17 09:36:54.957807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.267 [2024-11-17 09:36:54.957859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.267 qpair failed and we were unable to recover it. 
00:36:50.267 [2024-11-17 09:36:54.957981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.267 [2024-11-17 09:36:54.958019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.267 qpair failed and we were unable to recover it. 00:36:50.267 [2024-11-17 09:36:54.958171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.267 [2024-11-17 09:36:54.958205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.267 qpair failed and we were unable to recover it. 00:36:50.267 [2024-11-17 09:36:54.958344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.267 [2024-11-17 09:36:54.958386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.267 qpair failed and we were unable to recover it. 00:36:50.267 [2024-11-17 09:36:54.958525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.267 [2024-11-17 09:36:54.958559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.267 qpair failed and we were unable to recover it. 00:36:50.267 [2024-11-17 09:36:54.958711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.267 [2024-11-17 09:36:54.958747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.267 qpair failed and we were unable to recover it. 00:36:50.267 [2024-11-17 09:36:54.958907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.267 [2024-11-17 09:36:54.958943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.267 qpair failed and we were unable to recover it. 00:36:50.267 [2024-11-17 09:36:54.959121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.267 [2024-11-17 09:36:54.959157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.267 qpair failed and we were unable to recover it. 00:36:50.267 [2024-11-17 09:36:54.959403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.267 [2024-11-17 09:36:54.959440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.267 qpair failed and we were unable to recover it. 00:36:50.267 [2024-11-17 09:36:54.959577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.267 [2024-11-17 09:36:54.959611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.267 qpair failed and we were unable to recover it. 00:36:50.267 [2024-11-17 09:36:54.959771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.267 [2024-11-17 09:36:54.959821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.267 qpair failed and we were unable to recover it. 
00:36:50.267 [2024-11-17 09:36:54.959975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.267 [2024-11-17 09:36:54.960027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.267 qpair failed and we were unable to recover it. 00:36:50.267 [2024-11-17 09:36:54.960174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.267 [2024-11-17 09:36:54.960222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.267 qpair failed and we were unable to recover it. 00:36:50.267 [2024-11-17 09:36:54.960336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.267 [2024-11-17 09:36:54.960378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.267 qpair failed and we were unable to recover it. 00:36:50.267 [2024-11-17 09:36:54.960545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.267 [2024-11-17 09:36:54.960579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.267 qpair failed and we were unable to recover it. 00:36:50.267 [2024-11-17 09:36:54.960757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.267 [2024-11-17 09:36:54.960828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.267 qpair failed and we were unable to recover it. 00:36:50.267 [2024-11-17 09:36:54.960975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.267 [2024-11-17 09:36:54.961012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.267 qpair failed and we were unable to recover it. 00:36:50.267 [2024-11-17 09:36:54.961189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.267 [2024-11-17 09:36:54.961226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.267 qpair failed and we were unable to recover it. 00:36:50.267 [2024-11-17 09:36:54.961395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.267 [2024-11-17 09:36:54.961430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.267 qpair failed and we were unable to recover it. 00:36:50.267 [2024-11-17 09:36:54.961559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.267 [2024-11-17 09:36:54.961596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.267 qpair failed and we were unable to recover it. 00:36:50.267 [2024-11-17 09:36:54.961770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.267 [2024-11-17 09:36:54.961808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.267 qpair failed and we were unable to recover it. 
00:36:50.267 [2024-11-17 09:36:54.961939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.267 [2024-11-17 09:36:54.961977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.267 qpair failed and we were unable to recover it. 00:36:50.267 [2024-11-17 09:36:54.962134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.267 [2024-11-17 09:36:54.962167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.267 qpair failed and we were unable to recover it. 00:36:50.267 [2024-11-17 09:36:54.962282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.267 [2024-11-17 09:36:54.962316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.268 qpair failed and we were unable to recover it. 00:36:50.268 [2024-11-17 09:36:54.962558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.268 [2024-11-17 09:36:54.962606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.268 qpair failed and we were unable to recover it. 00:36:50.268 [2024-11-17 09:36:54.962744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.268 [2024-11-17 09:36:54.962779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.268 qpair failed and we were unable to recover it. 00:36:50.268 [2024-11-17 09:36:54.962918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.268 [2024-11-17 09:36:54.962952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.268 qpair failed and we were unable to recover it. 00:36:50.268 [2024-11-17 09:36:54.963113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.268 [2024-11-17 09:36:54.963147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.268 qpair failed and we were unable to recover it. 00:36:50.268 [2024-11-17 09:36:54.963258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.268 [2024-11-17 09:36:54.963292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.268 qpair failed and we were unable to recover it. 00:36:50.268 [2024-11-17 09:36:54.963427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.268 [2024-11-17 09:36:54.963461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.268 qpair failed and we were unable to recover it. 00:36:50.268 [2024-11-17 09:36:54.963625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.268 [2024-11-17 09:36:54.963664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.268 qpair failed and we were unable to recover it. 
00:36:50.268 [2024-11-17 09:36:54.963806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.268 [2024-11-17 09:36:54.963843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.268 qpair failed and we were unable to recover it. 00:36:50.268 [2024-11-17 09:36:54.963996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.268 [2024-11-17 09:36:54.964034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.268 qpair failed and we were unable to recover it. 00:36:50.268 [2024-11-17 09:36:54.964223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.268 [2024-11-17 09:36:54.964264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.268 qpair failed and we were unable to recover it. 00:36:50.268 [2024-11-17 09:36:54.964374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.268 [2024-11-17 09:36:54.964410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.268 qpair failed and we were unable to recover it. 00:36:50.268 [2024-11-17 09:36:54.964569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.268 [2024-11-17 09:36:54.964617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.268 qpair failed and we were unable to recover it. 00:36:50.268 [2024-11-17 09:36:54.964729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.268 [2024-11-17 09:36:54.964783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.268 qpair failed and we were unable to recover it. 00:36:50.268 [2024-11-17 09:36:54.964931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.268 [2024-11-17 09:36:54.964969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.268 qpair failed and we were unable to recover it. 00:36:50.268 [2024-11-17 09:36:54.965168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.268 [2024-11-17 09:36:54.965205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.268 qpair failed and we were unable to recover it. 00:36:50.268 [2024-11-17 09:36:54.965383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.268 [2024-11-17 09:36:54.965435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.268 qpair failed and we were unable to recover it. 00:36:50.268 [2024-11-17 09:36:54.965570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.268 [2024-11-17 09:36:54.965621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.268 qpair failed and we were unable to recover it. 
00:36:50.268 [2024-11-17 09:36:54.965803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.268 [2024-11-17 09:36:54.965859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.268 qpair failed and we were unable to recover it. 00:36:50.268 [2024-11-17 09:36:54.966029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.268 [2024-11-17 09:36:54.966066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.268 qpair failed and we were unable to recover it. 00:36:50.268 [2024-11-17 09:36:54.966235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.268 [2024-11-17 09:36:54.966271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.268 qpair failed and we were unable to recover it. 00:36:50.268 [2024-11-17 09:36:54.966435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.268 [2024-11-17 09:36:54.966484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.268 qpair failed and we were unable to recover it. 00:36:50.268 [2024-11-17 09:36:54.966654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.268 [2024-11-17 09:36:54.966718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.268 qpair failed and we were unable to recover it. 00:36:50.268 [2024-11-17 09:36:54.966876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.268 [2024-11-17 09:36:54.966916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.268 qpair failed and we were unable to recover it. 00:36:50.268 [2024-11-17 09:36:54.967081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.268 [2024-11-17 09:36:54.967119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.268 qpair failed and we were unable to recover it. 00:36:50.268 [2024-11-17 09:36:54.967274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.268 [2024-11-17 09:36:54.967307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.268 qpair failed and we were unable to recover it. 00:36:50.268 [2024-11-17 09:36:54.967418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.268 [2024-11-17 09:36:54.967452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.268 qpair failed and we were unable to recover it. 00:36:50.268 [2024-11-17 09:36:54.967585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.268 [2024-11-17 09:36:54.967618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.268 qpair failed and we were unable to recover it. 
00:36:50.268 [2024-11-17 09:36:54.967753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.268 [2024-11-17 09:36:54.967790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.268 qpair failed and we were unable to recover it. 00:36:50.268 [2024-11-17 09:36:54.967963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.268 [2024-11-17 09:36:54.967999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.268 qpair failed and we were unable to recover it. 00:36:50.268 [2024-11-17 09:36:54.968117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.268 [2024-11-17 09:36:54.968155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.268 qpair failed and we were unable to recover it. 00:36:50.268 [2024-11-17 09:36:54.968351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.268 [2024-11-17 09:36:54.968409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.268 qpair failed and we were unable to recover it. 00:36:50.268 [2024-11-17 09:36:54.968581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.268 [2024-11-17 09:36:54.968616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.268 qpair failed and we were unable to recover it. 00:36:50.268 [2024-11-17 09:36:54.968748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.268 [2024-11-17 09:36:54.968802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.268 qpair failed and we were unable to recover it. 00:36:50.268 [2024-11-17 09:36:54.968996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.268 [2024-11-17 09:36:54.969049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.268 qpair failed and we were unable to recover it. 00:36:50.268 [2024-11-17 09:36:54.969211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.268 [2024-11-17 09:36:54.969275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.268 qpair failed and we were unable to recover it. 00:36:50.268 [2024-11-17 09:36:54.969384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.268 [2024-11-17 09:36:54.969421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.268 qpair failed and we were unable to recover it. 00:36:50.268 [2024-11-17 09:36:54.969593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.268 [2024-11-17 09:36:54.969627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.268 qpair failed and we were unable to recover it. 
00:36:50.268 [2024-11-17 09:36:54.969757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.269 [2024-11-17 09:36:54.969789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.269 qpair failed and we were unable to recover it. 00:36:50.269 [2024-11-17 09:36:54.969926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.269 [2024-11-17 09:36:54.969960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.269 qpair failed and we were unable to recover it. 00:36:50.269 [2024-11-17 09:36:54.970109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.269 [2024-11-17 09:36:54.970158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.269 qpair failed and we were unable to recover it. 00:36:50.269 [2024-11-17 09:36:54.970302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.269 [2024-11-17 09:36:54.970338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.269 qpair failed and we were unable to recover it. 00:36:50.269 [2024-11-17 09:36:54.970486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.269 [2024-11-17 09:36:54.970522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.269 qpair failed and we were unable to recover it. 00:36:50.269 [2024-11-17 09:36:54.970647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.269 [2024-11-17 09:36:54.970700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.269 qpair failed and we were unable to recover it. 00:36:50.269 [2024-11-17 09:36:54.970810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.269 [2024-11-17 09:36:54.970843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.269 qpair failed and we were unable to recover it. 00:36:50.269 [2024-11-17 09:36:54.970943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.269 [2024-11-17 09:36:54.970977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.269 qpair failed and we were unable to recover it. 00:36:50.269 [2024-11-17 09:36:54.971113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.269 [2024-11-17 09:36:54.971148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.269 qpair failed and we were unable to recover it. 00:36:50.269 [2024-11-17 09:36:54.971251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.269 [2024-11-17 09:36:54.971284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.269 qpair failed and we were unable to recover it. 
00:36:50.269 [2024-11-17 09:36:54.971441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.269 [2024-11-17 09:36:54.971489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.269 qpair failed and we were unable to recover it. 00:36:50.269 [2024-11-17 09:36:54.971625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.269 [2024-11-17 09:36:54.971678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.269 qpair failed and we were unable to recover it. 00:36:50.269 [2024-11-17 09:36:54.971812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.269 [2024-11-17 09:36:54.971851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.269 qpair failed and we were unable to recover it. 00:36:50.269 [2024-11-17 09:36:54.972012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.269 [2024-11-17 09:36:54.972046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.269 qpair failed and we were unable to recover it. 00:36:50.269 [2024-11-17 09:36:54.972208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.269 [2024-11-17 09:36:54.972241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.269 qpair failed and we were unable to recover it. 00:36:50.269 [2024-11-17 09:36:54.972375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.269 [2024-11-17 09:36:54.972409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.269 qpair failed and we were unable to recover it. 00:36:50.269 [2024-11-17 09:36:54.972557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.269 [2024-11-17 09:36:54.972608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.269 qpair failed and we were unable to recover it. 00:36:50.269 [2024-11-17 09:36:54.972738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.269 [2024-11-17 09:36:54.972772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.269 qpair failed and we were unable to recover it. 00:36:50.269 [2024-11-17 09:36:54.972898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.269 [2024-11-17 09:36:54.972947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.269 qpair failed and we were unable to recover it. 00:36:50.269 [2024-11-17 09:36:54.973071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.269 [2024-11-17 09:36:54.973107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.269 qpair failed and we were unable to recover it. 
00:36:50.269 [2024-11-17 09:36:54.973268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.269 [2024-11-17 09:36:54.973301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.269 qpair failed and we were unable to recover it. 00:36:50.269 [2024-11-17 09:36:54.973437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.269 [2024-11-17 09:36:54.973472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.269 qpair failed and we were unable to recover it. 00:36:50.269 [2024-11-17 09:36:54.973566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.269 [2024-11-17 09:36:54.973619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.269 qpair failed and we were unable to recover it. 00:36:50.269 [2024-11-17 09:36:54.973792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.269 [2024-11-17 09:36:54.973829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.269 qpair failed and we were unable to recover it. 00:36:50.269 [2024-11-17 09:36:54.974034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.269 [2024-11-17 09:36:54.974098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.269 qpair failed and we were unable to recover it. 00:36:50.269 [2024-11-17 09:36:54.974232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.269 [2024-11-17 09:36:54.974265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.269 qpair failed and we were unable to recover it. 00:36:50.269 [2024-11-17 09:36:54.974450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.269 [2024-11-17 09:36:54.974503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.269 qpair failed and we were unable to recover it. 00:36:50.269 [2024-11-17 09:36:54.974612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.269 [2024-11-17 09:36:54.974646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.269 qpair failed and we were unable to recover it. 00:36:50.269 [2024-11-17 09:36:54.974824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.269 [2024-11-17 09:36:54.974876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.269 qpair failed and we were unable to recover it. 00:36:50.269 [2024-11-17 09:36:54.974980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.269 [2024-11-17 09:36:54.975014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.269 qpair failed and we were unable to recover it. 
00:36:50.269 [2024-11-17 09:36:54.975181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.269 [2024-11-17 09:36:54.975216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:36:50.269 qpair failed and we were unable to recover it.
00:36:50.269 [2024-11-17 09:36:54.975554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.269 [2024-11-17 09:36:54.975607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:36:50.269 qpair failed and we were unable to recover it.
00:36:50.270 [2024-11-17 09:36:54.977795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.270 [2024-11-17 09:36:54.977875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:36:50.270 qpair failed and we were unable to recover it.
00:36:50.270 [2024-11-17 09:36:54.980556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.270 [2024-11-17 09:36:54.980610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:36:50.270 qpair failed and we were unable to recover it.
00:36:50.269 - 00:36:50.275 [... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x6150001ffe80 / 0x6150001f2f00 / 0x615000210000 / 0x61500021ff00 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for every connection attempt from 09:36:54.975 through 09:36:55.017 ...]
00:36:50.275 [2024-11-17 09:36:55.018107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.275 [2024-11-17 09:36:55.018147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.275 qpair failed and we were unable to recover it. 00:36:50.275 [2024-11-17 09:36:55.018296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.275 [2024-11-17 09:36:55.018333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.275 qpair failed and we were unable to recover it. 00:36:50.275 [2024-11-17 09:36:55.018494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.275 [2024-11-17 09:36:55.018527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.275 qpair failed and we were unable to recover it. 00:36:50.275 [2024-11-17 09:36:55.018658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.275 [2024-11-17 09:36:55.018691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.275 qpair failed and we were unable to recover it. 00:36:50.275 [2024-11-17 09:36:55.018867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.275 [2024-11-17 09:36:55.018925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.275 qpair failed and we were unable to recover it. 00:36:50.275 [2024-11-17 09:36:55.019125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.275 [2024-11-17 09:36:55.019190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.275 qpair failed and we were unable to recover it. 00:36:50.275 [2024-11-17 09:36:55.019339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.275 [2024-11-17 09:36:55.019384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.275 qpair failed and we were unable to recover it. 00:36:50.275 [2024-11-17 09:36:55.019544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.275 [2024-11-17 09:36:55.019577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.275 qpair failed and we were unable to recover it. 00:36:50.275 [2024-11-17 09:36:55.019698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.275 [2024-11-17 09:36:55.019735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.275 qpair failed and we were unable to recover it. 00:36:50.275 [2024-11-17 09:36:55.019852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.275 [2024-11-17 09:36:55.019890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.275 qpair failed and we were unable to recover it. 
00:36:50.275 [2024-11-17 09:36:55.019999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.275 [2024-11-17 09:36:55.020036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.275 qpair failed and we were unable to recover it. 00:36:50.275 [2024-11-17 09:36:55.020206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.275 [2024-11-17 09:36:55.020243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.275 qpair failed and we were unable to recover it. 00:36:50.275 [2024-11-17 09:36:55.020445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.275 [2024-11-17 09:36:55.020494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.275 qpair failed and we were unable to recover it. 00:36:50.275 [2024-11-17 09:36:55.020635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.275 [2024-11-17 09:36:55.020674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.275 qpair failed and we were unable to recover it. 00:36:50.275 [2024-11-17 09:36:55.020822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.275 [2024-11-17 09:36:55.020861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.275 qpair failed and we were unable to recover it. 00:36:50.275 [2024-11-17 09:36:55.021003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.275 [2024-11-17 09:36:55.021041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.275 qpair failed and we were unable to recover it. 00:36:50.275 [2024-11-17 09:36:55.021155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.275 [2024-11-17 09:36:55.021192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.275 qpair failed and we were unable to recover it. 00:36:50.275 [2024-11-17 09:36:55.021342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.275 [2024-11-17 09:36:55.021396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.275 qpair failed and we were unable to recover it. 00:36:50.275 [2024-11-17 09:36:55.021537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.275 [2024-11-17 09:36:55.021573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.275 qpair failed and we were unable to recover it. 00:36:50.275 [2024-11-17 09:36:55.021680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.275 [2024-11-17 09:36:55.021714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.275 qpair failed and we were unable to recover it. 
00:36:50.275 [2024-11-17 09:36:55.021866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.276 [2024-11-17 09:36:55.021917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.276 qpair failed and we were unable to recover it. 00:36:50.276 [2024-11-17 09:36:55.022076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.276 [2024-11-17 09:36:55.022128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.276 qpair failed and we were unable to recover it. 00:36:50.276 [2024-11-17 09:36:55.022273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.276 [2024-11-17 09:36:55.022322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.276 qpair failed and we were unable to recover it. 00:36:50.276 [2024-11-17 09:36:55.022448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.276 [2024-11-17 09:36:55.022484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.276 qpair failed and we were unable to recover it. 00:36:50.276 [2024-11-17 09:36:55.022618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.276 [2024-11-17 09:36:55.022651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.276 qpair failed and we were unable to recover it. 00:36:50.276 [2024-11-17 09:36:55.022781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.276 [2024-11-17 09:36:55.022817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.276 qpair failed and we were unable to recover it. 00:36:50.276 [2024-11-17 09:36:55.022971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.276 [2024-11-17 09:36:55.023033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.276 qpair failed and we were unable to recover it. 00:36:50.276 [2024-11-17 09:36:55.023324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.276 [2024-11-17 09:36:55.023422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.276 qpair failed and we were unable to recover it. 00:36:50.276 [2024-11-17 09:36:55.023604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.276 [2024-11-17 09:36:55.023644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.276 qpair failed and we were unable to recover it. 00:36:50.276 [2024-11-17 09:36:55.023841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.276 [2024-11-17 09:36:55.023911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.276 qpair failed and we were unable to recover it. 
00:36:50.276 [2024-11-17 09:36:55.024035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.276 [2024-11-17 09:36:55.024077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.276 qpair failed and we were unable to recover it. 00:36:50.276 [2024-11-17 09:36:55.024239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.276 [2024-11-17 09:36:55.024275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.276 qpair failed and we were unable to recover it. 00:36:50.276 [2024-11-17 09:36:55.024431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.276 [2024-11-17 09:36:55.024480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.276 qpair failed and we were unable to recover it. 00:36:50.276 [2024-11-17 09:36:55.024593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.276 [2024-11-17 09:36:55.024629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.276 qpair failed and we were unable to recover it. 00:36:50.276 [2024-11-17 09:36:55.024743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.276 [2024-11-17 09:36:55.024778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.276 qpair failed and we were unable to recover it. 00:36:50.276 [2024-11-17 09:36:55.024899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.276 [2024-11-17 09:36:55.024937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.276 qpair failed and we were unable to recover it. 00:36:50.276 [2024-11-17 09:36:55.025156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.276 [2024-11-17 09:36:55.025220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.276 qpair failed and we were unable to recover it. 00:36:50.276 [2024-11-17 09:36:55.025437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.276 [2024-11-17 09:36:55.025485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.276 qpair failed and we were unable to recover it. 00:36:50.276 [2024-11-17 09:36:55.025596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.276 [2024-11-17 09:36:55.025632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.276 qpair failed and we were unable to recover it. 00:36:50.276 [2024-11-17 09:36:55.025776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.276 [2024-11-17 09:36:55.025834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.276 qpair failed and we were unable to recover it. 
00:36:50.276 [2024-11-17 09:36:55.026032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.276 [2024-11-17 09:36:55.026087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.276 qpair failed and we were unable to recover it. 00:36:50.276 [2024-11-17 09:36:55.026196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.276 [2024-11-17 09:36:55.026230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.276 qpair failed and we were unable to recover it. 00:36:50.276 [2024-11-17 09:36:55.026380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.276 [2024-11-17 09:36:55.026428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.276 qpair failed and we were unable to recover it. 00:36:50.276 [2024-11-17 09:36:55.026574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.276 [2024-11-17 09:36:55.026629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.276 qpair failed and we were unable to recover it. 00:36:50.276 [2024-11-17 09:36:55.026877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.276 [2024-11-17 09:36:55.026916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.276 qpair failed and we were unable to recover it. 00:36:50.276 [2024-11-17 09:36:55.027113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.276 [2024-11-17 09:36:55.027180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.276 qpair failed and we were unable to recover it. 00:36:50.276 [2024-11-17 09:36:55.027362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.276 [2024-11-17 09:36:55.027423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.276 qpair failed and we were unable to recover it. 00:36:50.276 [2024-11-17 09:36:55.027561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.276 [2024-11-17 09:36:55.027596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.276 qpair failed and we were unable to recover it. 00:36:50.276 [2024-11-17 09:36:55.027819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.276 [2024-11-17 09:36:55.027858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.276 qpair failed and we were unable to recover it. 00:36:50.276 [2024-11-17 09:36:55.028050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.276 [2024-11-17 09:36:55.028087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.276 qpair failed and we were unable to recover it. 
00:36:50.276 [2024-11-17 09:36:55.028244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.276 [2024-11-17 09:36:55.028278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.276 qpair failed and we were unable to recover it. 00:36:50.276 [2024-11-17 09:36:55.028403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.276 [2024-11-17 09:36:55.028437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.276 qpair failed and we were unable to recover it. 00:36:50.276 [2024-11-17 09:36:55.028593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.276 [2024-11-17 09:36:55.028644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.276 qpair failed and we were unable to recover it. 00:36:50.276 [2024-11-17 09:36:55.028832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.276 [2024-11-17 09:36:55.028885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.276 qpair failed and we were unable to recover it. 00:36:50.276 [2024-11-17 09:36:55.029017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.277 [2024-11-17 09:36:55.029071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.277 qpair failed and we were unable to recover it. 00:36:50.277 [2024-11-17 09:36:55.029183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.277 [2024-11-17 09:36:55.029216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.277 qpair failed and we were unable to recover it. 00:36:50.277 [2024-11-17 09:36:55.029343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.277 [2024-11-17 09:36:55.029383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.277 qpair failed and we were unable to recover it. 00:36:50.277 [2024-11-17 09:36:55.029541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.277 [2024-11-17 09:36:55.029575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.277 qpair failed and we were unable to recover it. 00:36:50.277 [2024-11-17 09:36:55.029696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.277 [2024-11-17 09:36:55.029744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.277 qpair failed and we were unable to recover it. 00:36:50.277 [2024-11-17 09:36:55.029895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.277 [2024-11-17 09:36:55.029931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.277 qpair failed and we were unable to recover it. 
00:36:50.277 [2024-11-17 09:36:55.030083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.277 [2024-11-17 09:36:55.030132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.277 qpair failed and we were unable to recover it. 00:36:50.277 [2024-11-17 09:36:55.030284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.277 [2024-11-17 09:36:55.030319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.277 qpair failed and we were unable to recover it. 00:36:50.277 [2024-11-17 09:36:55.030477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.277 [2024-11-17 09:36:55.030525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.277 qpair failed and we were unable to recover it. 00:36:50.277 [2024-11-17 09:36:55.030637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.277 [2024-11-17 09:36:55.030691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.277 qpair failed and we were unable to recover it. 00:36:50.277 [2024-11-17 09:36:55.030868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.277 [2024-11-17 09:36:55.030936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.277 qpair failed and we were unable to recover it. 00:36:50.277 [2024-11-17 09:36:55.031194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.277 [2024-11-17 09:36:55.031258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.277 qpair failed and we were unable to recover it. 00:36:50.277 [2024-11-17 09:36:55.031387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.277 [2024-11-17 09:36:55.031442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.277 qpair failed and we were unable to recover it. 00:36:50.277 [2024-11-17 09:36:55.031591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.277 [2024-11-17 09:36:55.031628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.277 qpair failed and we were unable to recover it. 00:36:50.277 [2024-11-17 09:36:55.031771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.277 [2024-11-17 09:36:55.031808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.277 qpair failed and we were unable to recover it. 00:36:50.277 [2024-11-17 09:36:55.031930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.277 [2024-11-17 09:36:55.031967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.277 qpair failed and we were unable to recover it. 
00:36:50.277 [2024-11-17 09:36:55.032113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.277 [2024-11-17 09:36:55.032150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.277 qpair failed and we were unable to recover it. 00:36:50.277 [2024-11-17 09:36:55.032344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.277 [2024-11-17 09:36:55.032386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.277 qpair failed and we were unable to recover it. 00:36:50.277 [2024-11-17 09:36:55.032540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.277 [2024-11-17 09:36:55.032589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.277 qpair failed and we were unable to recover it. 00:36:50.277 [2024-11-17 09:36:55.032843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.277 [2024-11-17 09:36:55.032900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.277 qpair failed and we were unable to recover it. 00:36:50.277 [2024-11-17 09:36:55.033016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.277 [2024-11-17 09:36:55.033067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.277 qpair failed and we were unable to recover it. 00:36:50.277 [2024-11-17 09:36:55.033256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.277 [2024-11-17 09:36:55.033293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.277 qpair failed and we were unable to recover it. 00:36:50.277 [2024-11-17 09:36:55.033502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.277 [2024-11-17 09:36:55.033543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.277 qpair failed and we were unable to recover it. 00:36:50.277 [2024-11-17 09:36:55.033686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.277 [2024-11-17 09:36:55.033738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.277 qpair failed and we were unable to recover it. 00:36:50.277 [2024-11-17 09:36:55.033913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.277 [2024-11-17 09:36:55.033950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.277 qpair failed and we were unable to recover it. 00:36:50.277 [2024-11-17 09:36:55.034089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.277 [2024-11-17 09:36:55.034132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.277 qpair failed and we were unable to recover it. 
00:36:50.277 [2024-11-17 09:36:55.034237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.277 [2024-11-17 09:36:55.034274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.277 qpair failed and we were unable to recover it. 00:36:50.277 [2024-11-17 09:36:55.034402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.277 [2024-11-17 09:36:55.034435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.277 qpair failed and we were unable to recover it. 00:36:50.277 [2024-11-17 09:36:55.034581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.277 [2024-11-17 09:36:55.034616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.277 qpair failed and we were unable to recover it. 00:36:50.277 [2024-11-17 09:36:55.034856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.277 [2024-11-17 09:36:55.034908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.277 qpair failed and we were unable to recover it. 00:36:50.277 [2024-11-17 09:36:55.035135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.277 [2024-11-17 09:36:55.035203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.277 qpair failed and we were unable to recover it. 00:36:50.277 [2024-11-17 09:36:55.035346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.277 [2024-11-17 09:36:55.035394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.277 qpair failed and we were unable to recover it. 00:36:50.277 [2024-11-17 09:36:55.035522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.277 [2024-11-17 09:36:55.035557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.277 qpair failed and we were unable to recover it. 00:36:50.277 [2024-11-17 09:36:55.035743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.277 [2024-11-17 09:36:55.035780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.277 qpair failed and we were unable to recover it. 00:36:50.277 [2024-11-17 09:36:55.036028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.277 [2024-11-17 09:36:55.036086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.277 qpair failed and we were unable to recover it. 00:36:50.277 [2024-11-17 09:36:55.036261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.277 [2024-11-17 09:36:55.036298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.277 qpair failed and we were unable to recover it. 
00:36:50.277 [2024-11-17 09:36:55.036471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.277 [2024-11-17 09:36:55.036519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.277 qpair failed and we were unable to recover it. 00:36:50.277 [2024-11-17 09:36:55.036774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.277 [2024-11-17 09:36:55.036836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.277 qpair failed and we were unable to recover it. 00:36:50.278 [2024-11-17 09:36:55.037049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.278 [2024-11-17 09:36:55.037108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.278 qpair failed and we were unable to recover it. 00:36:50.278 [2024-11-17 09:36:55.037259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.278 [2024-11-17 09:36:55.037296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.278 qpair failed and we were unable to recover it. 00:36:50.278 [2024-11-17 09:36:55.037457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.278 [2024-11-17 09:36:55.037492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.278 qpair failed and we were unable to recover it. 00:36:50.278 [2024-11-17 09:36:55.037673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.278 [2024-11-17 09:36:55.037710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.278 qpair failed and we were unable to recover it. 00:36:50.278 [2024-11-17 09:36:55.037822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.278 [2024-11-17 09:36:55.037859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.278 qpair failed and we were unable to recover it. 00:36:50.278 [2024-11-17 09:36:55.038000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.278 [2024-11-17 09:36:55.038095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.278 qpair failed and we were unable to recover it. 00:36:50.278 [2024-11-17 09:36:55.038254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.278 [2024-11-17 09:36:55.038290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.278 qpair failed and we were unable to recover it. 00:36:50.278 [2024-11-17 09:36:55.038474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.278 [2024-11-17 09:36:55.038507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.278 qpair failed and we were unable to recover it. 
00:36:50.278 [2024-11-17 09:36:55.038617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.278 [2024-11-17 09:36:55.038667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.278 qpair failed and we were unable to recover it. 00:36:50.278 [2024-11-17 09:36:55.038936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.278 [2024-11-17 09:36:55.038993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.278 qpair failed and we were unable to recover it. 00:36:50.278 [2024-11-17 09:36:55.039110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.278 [2024-11-17 09:36:55.039148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.278 qpair failed and we were unable to recover it. 00:36:50.278 [2024-11-17 09:36:55.039328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.278 [2024-11-17 09:36:55.039362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.278 qpair failed and we were unable to recover it. 00:36:50.278 [2024-11-17 09:36:55.039552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.278 [2024-11-17 09:36:55.039601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.278 qpair failed and we were unable to recover it. 00:36:50.278 [2024-11-17 09:36:55.039757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.278 [2024-11-17 09:36:55.039794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.278 qpair failed and we were unable to recover it. 00:36:50.278 [2024-11-17 09:36:55.039937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.278 [2024-11-17 09:36:55.040005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.278 qpair failed and we were unable to recover it. 00:36:50.278 [2024-11-17 09:36:55.040143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.278 [2024-11-17 09:36:55.040181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.278 qpair failed and we were unable to recover it. 00:36:50.278 [2024-11-17 09:36:55.040342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.278 [2024-11-17 09:36:55.040382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.278 qpair failed and we were unable to recover it. 00:36:50.278 [2024-11-17 09:36:55.040555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.278 [2024-11-17 09:36:55.040589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.278 qpair failed and we were unable to recover it. 
00:36:50.278 [2024-11-17 09:36:55.040721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.278 [2024-11-17 09:36:55.040758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.278 qpair failed and we were unable to recover it. 00:36:50.278 [2024-11-17 09:36:55.040930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.278 [2024-11-17 09:36:55.040967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.278 qpair failed and we were unable to recover it. 00:36:50.278 [2024-11-17 09:36:55.041102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.278 [2024-11-17 09:36:55.041151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.278 qpair failed and we were unable to recover it. 00:36:50.278 [2024-11-17 09:36:55.041306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.278 [2024-11-17 09:36:55.041339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.278 qpair failed and we were unable to recover it. 00:36:50.278 [2024-11-17 09:36:55.041456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.278 [2024-11-17 09:36:55.041489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.278 qpair failed and we were unable to recover it. 00:36:50.278 [2024-11-17 09:36:55.041622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.278 [2024-11-17 09:36:55.041673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.278 qpair failed and we were unable to recover it. 00:36:50.278 [2024-11-17 09:36:55.041800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.278 [2024-11-17 09:36:55.041850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.278 qpair failed and we were unable to recover it. 00:36:50.278 [2024-11-17 09:36:55.041996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.278 [2024-11-17 09:36:55.042034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.278 qpair failed and we were unable to recover it. 00:36:50.278 [2024-11-17 09:36:55.042178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.278 [2024-11-17 09:36:55.042214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.278 qpair failed and we were unable to recover it. 00:36:50.278 [2024-11-17 09:36:55.042400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.278 [2024-11-17 09:36:55.042457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.278 qpair failed and we were unable to recover it. 
00:36:50.278 [2024-11-17 09:36:55.042644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.278 [2024-11-17 09:36:55.042681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.278 qpair failed and we were unable to recover it. 00:36:50.278 [2024-11-17 09:36:55.042843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.278 [2024-11-17 09:36:55.042897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.278 qpair failed and we were unable to recover it. 00:36:50.278 [2024-11-17 09:36:55.043029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.278 [2024-11-17 09:36:55.043067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.278 qpair failed and we were unable to recover it. 00:36:50.278 [2024-11-17 09:36:55.043191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.278 [2024-11-17 09:36:55.043228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.278 qpair failed and we were unable to recover it. 00:36:50.278 [2024-11-17 09:36:55.043371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.278 [2024-11-17 09:36:55.043423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.278 qpair failed and we were unable to recover it. 00:36:50.278 [2024-11-17 09:36:55.043561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.278 [2024-11-17 09:36:55.043594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.278 qpair failed and we were unable to recover it. 00:36:50.278 [2024-11-17 09:36:55.043762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.278 [2024-11-17 09:36:55.043799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.278 qpair failed and we were unable to recover it. 00:36:50.278 [2024-11-17 09:36:55.043974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.278 [2024-11-17 09:36:55.044010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.278 qpair failed and we were unable to recover it. 00:36:50.278 [2024-11-17 09:36:55.044150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.278 [2024-11-17 09:36:55.044202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.278 qpair failed and we were unable to recover it. 00:36:50.278 [2024-11-17 09:36:55.044324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.278 [2024-11-17 09:36:55.044362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.279 qpair failed and we were unable to recover it. 
00:36:50.279 [2024-11-17 09:36:55.044493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.279 [2024-11-17 09:36:55.044526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.279 qpair failed and we were unable to recover it. 00:36:50.279 [2024-11-17 09:36:55.044651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.279 [2024-11-17 09:36:55.044700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.279 qpair failed and we were unable to recover it. 00:36:50.279 [2024-11-17 09:36:55.044912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.279 [2024-11-17 09:36:55.044964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.279 qpair failed and we were unable to recover it. 00:36:50.279 [2024-11-17 09:36:55.045105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.279 [2024-11-17 09:36:55.045145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.279 qpair failed and we were unable to recover it. 00:36:50.279 [2024-11-17 09:36:55.045271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.279 [2024-11-17 09:36:55.045308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.279 qpair failed and we were unable to recover it. 00:36:50.279 [2024-11-17 09:36:55.045504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.279 [2024-11-17 09:36:55.045542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.279 qpair failed and we were unable to recover it. 00:36:50.279 [2024-11-17 09:36:55.045696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.279 [2024-11-17 09:36:55.045748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.279 qpair failed and we were unable to recover it. 00:36:50.279 [2024-11-17 09:36:55.045899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.279 [2024-11-17 09:36:55.045952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.279 qpair failed and we were unable to recover it. 00:36:50.279 [2024-11-17 09:36:55.046132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.279 [2024-11-17 09:36:55.046191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.279 qpair failed and we were unable to recover it. 00:36:50.279 [2024-11-17 09:36:55.046332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.279 [2024-11-17 09:36:55.046373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.279 qpair failed and we were unable to recover it. 
00:36:50.279 [2024-11-17 09:36:55.046524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.279 [2024-11-17 09:36:55.046564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.279 qpair failed and we were unable to recover it. 00:36:50.279 [2024-11-17 09:36:55.046711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.279 [2024-11-17 09:36:55.046750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.279 qpair failed and we were unable to recover it. 00:36:50.279 [2024-11-17 09:36:55.046866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.279 [2024-11-17 09:36:55.046903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.279 qpair failed and we were unable to recover it. 00:36:50.279 [2024-11-17 09:36:55.047105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.279 [2024-11-17 09:36:55.047142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.279 qpair failed and we were unable to recover it. 00:36:50.279 [2024-11-17 09:36:55.047287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.279 [2024-11-17 09:36:55.047325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.279 qpair failed and we were unable to recover it. 00:36:50.279 [2024-11-17 09:36:55.047474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.279 [2024-11-17 09:36:55.047522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.279 qpair failed and we were unable to recover it. 00:36:50.279 [2024-11-17 09:36:55.047660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.279 [2024-11-17 09:36:55.047714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.279 qpair failed and we were unable to recover it. 00:36:50.279 [2024-11-17 09:36:55.047861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.279 [2024-11-17 09:36:55.047913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.279 qpair failed and we were unable to recover it. 00:36:50.279 [2024-11-17 09:36:55.048112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.279 [2024-11-17 09:36:55.048170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.279 qpair failed and we were unable to recover it. 00:36:50.279 [2024-11-17 09:36:55.048311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.279 [2024-11-17 09:36:55.048345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.279 qpair failed and we were unable to recover it. 
00:36:50.279 [2024-11-17 09:36:55.048554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.279 [2024-11-17 09:36:55.048608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.279 qpair failed and we were unable to recover it. 00:36:50.279 [2024-11-17 09:36:55.048842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.279 [2024-11-17 09:36:55.048882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.279 qpair failed and we were unable to recover it. 00:36:50.279 [2024-11-17 09:36:55.049086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.279 [2024-11-17 09:36:55.049146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.279 qpair failed and we were unable to recover it. 00:36:50.279 [2024-11-17 09:36:55.049304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.279 [2024-11-17 09:36:55.049338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.279 qpair failed and we were unable to recover it. 00:36:50.279 [2024-11-17 09:36:55.049526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.279 [2024-11-17 09:36:55.049574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.279 qpair failed and we were unable to recover it. 00:36:50.279 [2024-11-17 09:36:55.049739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.279 [2024-11-17 09:36:55.049778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.279 qpair failed and we were unable to recover it. 00:36:50.279 [2024-11-17 09:36:55.049907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.279 [2024-11-17 09:36:55.049959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.279 qpair failed and we were unable to recover it. 00:36:50.279 [2024-11-17 09:36:55.050166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.279 [2024-11-17 09:36:55.050225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.279 qpair failed and we were unable to recover it. 00:36:50.279 [2024-11-17 09:36:55.050390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.279 [2024-11-17 09:36:55.050460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.279 qpair failed and we were unable to recover it. 00:36:50.279 [2024-11-17 09:36:55.050611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.279 [2024-11-17 09:36:55.050653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.279 qpair failed and we were unable to recover it. 
00:36:50.279 [2024-11-17 09:36:55.050761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.279 [2024-11-17 09:36:55.050795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.279 qpair failed and we were unable to recover it. 00:36:50.279 [2024-11-17 09:36:55.050925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.279 [2024-11-17 09:36:55.050959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.279 qpair failed and we were unable to recover it. 00:36:50.279 [2024-11-17 09:36:55.051082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.279 [2024-11-17 09:36:55.051131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.279 qpair failed and we were unable to recover it. 00:36:50.279 [2024-11-17 09:36:55.051296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.279 [2024-11-17 09:36:55.051344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.279 qpair failed and we were unable to recover it. 00:36:50.279 [2024-11-17 09:36:55.051490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.279 [2024-11-17 09:36:55.051525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.279 qpair failed and we were unable to recover it. 00:36:50.279 [2024-11-17 09:36:55.051666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.279 [2024-11-17 09:36:55.051700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.279 qpair failed and we were unable to recover it. 00:36:50.279 [2024-11-17 09:36:55.051855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.279 [2024-11-17 09:36:55.051892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.279 qpair failed and we were unable to recover it. 00:36:50.280 [2024-11-17 09:36:55.052095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.280 [2024-11-17 09:36:55.052155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.280 qpair failed and we were unable to recover it. 00:36:50.280 [2024-11-17 09:36:55.052305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.280 [2024-11-17 09:36:55.052345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.280 qpair failed and we were unable to recover it. 00:36:50.280 [2024-11-17 09:36:55.052525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.280 [2024-11-17 09:36:55.052564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.280 qpair failed and we were unable to recover it. 
00:36:50.280 [2024-11-17 09:36:55.052733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.280 [2024-11-17 09:36:55.052775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.280 qpair failed and we were unable to recover it. 00:36:50.280 [2024-11-17 09:36:55.053032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.280 [2024-11-17 09:36:55.053091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.280 qpair failed and we were unable to recover it. 00:36:50.280 [2024-11-17 09:36:55.053245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.280 [2024-11-17 09:36:55.053314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.280 qpair failed and we were unable to recover it. 00:36:50.280 [2024-11-17 09:36:55.053485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.280 [2024-11-17 09:36:55.053520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.280 qpair failed and we were unable to recover it. 00:36:50.280 [2024-11-17 09:36:55.053705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.280 [2024-11-17 09:36:55.053758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.280 qpair failed and we were unable to recover it. 00:36:50.280 [2024-11-17 09:36:55.053985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.280 [2024-11-17 09:36:55.054047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.280 qpair failed and we were unable to recover it. 00:36:50.280 [2024-11-17 09:36:55.054228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.280 [2024-11-17 09:36:55.054266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.280 qpair failed and we were unable to recover it. 00:36:50.280 [2024-11-17 09:36:55.054417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.280 [2024-11-17 09:36:55.054452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.280 qpair failed and we were unable to recover it. 00:36:50.280 [2024-11-17 09:36:55.054586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.280 [2024-11-17 09:36:55.054619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.280 qpair failed and we were unable to recover it. 00:36:50.280 [2024-11-17 09:36:55.054774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.280 [2024-11-17 09:36:55.054811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.280 qpair failed and we were unable to recover it. 
00:36:50.280 [2024-11-17 09:36:55.054964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.280 [2024-11-17 09:36:55.055033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.280 qpair failed and we were unable to recover it. 00:36:50.280 [2024-11-17 09:36:55.055275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.280 [2024-11-17 09:36:55.055312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.280 qpair failed and we were unable to recover it. 00:36:50.280 [2024-11-17 09:36:55.055472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.280 [2024-11-17 09:36:55.055521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.280 qpair failed and we were unable to recover it. 00:36:50.280 [2024-11-17 09:36:55.055701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.280 [2024-11-17 09:36:55.055738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.280 qpair failed and we were unable to recover it. 00:36:50.280 [2024-11-17 09:36:55.055886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.280 [2024-11-17 09:36:55.055946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.280 qpair failed and we were unable to recover it. 00:36:50.280 [2024-11-17 09:36:55.056200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.280 [2024-11-17 09:36:55.056256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.280 qpair failed and we were unable to recover it. 00:36:50.280 [2024-11-17 09:36:55.056415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.280 [2024-11-17 09:36:55.056451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.280 qpair failed and we were unable to recover it. 00:36:50.280 [2024-11-17 09:36:55.056628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.280 [2024-11-17 09:36:55.056677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.280 qpair failed and we were unable to recover it. 00:36:50.280 [2024-11-17 09:36:55.056820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.280 [2024-11-17 09:36:55.056856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.280 qpair failed and we were unable to recover it. 00:36:50.280 [2024-11-17 09:36:55.057016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.280 [2024-11-17 09:36:55.057054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.280 qpair failed and we were unable to recover it. 
00:36:50.280 [2024-11-17 09:36:55.057200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.280 [2024-11-17 09:36:55.057239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.280 qpair failed and we were unable to recover it. 00:36:50.280 [2024-11-17 09:36:55.057404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.280 [2024-11-17 09:36:55.057452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.280 qpair failed and we were unable to recover it. 00:36:50.280 [2024-11-17 09:36:55.057578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.280 [2024-11-17 09:36:55.057615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.280 qpair failed and we were unable to recover it. 00:36:50.280 [2024-11-17 09:36:55.057749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.280 [2024-11-17 09:36:55.057788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.280 qpair failed and we were unable to recover it. 00:36:50.280 [2024-11-17 09:36:55.057960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.280 [2024-11-17 09:36:55.057997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.280 qpair failed and we were unable to recover it. 00:36:50.280 [2024-11-17 09:36:55.058185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.280 [2024-11-17 09:36:55.058252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.280 qpair failed and we were unable to recover it. 00:36:50.280 [2024-11-17 09:36:55.058379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.280 [2024-11-17 09:36:55.058414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.280 qpair failed and we were unable to recover it. 00:36:50.280 [2024-11-17 09:36:55.058519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.280 [2024-11-17 09:36:55.058553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.280 qpair failed and we were unable to recover it. 00:36:50.280 [2024-11-17 09:36:55.058699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.280 [2024-11-17 09:36:55.058736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.280 qpair failed and we were unable to recover it. 00:36:50.280 [2024-11-17 09:36:55.058877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.280 [2024-11-17 09:36:55.058932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.280 qpair failed and we were unable to recover it. 
00:36:50.280 [2024-11-17 09:36:55.059067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.280 [2024-11-17 09:36:55.059106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.280 qpair failed and we were unable to recover it. 00:36:50.280 [2024-11-17 09:36:55.059283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.280 [2024-11-17 09:36:55.059321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.280 qpair failed and we were unable to recover it. 00:36:50.280 [2024-11-17 09:36:55.059458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.280 [2024-11-17 09:36:55.059493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.280 qpair failed and we were unable to recover it. 00:36:50.280 [2024-11-17 09:36:55.059629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.280 [2024-11-17 09:36:55.059664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.280 qpair failed and we were unable to recover it. 00:36:50.281 [2024-11-17 09:36:55.059782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.281 [2024-11-17 09:36:55.059820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.281 qpair failed and we were unable to recover it. 00:36:50.281 [2024-11-17 09:36:55.059934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.281 [2024-11-17 09:36:55.059972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.281 qpair failed and we were unable to recover it. 00:36:50.281 [2024-11-17 09:36:55.060124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.281 [2024-11-17 09:36:55.060162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.281 qpair failed and we were unable to recover it. 00:36:50.281 [2024-11-17 09:36:55.060307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.281 [2024-11-17 09:36:55.060344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.281 qpair failed and we were unable to recover it. 00:36:50.281 [2024-11-17 09:36:55.060488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.281 [2024-11-17 09:36:55.060525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.281 qpair failed and we were unable to recover it. 00:36:50.281 [2024-11-17 09:36:55.060709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.281 [2024-11-17 09:36:55.060762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.281 qpair failed and we were unable to recover it. 
00:36:50.281 [2024-11-17 09:36:55.060882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.281 [2024-11-17 09:36:55.060936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.281 qpair failed and we were unable to recover it. 00:36:50.281 [2024-11-17 09:36:55.061121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.281 [2024-11-17 09:36:55.061172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.281 qpair failed and we were unable to recover it. 00:36:50.281 [2024-11-17 09:36:55.061337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.281 [2024-11-17 09:36:55.061378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.281 qpair failed and we were unable to recover it. 00:36:50.281 [2024-11-17 09:36:55.061524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.281 [2024-11-17 09:36:55.061560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.281 qpair failed and we were unable to recover it. 00:36:50.281 [2024-11-17 09:36:55.061671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.281 [2024-11-17 09:36:55.061706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.281 qpair failed and we were unable to recover it. 00:36:50.281 [2024-11-17 09:36:55.061891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.281 [2024-11-17 09:36:55.061929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.281 qpair failed and we were unable to recover it. 00:36:50.281 [2024-11-17 09:36:55.062061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.281 [2024-11-17 09:36:55.062115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.281 qpair failed and we were unable to recover it. 00:36:50.281 [2024-11-17 09:36:55.062286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.281 [2024-11-17 09:36:55.062322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.281 qpair failed and we were unable to recover it. 00:36:50.281 [2024-11-17 09:36:55.062464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.281 [2024-11-17 09:36:55.062513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.281 qpair failed and we were unable to recover it. 00:36:50.281 [2024-11-17 09:36:55.062627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.281 [2024-11-17 09:36:55.062664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.281 qpair failed and we were unable to recover it. 
00:36:50.281 [2024-11-17 09:36:55.062858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.281 [2024-11-17 09:36:55.062916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.281 qpair failed and we were unable to recover it. 00:36:50.281 [2024-11-17 09:36:55.063114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.281 [2024-11-17 09:36:55.063174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.281 qpair failed and we were unable to recover it. 00:36:50.281 [2024-11-17 09:36:55.063324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.281 [2024-11-17 09:36:55.063359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.281 qpair failed and we were unable to recover it. 00:36:50.281 [2024-11-17 09:36:55.063475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.281 [2024-11-17 09:36:55.063509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.281 qpair failed and we were unable to recover it. 00:36:50.281 [2024-11-17 09:36:55.063662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.281 [2024-11-17 09:36:55.063699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.281 qpair failed and we were unable to recover it. 00:36:50.281 [2024-11-17 09:36:55.063871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.281 [2024-11-17 09:36:55.063908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.281 qpair failed and we were unable to recover it. 00:36:50.281 [2024-11-17 09:36:55.064088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.281 [2024-11-17 09:36:55.064125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.281 qpair failed and we were unable to recover it. 00:36:50.281 [2024-11-17 09:36:55.064288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.281 [2024-11-17 09:36:55.064342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.281 qpair failed and we were unable to recover it. 00:36:50.281 [2024-11-17 09:36:55.064507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.281 [2024-11-17 09:36:55.064555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.281 qpair failed and we were unable to recover it. 00:36:50.281 [2024-11-17 09:36:55.064745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.281 [2024-11-17 09:36:55.064813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.281 qpair failed and we were unable to recover it. 
00:36:50.281 [2024-11-17 09:36:55.065024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.281 [2024-11-17 09:36:55.065084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.281 qpair failed and we were unable to recover it. 00:36:50.281 [2024-11-17 09:36:55.065248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.281 [2024-11-17 09:36:55.065283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.281 qpair failed and we were unable to recover it. 00:36:50.281 [2024-11-17 09:36:55.065418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.281 [2024-11-17 09:36:55.065454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.281 qpair failed and we were unable to recover it. 00:36:50.281 [2024-11-17 09:36:55.065568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.281 [2024-11-17 09:36:55.065603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.281 qpair failed and we were unable to recover it. 00:36:50.281 [2024-11-17 09:36:55.065719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.281 [2024-11-17 09:36:55.065753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.281 qpair failed and we were unable to recover it. 00:36:50.281 [2024-11-17 09:36:55.065914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.281 [2024-11-17 09:36:55.065969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.281 qpair failed and we were unable to recover it. 00:36:50.281 [2024-11-17 09:36:55.066219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.281 [2024-11-17 09:36:55.066274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.281 qpair failed and we were unable to recover it. 00:36:50.281 [2024-11-17 09:36:55.066405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.281 [2024-11-17 09:36:55.066440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.281 qpair failed and we were unable to recover it. 00:36:50.281 [2024-11-17 09:36:55.066625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.282 [2024-11-17 09:36:55.066678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.282 qpair failed and we were unable to recover it. 00:36:50.282 [2024-11-17 09:36:55.066868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.282 [2024-11-17 09:36:55.066947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.282 qpair failed and we were unable to recover it. 
00:36:50.282 [2024-11-17 09:36:55.067159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.282 [2024-11-17 09:36:55.067195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.282 qpair failed and we were unable to recover it. 00:36:50.282 [2024-11-17 09:36:55.067334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.282 [2024-11-17 09:36:55.067376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.282 qpair failed and we were unable to recover it. 00:36:50.282 [2024-11-17 09:36:55.067520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.282 [2024-11-17 09:36:55.067555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.282 qpair failed and we were unable to recover it. 00:36:50.282 [2024-11-17 09:36:55.067681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.282 [2024-11-17 09:36:55.067747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.282 qpair failed and we were unable to recover it. 00:36:50.282 [2024-11-17 09:36:55.067928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.282 [2024-11-17 09:36:55.067968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.282 qpair failed and we were unable to recover it. 00:36:50.282 [2024-11-17 09:36:55.068088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.282 [2024-11-17 09:36:55.068126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.282 qpair failed and we were unable to recover it. 00:36:50.282 [2024-11-17 09:36:55.068240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.282 [2024-11-17 09:36:55.068277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.282 qpair failed and we were unable to recover it. 00:36:50.282 [2024-11-17 09:36:55.068451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.282 [2024-11-17 09:36:55.068486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.282 qpair failed and we were unable to recover it. 00:36:50.282 [2024-11-17 09:36:55.068590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.282 [2024-11-17 09:36:55.068623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.282 qpair failed and we were unable to recover it. 00:36:50.282 [2024-11-17 09:36:55.068806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.282 [2024-11-17 09:36:55.068843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.282 qpair failed and we were unable to recover it. 
00:36:50.282 [2024-11-17 09:36:55.069009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.282 [2024-11-17 09:36:55.069046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.282 qpair failed and we were unable to recover it. 00:36:50.282 [2024-11-17 09:36:55.069183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.282 [2024-11-17 09:36:55.069234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.282 qpair failed and we were unable to recover it. 00:36:50.282 [2024-11-17 09:36:55.069421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.282 [2024-11-17 09:36:55.069470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.282 qpair failed and we were unable to recover it. 00:36:50.282 [2024-11-17 09:36:55.069674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.282 [2024-11-17 09:36:55.069715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.282 qpair failed and we were unable to recover it. 00:36:50.282 [2024-11-17 09:36:55.069858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.282 [2024-11-17 09:36:55.069925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.282 qpair failed and we were unable to recover it. 00:36:50.282 [2024-11-17 09:36:55.070083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.282 [2024-11-17 09:36:55.070143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.282 qpair failed and we were unable to recover it. 00:36:50.282 [2024-11-17 09:36:55.070315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.282 [2024-11-17 09:36:55.070363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.282 qpair failed and we were unable to recover it. 00:36:50.282 [2024-11-17 09:36:55.070548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.282 [2024-11-17 09:36:55.070596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.282 qpair failed and we were unable to recover it. 00:36:50.282 [2024-11-17 09:36:55.070772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.282 [2024-11-17 09:36:55.070849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.282 qpair failed and we were unable to recover it. 00:36:50.282 [2024-11-17 09:36:55.071047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.282 [2024-11-17 09:36:55.071097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.282 qpair failed and we were unable to recover it. 
00:36:50.282 [2024-11-17 09:36:55.071298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.282 [2024-11-17 09:36:55.071335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.282 qpair failed and we were unable to recover it. 00:36:50.282 [2024-11-17 09:36:55.071467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.282 [2024-11-17 09:36:55.071501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.282 qpair failed and we were unable to recover it. 00:36:50.282 [2024-11-17 09:36:55.071641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.282 [2024-11-17 09:36:55.071681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.282 qpair failed and we were unable to recover it. 00:36:50.282 [2024-11-17 09:36:55.071873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.282 [2024-11-17 09:36:55.071942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.282 qpair failed and we were unable to recover it. 00:36:50.282 [2024-11-17 09:36:55.072180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.282 [2024-11-17 09:36:55.072218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.282 qpair failed and we were unable to recover it. 00:36:50.282 [2024-11-17 09:36:55.072402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.282 [2024-11-17 09:36:55.072441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.282 qpair failed and we were unable to recover it. 00:36:50.282 [2024-11-17 09:36:55.072601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.282 [2024-11-17 09:36:55.072649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.282 qpair failed and we were unable to recover it. 00:36:50.282 [2024-11-17 09:36:55.072863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.282 [2024-11-17 09:36:55.072917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.282 qpair failed and we were unable to recover it. 00:36:50.282 [2024-11-17 09:36:55.073187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.282 [2024-11-17 09:36:55.073247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.282 qpair failed and we were unable to recover it. 00:36:50.282 [2024-11-17 09:36:55.073398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.282 [2024-11-17 09:36:55.073433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.282 qpair failed and we were unable to recover it. 
00:36:50.282 [2024-11-17 09:36:55.073538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.282 [2024-11-17 09:36:55.073572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.282 qpair failed and we were unable to recover it. 00:36:50.282 [2024-11-17 09:36:55.073705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.282 [2024-11-17 09:36:55.073738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.282 qpair failed and we were unable to recover it. 00:36:50.282 [2024-11-17 09:36:55.073873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.282 [2024-11-17 09:36:55.073909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.282 qpair failed and we were unable to recover it. 00:36:50.282 [2024-11-17 09:36:55.074119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.282 [2024-11-17 09:36:55.074180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.282 qpair failed and we were unable to recover it. 00:36:50.282 [2024-11-17 09:36:55.074328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.282 [2024-11-17 09:36:55.074365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.282 qpair failed and we were unable to recover it. 00:36:50.282 [2024-11-17 09:36:55.074536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.282 [2024-11-17 09:36:55.074572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.282 qpair failed and we were unable to recover it. 00:36:50.282 [2024-11-17 09:36:55.074790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.283 [2024-11-17 09:36:55.074828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.283 qpair failed and we were unable to recover it. 00:36:50.283 [2024-11-17 09:36:55.075004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.283 [2024-11-17 09:36:55.075043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.283 qpair failed and we were unable to recover it. 00:36:50.283 [2024-11-17 09:36:55.075186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.283 [2024-11-17 09:36:55.075224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.283 qpair failed and we were unable to recover it. 00:36:50.283 [2024-11-17 09:36:55.075390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.283 [2024-11-17 09:36:55.075444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.283 qpair failed and we were unable to recover it. 
00:36:50.283 [2024-11-17 09:36:55.075582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.283 [2024-11-17 09:36:55.075616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.283 qpair failed and we were unable to recover it. 00:36:50.283 [2024-11-17 09:36:55.075756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.283 [2024-11-17 09:36:55.075790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.283 qpair failed and we were unable to recover it. 00:36:50.283 [2024-11-17 09:36:55.075942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.283 [2024-11-17 09:36:55.075979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.283 qpair failed and we were unable to recover it. 00:36:50.283 [2024-11-17 09:36:55.076155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.283 [2024-11-17 09:36:55.076193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.283 qpair failed and we were unable to recover it. 00:36:50.283 [2024-11-17 09:36:55.076345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.283 [2024-11-17 09:36:55.076412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.283 qpair failed and we were unable to recover it. 00:36:50.283 [2024-11-17 09:36:55.076526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.283 [2024-11-17 09:36:55.076565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.283 qpair failed and we were unable to recover it. 00:36:50.283 [2024-11-17 09:36:55.076713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.283 [2024-11-17 09:36:55.076761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.283 qpair failed and we were unable to recover it. 00:36:50.283 [2024-11-17 09:36:55.076897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.283 [2024-11-17 09:36:55.076937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.283 qpair failed and we were unable to recover it. 00:36:50.283 [2024-11-17 09:36:55.077051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.283 [2024-11-17 09:36:55.077089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.283 qpair failed and we were unable to recover it. 00:36:50.283 [2024-11-17 09:36:55.077241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.283 [2024-11-17 09:36:55.077279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.283 qpair failed and we were unable to recover it. 
00:36:50.283 [2024-11-17 09:36:55.077450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.283 [2024-11-17 09:36:55.077486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.283 qpair failed and we were unable to recover it. 00:36:50.283 [2024-11-17 09:36:55.077619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.283 [2024-11-17 09:36:55.077653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.283 qpair failed and we were unable to recover it. 00:36:50.283 [2024-11-17 09:36:55.077819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.283 [2024-11-17 09:36:55.077872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.283 qpair failed and we were unable to recover it. 00:36:50.283 [2024-11-17 09:36:55.078002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.283 [2024-11-17 09:36:55.078055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.283 qpair failed and we were unable to recover it. 00:36:50.283 [2024-11-17 09:36:55.078190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.283 [2024-11-17 09:36:55.078225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.283 qpair failed and we were unable to recover it. 00:36:50.283 [2024-11-17 09:36:55.078358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.283 [2024-11-17 09:36:55.078407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.283 qpair failed and we were unable to recover it. 00:36:50.283 [2024-11-17 09:36:55.078587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.283 [2024-11-17 09:36:55.078641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.283 qpair failed and we were unable to recover it. 00:36:50.283 [2024-11-17 09:36:55.078822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.283 [2024-11-17 09:36:55.078914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.283 qpair failed and we were unable to recover it. 00:36:50.283 [2024-11-17 09:36:55.079130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.283 [2024-11-17 09:36:55.079189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.283 qpair failed and we were unable to recover it. 00:36:50.283 [2024-11-17 09:36:55.079318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.283 [2024-11-17 09:36:55.079352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.283 qpair failed and we were unable to recover it. 
00:36:50.283 [2024-11-17 09:36:55.079463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.283 [2024-11-17 09:36:55.079496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.283 qpair failed and we were unable to recover it. 00:36:50.283 [2024-11-17 09:36:55.079604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.283 [2024-11-17 09:36:55.079637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.283 qpair failed and we were unable to recover it. 00:36:50.283 [2024-11-17 09:36:55.079754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.283 [2024-11-17 09:36:55.079789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.283 qpair failed and we were unable to recover it. 00:36:50.283 [2024-11-17 09:36:55.079957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.283 [2024-11-17 09:36:55.080020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.283 qpair failed and we were unable to recover it. 00:36:50.283 [2024-11-17 09:36:55.080130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.283 [2024-11-17 09:36:55.080163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.283 qpair failed and we were unable to recover it. 00:36:50.283 [2024-11-17 09:36:55.080269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.283 [2024-11-17 09:36:55.080303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.283 qpair failed and we were unable to recover it. 00:36:50.283 [2024-11-17 09:36:55.080495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.283 [2024-11-17 09:36:55.080544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.283 qpair failed and we were unable to recover it. 00:36:50.283 [2024-11-17 09:36:55.080740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.283 [2024-11-17 09:36:55.080779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.283 qpair failed and we were unable to recover it. 00:36:50.283 [2024-11-17 09:36:55.080912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.283 [2024-11-17 09:36:55.080966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.283 qpair failed and we were unable to recover it. 00:36:50.283 [2024-11-17 09:36:55.081216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.283 [2024-11-17 09:36:55.081254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.283 qpair failed and we were unable to recover it. 
00:36:50.283 [2024-11-17 09:36:55.081393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.283 [2024-11-17 09:36:55.081444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.283 qpair failed and we were unable to recover it. 00:36:50.283 [2024-11-17 09:36:55.081604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.283 [2024-11-17 09:36:55.081653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.283 qpair failed and we were unable to recover it. 00:36:50.283 [2024-11-17 09:36:55.081825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.283 [2024-11-17 09:36:55.081885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.283 qpair failed and we were unable to recover it. 00:36:50.283 [2024-11-17 09:36:55.082135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.283 [2024-11-17 09:36:55.082191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.283 qpair failed and we were unable to recover it. 00:36:50.284 [2024-11-17 09:36:55.082328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.284 [2024-11-17 09:36:55.082362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.284 qpair failed and we were unable to recover it. 00:36:50.284 [2024-11-17 09:36:55.082516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.284 [2024-11-17 09:36:55.082550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.284 qpair failed and we were unable to recover it. 00:36:50.284 [2024-11-17 09:36:55.082711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.284 [2024-11-17 09:36:55.082745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.284 qpair failed and we were unable to recover it. 00:36:50.284 [2024-11-17 09:36:55.082856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.284 [2024-11-17 09:36:55.082889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.284 qpair failed and we were unable to recover it. 00:36:50.284 [2024-11-17 09:36:55.083020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.284 [2024-11-17 09:36:55.083054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.284 qpair failed and we were unable to recover it. 00:36:50.284 [2024-11-17 09:36:55.083176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.284 [2024-11-17 09:36:55.083216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.284 qpair failed and we were unable to recover it. 
00:36:50.284 [2024-11-17 09:36:55.083358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.284 [2024-11-17 09:36:55.083409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.284 qpair failed and we were unable to recover it. 00:36:50.284 [2024-11-17 09:36:55.083519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.284 [2024-11-17 09:36:55.083572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.284 qpair failed and we were unable to recover it. 00:36:50.284 [2024-11-17 09:36:55.083697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.284 [2024-11-17 09:36:55.083734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.284 qpair failed and we were unable to recover it. 00:36:50.284 [2024-11-17 09:36:55.083916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.284 [2024-11-17 09:36:55.083953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.284 qpair failed and we were unable to recover it. 00:36:50.284 [2024-11-17 09:36:55.084183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.284 [2024-11-17 09:36:55.084257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.284 qpair failed and we were unable to recover it. 00:36:50.284 [2024-11-17 09:36:55.084446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.284 [2024-11-17 09:36:55.084494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.284 qpair failed and we were unable to recover it. 00:36:50.284 [2024-11-17 09:36:55.084650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.284 [2024-11-17 09:36:55.084703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.284 qpair failed and we were unable to recover it. 00:36:50.284 [2024-11-17 09:36:55.084906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.284 [2024-11-17 09:36:55.084965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.284 qpair failed and we were unable to recover it. 00:36:50.284 [2024-11-17 09:36:55.085203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.284 [2024-11-17 09:36:55.085237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.284 qpair failed and we were unable to recover it. 00:36:50.284 [2024-11-17 09:36:55.085342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.284 [2024-11-17 09:36:55.085383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.284 qpair failed and we were unable to recover it. 
00:36:50.284 [2024-11-17 09:36:55.085551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.284 [2024-11-17 09:36:55.085591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.284 qpair failed and we were unable to recover it. 00:36:50.284 [2024-11-17 09:36:55.085767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.284 [2024-11-17 09:36:55.085820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.284 qpair failed and we were unable to recover it. 00:36:50.284 [2024-11-17 09:36:55.086043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.284 [2024-11-17 09:36:55.086106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.284 qpair failed and we were unable to recover it. 00:36:50.284 [2024-11-17 09:36:55.086303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.284 [2024-11-17 09:36:55.086338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.284 qpair failed and we were unable to recover it. 00:36:50.284 [2024-11-17 09:36:55.086486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.284 [2024-11-17 09:36:55.086521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.284 qpair failed and we were unable to recover it. 00:36:50.284 [2024-11-17 09:36:55.086677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.284 [2024-11-17 09:36:55.086726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.284 qpair failed and we were unable to recover it. 00:36:50.284 [2024-11-17 09:36:55.086898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.284 [2024-11-17 09:36:55.086935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.284 qpair failed and we were unable to recover it. 00:36:50.284 [2024-11-17 09:36:55.087126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.284 [2024-11-17 09:36:55.087188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.284 qpair failed and we were unable to recover it. 00:36:50.284 [2024-11-17 09:36:55.087333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.284 [2024-11-17 09:36:55.087376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.284 qpair failed and we were unable to recover it. 00:36:50.284 [2024-11-17 09:36:55.087506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.284 [2024-11-17 09:36:55.087540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.284 qpair failed and we were unable to recover it. 
00:36:50.284 [2024-11-17 09:36:55.087690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.284 [2024-11-17 09:36:55.087742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.284 qpair failed and we were unable to recover it. 00:36:50.284 [2024-11-17 09:36:55.087866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.284 [2024-11-17 09:36:55.087905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.284 qpair failed and we were unable to recover it. 00:36:50.284 [2024-11-17 09:36:55.088104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.284 [2024-11-17 09:36:55.088172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.284 qpair failed and we were unable to recover it. 00:36:50.284 [2024-11-17 09:36:55.088322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.284 [2024-11-17 09:36:55.088359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.284 qpair failed and we were unable to recover it. 00:36:50.284 [2024-11-17 09:36:55.088496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.284 [2024-11-17 09:36:55.088531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.284 qpair failed and we were unable to recover it. 00:36:50.284 [2024-11-17 09:36:55.088707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.284 [2024-11-17 09:36:55.088755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.284 qpair failed and we were unable to recover it. 00:36:50.284 [2024-11-17 09:36:55.088907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.284 [2024-11-17 09:36:55.088944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.284 qpair failed and we were unable to recover it. 00:36:50.284 [2024-11-17 09:36:55.089145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.284 [2024-11-17 09:36:55.089196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.284 qpair failed and we were unable to recover it. 00:36:50.284 [2024-11-17 09:36:55.089324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.284 [2024-11-17 09:36:55.089378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.284 qpair failed and we were unable to recover it. 00:36:50.284 [2024-11-17 09:36:55.089541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.284 [2024-11-17 09:36:55.089589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.284 qpair failed and we were unable to recover it. 
00:36:50.284 [2024-11-17 09:36:55.089764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.284 [2024-11-17 09:36:55.089834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.284 qpair failed and we were unable to recover it. 00:36:50.284 [2024-11-17 09:36:55.090082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.285 [2024-11-17 09:36:55.090138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.285 qpair failed and we were unable to recover it. 00:36:50.285 [2024-11-17 09:36:55.090317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.285 [2024-11-17 09:36:55.090354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.285 qpair failed and we were unable to recover it. 00:36:50.285 [2024-11-17 09:36:55.090484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.285 [2024-11-17 09:36:55.090518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.285 qpair failed and we were unable to recover it. 00:36:50.285 [2024-11-17 09:36:55.090681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.285 [2024-11-17 09:36:55.090714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.285 qpair failed and we were unable to recover it. 00:36:50.285 [2024-11-17 09:36:55.090829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.285 [2024-11-17 09:36:55.090886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.285 qpair failed and we were unable to recover it. 00:36:50.285 [2024-11-17 09:36:55.091046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.285 [2024-11-17 09:36:55.091099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.285 qpair failed and we were unable to recover it. 00:36:50.285 [2024-11-17 09:36:55.091252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.285 [2024-11-17 09:36:55.091289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.285 qpair failed and we were unable to recover it. 00:36:50.285 [2024-11-17 09:36:55.091445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.285 [2024-11-17 09:36:55.091479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.285 qpair failed and we were unable to recover it. 00:36:50.285 [2024-11-17 09:36:55.091582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.285 [2024-11-17 09:36:55.091621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.285 qpair failed and we were unable to recover it. 
00:36:50.285 [2024-11-17 09:36:55.091768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.285 [2024-11-17 09:36:55.091820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.285 qpair failed and we were unable to recover it. 00:36:50.285 [2024-11-17 09:36:55.091955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.285 [2024-11-17 09:36:55.091988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.285 qpair failed and we were unable to recover it. 00:36:50.285 [2024-11-17 09:36:55.092185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.285 [2024-11-17 09:36:55.092222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.285 qpair failed and we were unable to recover it. 00:36:50.285 [2024-11-17 09:36:55.092341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.285 [2024-11-17 09:36:55.092382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.285 qpair failed and we were unable to recover it. 00:36:50.285 [2024-11-17 09:36:55.092520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.285 [2024-11-17 09:36:55.092554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.285 qpair failed and we were unable to recover it. 00:36:50.285 [2024-11-17 09:36:55.092760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.285 [2024-11-17 09:36:55.092814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.285 qpair failed and we were unable to recover it. 00:36:50.285 [2024-11-17 09:36:55.093004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.285 [2024-11-17 09:36:55.093044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.285 qpair failed and we were unable to recover it. 00:36:50.285 [2024-11-17 09:36:55.093219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.285 [2024-11-17 09:36:55.093258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.285 qpair failed and we were unable to recover it. 00:36:50.285 [2024-11-17 09:36:55.093386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.285 [2024-11-17 09:36:55.093437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.285 qpair failed and we were unable to recover it. 00:36:50.285 [2024-11-17 09:36:55.093562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.285 [2024-11-17 09:36:55.093611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.285 qpair failed and we were unable to recover it. 
00:36:50.285 [2024-11-17 09:36:55.093803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.285 [2024-11-17 09:36:55.093856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.285 qpair failed and we were unable to recover it. 00:36:50.285 [2024-11-17 09:36:55.094012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.285 [2024-11-17 09:36:55.094064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.285 qpair failed and we were unable to recover it. 00:36:50.285 [2024-11-17 09:36:55.094164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.285 [2024-11-17 09:36:55.094199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.285 qpair failed and we were unable to recover it. 00:36:50.285 [2024-11-17 09:36:55.094364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.285 [2024-11-17 09:36:55.094419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.285 qpair failed and we were unable to recover it. 00:36:50.285 [2024-11-17 09:36:55.094614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.285 [2024-11-17 09:36:55.094680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.285 qpair failed and we were unable to recover it. 00:36:50.285 [2024-11-17 09:36:55.094804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.285 [2024-11-17 09:36:55.094843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.285 qpair failed and we were unable to recover it. 00:36:50.285 [2024-11-17 09:36:55.095092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.285 [2024-11-17 09:36:55.095160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.285 qpair failed and we were unable to recover it. 00:36:50.285 [2024-11-17 09:36:55.095276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.285 [2024-11-17 09:36:55.095313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.285 qpair failed and we were unable to recover it. 00:36:50.285 [2024-11-17 09:36:55.095501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.285 [2024-11-17 09:36:55.095535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.285 qpair failed and we were unable to recover it. 00:36:50.285 [2024-11-17 09:36:55.095707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.285 [2024-11-17 09:36:55.095772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.285 qpair failed and we were unable to recover it. 
00:36:50.285 [2024-11-17 09:36:55.095943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.285 [2024-11-17 09:36:55.096007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.285 qpair failed and we were unable to recover it. 00:36:50.285 [2024-11-17 09:36:55.096184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.285 [2024-11-17 09:36:55.096220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.285 qpair failed and we were unable to recover it. 00:36:50.285 [2024-11-17 09:36:55.096382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.285 [2024-11-17 09:36:55.096417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.285 qpair failed and we were unable to recover it. 00:36:50.285 [2024-11-17 09:36:55.096550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.285 [2024-11-17 09:36:55.096584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.285 qpair failed and we were unable to recover it. 00:36:50.285 [2024-11-17 09:36:55.096740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.285 [2024-11-17 09:36:55.096778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.285 qpair failed and we were unable to recover it. 00:36:50.285 [2024-11-17 09:36:55.096995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.286 [2024-11-17 09:36:55.097059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.286 qpair failed and we were unable to recover it. 00:36:50.286 [2024-11-17 09:36:55.097215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.286 [2024-11-17 09:36:55.097253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.286 qpair failed and we were unable to recover it. 00:36:50.286 [2024-11-17 09:36:55.097447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.286 [2024-11-17 09:36:55.097482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.286 qpair failed and we were unable to recover it. 00:36:50.286 [2024-11-17 09:36:55.097615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.286 [2024-11-17 09:36:55.097663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.286 qpair failed and we were unable to recover it. 00:36:50.286 [2024-11-17 09:36:55.097830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.286 [2024-11-17 09:36:55.097867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.286 qpair failed and we were unable to recover it. 
00:36:50.286 [2024-11-17 09:36:55.097987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.286 [2024-11-17 09:36:55.098024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.286 qpair failed and we were unable to recover it. 00:36:50.286 [2024-11-17 09:36:55.098200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.286 [2024-11-17 09:36:55.098239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.286 qpair failed and we were unable to recover it. 00:36:50.286 [2024-11-17 09:36:55.098417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.286 [2024-11-17 09:36:55.098466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.286 qpair failed and we were unable to recover it. 00:36:50.286 [2024-11-17 09:36:55.098613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.286 [2024-11-17 09:36:55.098650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.286 qpair failed and we were unable to recover it. 00:36:50.286 [2024-11-17 09:36:55.098825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.286 [2024-11-17 09:36:55.098863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.286 qpair failed and we were unable to recover it. 00:36:50.286 [2024-11-17 09:36:55.099121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.286 [2024-11-17 09:36:55.099159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.286 qpair failed and we were unable to recover it. 00:36:50.286 [2024-11-17 09:36:55.099333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.286 [2024-11-17 09:36:55.099382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.286 qpair failed and we were unable to recover it. 00:36:50.286 [2024-11-17 09:36:55.099516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.286 [2024-11-17 09:36:55.099549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.286 qpair failed and we were unable to recover it. 00:36:50.286 [2024-11-17 09:36:55.099648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.286 [2024-11-17 09:36:55.099682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.286 qpair failed and we were unable to recover it. 00:36:50.286 [2024-11-17 09:36:55.099791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.286 [2024-11-17 09:36:55.099831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.286 qpair failed and we were unable to recover it. 
00:36:50.286 [2024-11-17 09:36:55.099967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.286 [2024-11-17 09:36:55.100001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.286 qpair failed and we were unable to recover it. 00:36:50.286 [2024-11-17 09:36:55.100149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.286 [2024-11-17 09:36:55.100182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.286 qpair failed and we were unable to recover it. 00:36:50.286 [2024-11-17 09:36:55.100319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.286 [2024-11-17 09:36:55.100352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.286 qpair failed and we were unable to recover it. 00:36:50.286 [2024-11-17 09:36:55.100472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.286 [2024-11-17 09:36:55.100509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.286 qpair failed and we were unable to recover it. 00:36:50.286 [2024-11-17 09:36:55.100657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.286 [2024-11-17 09:36:55.100710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.286 qpair failed and we were unable to recover it. 00:36:50.286 [2024-11-17 09:36:55.100873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.286 [2024-11-17 09:36:55.100928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.286 qpair failed and we were unable to recover it. 00:36:50.286 [2024-11-17 09:36:55.101041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.286 [2024-11-17 09:36:55.101075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.286 qpair failed and we were unable to recover it. 00:36:50.286 [2024-11-17 09:36:55.101235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.286 [2024-11-17 09:36:55.101269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.286 qpair failed and we were unable to recover it. 00:36:50.286 [2024-11-17 09:36:55.101462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.286 [2024-11-17 09:36:55.101517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.286 qpair failed and we were unable to recover it. 00:36:50.286 [2024-11-17 09:36:55.101668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.286 [2024-11-17 09:36:55.101706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.286 qpair failed and we were unable to recover it. 
00:36:50.286 [2024-11-17 09:36:55.101954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.286 [2024-11-17 09:36:55.101992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.286 qpair failed and we were unable to recover it. 00:36:50.286 [2024-11-17 09:36:55.102167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.286 [2024-11-17 09:36:55.102239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.286 qpair failed and we were unable to recover it. 00:36:50.286 [2024-11-17 09:36:55.102385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.286 [2024-11-17 09:36:55.102437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.286 qpair failed and we were unable to recover it. 00:36:50.286 [2024-11-17 09:36:55.102622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.286 [2024-11-17 09:36:55.102688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.286 qpair failed and we were unable to recover it. 00:36:50.286 [2024-11-17 09:36:55.102870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.286 [2024-11-17 09:36:55.102911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.286 qpair failed and we were unable to recover it. 00:36:50.286 [2024-11-17 09:36:55.103124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.286 [2024-11-17 09:36:55.103187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.286 qpair failed and we were unable to recover it. 00:36:50.286 [2024-11-17 09:36:55.103302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.286 [2024-11-17 09:36:55.103339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.286 qpair failed and we were unable to recover it. 00:36:50.286 [2024-11-17 09:36:55.103525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.286 [2024-11-17 09:36:55.103572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.286 qpair failed and we were unable to recover it. 00:36:50.286 [2024-11-17 09:36:55.103724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.286 [2024-11-17 09:36:55.103777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.286 qpair failed and we were unable to recover it. 00:36:50.286 [2024-11-17 09:36:55.104000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.286 [2024-11-17 09:36:55.104037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.286 qpair failed and we were unable to recover it. 
00:36:50.286 [2024-11-17 09:36:55.104205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.286 [2024-11-17 09:36:55.104244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.286 qpair failed and we were unable to recover it. 00:36:50.286 [2024-11-17 09:36:55.104398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.286 [2024-11-17 09:36:55.104432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.286 qpair failed and we were unable to recover it. 00:36:50.286 [2024-11-17 09:36:55.104582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.286 [2024-11-17 09:36:55.104635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.286 qpair failed and we were unable to recover it. 00:36:50.286 [2024-11-17 09:36:55.104854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.287 [2024-11-17 09:36:55.104911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.287 qpair failed and we were unable to recover it. 00:36:50.287 [2024-11-17 09:36:55.105069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.287 [2024-11-17 09:36:55.105127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.287 qpair failed and we were unable to recover it. 00:36:50.287 [2024-11-17 09:36:55.105260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.287 [2024-11-17 09:36:55.105295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.287 qpair failed and we were unable to recover it. 00:36:50.287 [2024-11-17 09:36:55.105489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.287 [2024-11-17 09:36:55.105538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.287 qpair failed and we were unable to recover it. 00:36:50.287 [2024-11-17 09:36:55.105696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.287 [2024-11-17 09:36:55.105735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.287 qpair failed and we were unable to recover it. 00:36:50.287 [2024-11-17 09:36:55.105927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.287 [2024-11-17 09:36:55.105986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.287 qpair failed and we were unable to recover it. 00:36:50.287 [2024-11-17 09:36:55.106117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.287 [2024-11-17 09:36:55.106213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.287 qpair failed and we were unable to recover it. 
00:36:50.287 [2024-11-17 09:36:55.106406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.287 [2024-11-17 09:36:55.106440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.287 qpair failed and we were unable to recover it. 00:36:50.287 [2024-11-17 09:36:55.106580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.287 [2024-11-17 09:36:55.106623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.287 qpair failed and we were unable to recover it. 00:36:50.287 [2024-11-17 09:36:55.106784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.287 [2024-11-17 09:36:55.106821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.287 qpair failed and we were unable to recover it. 00:36:50.287 [2024-11-17 09:36:55.107014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.287 [2024-11-17 09:36:55.107080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.287 qpair failed and we were unable to recover it. 00:36:50.287 [2024-11-17 09:36:55.107225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.287 [2024-11-17 09:36:55.107261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.287 qpair failed and we were unable to recover it. 00:36:50.287 [2024-11-17 09:36:55.107434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.287 [2024-11-17 09:36:55.107483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.287 qpair failed and we were unable to recover it. 00:36:50.287 [2024-11-17 09:36:55.107623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.287 [2024-11-17 09:36:55.107671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.287 qpair failed and we were unable to recover it. 00:36:50.287 [2024-11-17 09:36:55.107889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.287 [2024-11-17 09:36:55.107963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.287 qpair failed and we were unable to recover it. 00:36:50.287 [2024-11-17 09:36:55.108078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.287 [2024-11-17 09:36:55.108116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.287 qpair failed and we were unable to recover it. 00:36:50.287 [2024-11-17 09:36:55.108251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.287 [2024-11-17 09:36:55.108290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.287 qpair failed and we were unable to recover it. 
00:36:50.287 [2024-11-17 09:36:55.108429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.287 [2024-11-17 09:36:55.108463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.287 qpair failed and we were unable to recover it. 00:36:50.287 [2024-11-17 09:36:55.108579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.287 [2024-11-17 09:36:55.108615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.287 qpair failed and we were unable to recover it. 00:36:50.287 [2024-11-17 09:36:55.108779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.287 [2024-11-17 09:36:55.108816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.287 qpair failed and we were unable to recover it. 00:36:50.287 [2024-11-17 09:36:55.108929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.287 [2024-11-17 09:36:55.108966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.287 qpair failed and we were unable to recover it. 00:36:50.287 [2024-11-17 09:36:55.109135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.287 [2024-11-17 09:36:55.109169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.287 qpair failed and we were unable to recover it. 00:36:50.287 [2024-11-17 09:36:55.109373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.287 [2024-11-17 09:36:55.109407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.287 qpair failed and we were unable to recover it. 00:36:50.287 [2024-11-17 09:36:55.109529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.287 [2024-11-17 09:36:55.109577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.287 qpair failed and we were unable to recover it. 00:36:50.287 [2024-11-17 09:36:55.109724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.287 [2024-11-17 09:36:55.109759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.287 qpair failed and we were unable to recover it. 00:36:50.287 [2024-11-17 09:36:55.109909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.287 [2024-11-17 09:36:55.109962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.287 qpair failed and we were unable to recover it. 00:36:50.287 [2024-11-17 09:36:55.110065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.287 [2024-11-17 09:36:55.110102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.287 qpair failed and we were unable to recover it. 
00:36:50.287 [2024-11-17 09:36:55.110279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.287 [2024-11-17 09:36:55.110313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.287 qpair failed and we were unable to recover it. 00:36:50.287 [2024-11-17 09:36:55.110426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.287 [2024-11-17 09:36:55.110462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.287 qpair failed and we were unable to recover it. 00:36:50.287 [2024-11-17 09:36:55.110627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.287 [2024-11-17 09:36:55.110659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.287 qpair failed and we were unable to recover it. 00:36:50.287 [2024-11-17 09:36:55.110766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.287 [2024-11-17 09:36:55.110799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.287 qpair failed and we were unable to recover it. 00:36:50.287 [2024-11-17 09:36:55.111036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.287 [2024-11-17 09:36:55.111104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.287 qpair failed and we were unable to recover it. 00:36:50.287 [2024-11-17 09:36:55.111233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.287 [2024-11-17 09:36:55.111272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.287 qpair failed and we were unable to recover it. 00:36:50.287 [2024-11-17 09:36:55.111422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.287 [2024-11-17 09:36:55.111471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.287 qpair failed and we were unable to recover it. 00:36:50.287 [2024-11-17 09:36:55.111631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.287 [2024-11-17 09:36:55.111678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.287 qpair failed and we were unable to recover it. 00:36:50.287 [2024-11-17 09:36:55.111811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.287 [2024-11-17 09:36:55.111850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.287 qpair failed and we were unable to recover it. 00:36:50.287 [2024-11-17 09:36:55.112011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.287 [2024-11-17 09:36:55.112107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.287 qpair failed and we were unable to recover it. 
00:36:50.287 [2024-11-17 09:36:55.112262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.287 [2024-11-17 09:36:55.112298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.287 qpair failed and we were unable to recover it. 00:36:50.288 [2024-11-17 09:36:55.112483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.288 [2024-11-17 09:36:55.112517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.288 qpair failed and we were unable to recover it. 00:36:50.288 [2024-11-17 09:36:55.112661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.288 [2024-11-17 09:36:55.112697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.288 qpair failed and we were unable to recover it. 00:36:50.288 [2024-11-17 09:36:55.112854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.288 [2024-11-17 09:36:55.112887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.288 qpair failed and we were unable to recover it. 00:36:50.288 [2024-11-17 09:36:55.113057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.288 [2024-11-17 09:36:55.113093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.288 qpair failed and we were unable to recover it. 00:36:50.288 [2024-11-17 09:36:55.113265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.288 [2024-11-17 09:36:55.113302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.288 qpair failed and we were unable to recover it. 00:36:50.288 [2024-11-17 09:36:55.113494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.288 [2024-11-17 09:36:55.113543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.288 qpair failed and we were unable to recover it. 00:36:50.288 [2024-11-17 09:36:55.113711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.288 [2024-11-17 09:36:55.113752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.288 qpair failed and we were unable to recover it. 00:36:50.288 [2024-11-17 09:36:55.113902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.288 [2024-11-17 09:36:55.113940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.288 qpair failed and we were unable to recover it. 00:36:50.288 [2024-11-17 09:36:55.114053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.288 [2024-11-17 09:36:55.114090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.288 qpair failed and we were unable to recover it. 
00:36:50.288 [2024-11-17 09:36:55.114213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.288 [2024-11-17 09:36:55.114247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.288 qpair failed and we were unable to recover it. 00:36:50.288 [2024-11-17 09:36:55.114402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.288 [2024-11-17 09:36:55.114450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.288 qpair failed and we were unable to recover it. 00:36:50.288 [2024-11-17 09:36:55.114592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.288 [2024-11-17 09:36:55.114628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.288 qpair failed and we were unable to recover it. 00:36:50.288 [2024-11-17 09:36:55.114807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.288 [2024-11-17 09:36:55.114845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.288 qpair failed and we were unable to recover it. 00:36:50.288 [2024-11-17 09:36:55.114997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.288 [2024-11-17 09:36:55.115046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.288 qpair failed and we were unable to recover it. 00:36:50.288 [2024-11-17 09:36:55.115214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.288 [2024-11-17 09:36:55.115253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.288 qpair failed and we were unable to recover it. 00:36:50.288 [2024-11-17 09:36:55.115394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.288 [2024-11-17 09:36:55.115429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.288 qpair failed and we were unable to recover it. 00:36:50.288 [2024-11-17 09:36:55.115570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.288 [2024-11-17 09:36:55.115606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.288 qpair failed and we were unable to recover it. 00:36:50.288 [2024-11-17 09:36:55.115784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.288 [2024-11-17 09:36:55.115859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.288 qpair failed and we were unable to recover it. 00:36:50.288 [2024-11-17 09:36:55.116115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.288 [2024-11-17 09:36:55.116178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.288 qpair failed and we were unable to recover it. 
00:36:50.288 [2024-11-17 09:36:55.116327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.288 [2024-11-17 09:36:55.116364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.288 qpair failed and we were unable to recover it. 00:36:50.288 [2024-11-17 09:36:55.116526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.288 [2024-11-17 09:36:55.116561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.288 qpair failed and we were unable to recover it. 00:36:50.288 [2024-11-17 09:36:55.116732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.288 [2024-11-17 09:36:55.116797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.288 qpair failed and we were unable to recover it. 00:36:50.288 [2024-11-17 09:36:55.116960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.288 [2024-11-17 09:36:55.117013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.288 qpair failed and we were unable to recover it. 00:36:50.288 [2024-11-17 09:36:55.117226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.288 [2024-11-17 09:36:55.117261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.288 qpair failed and we were unable to recover it. 00:36:50.288 [2024-11-17 09:36:55.117420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.288 [2024-11-17 09:36:55.117474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.288 qpair failed and we were unable to recover it. 00:36:50.288 [2024-11-17 09:36:55.117618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.288 [2024-11-17 09:36:55.117671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.288 qpair failed and we were unable to recover it. 00:36:50.288 [2024-11-17 09:36:55.117821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.288 [2024-11-17 09:36:55.117868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.288 qpair failed and we were unable to recover it. 00:36:50.288 [2024-11-17 09:36:55.118011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.288 [2024-11-17 09:36:55.118047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.288 qpair failed and we were unable to recover it. 00:36:50.288 [2024-11-17 09:36:55.118225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.288 [2024-11-17 09:36:55.118274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.288 qpair failed and we were unable to recover it. 
00:36:50.288 [2024-11-17 09:36:55.118396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.288 [2024-11-17 09:36:55.118432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.288 qpair failed and we were unable to recover it. 00:36:50.288 [2024-11-17 09:36:55.118610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.288 [2024-11-17 09:36:55.118663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.288 qpair failed and we were unable to recover it. 00:36:50.288 [2024-11-17 09:36:55.118936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.288 [2024-11-17 09:36:55.118996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.288 qpair failed and we were unable to recover it. 00:36:50.288 [2024-11-17 09:36:55.119156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.288 [2024-11-17 09:36:55.119196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.288 qpair failed and we were unable to recover it. 00:36:50.288 [2024-11-17 09:36:55.119363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.288 [2024-11-17 09:36:55.119404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.288 qpair failed and we were unable to recover it. 00:36:50.288 [2024-11-17 09:36:55.119558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.288 [2024-11-17 09:36:55.119607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.288 qpair failed and we were unable to recover it. 00:36:50.288 [2024-11-17 09:36:55.119769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.288 [2024-11-17 09:36:55.119822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.288 qpair failed and we were unable to recover it. 00:36:50.288 [2024-11-17 09:36:55.119964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.288 [2024-11-17 09:36:55.120019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.288 qpair failed and we were unable to recover it. 00:36:50.288 [2024-11-17 09:36:55.120202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.288 [2024-11-17 09:36:55.120241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.289 qpair failed and we were unable to recover it. 00:36:50.289 [2024-11-17 09:36:55.120406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.289 [2024-11-17 09:36:55.120441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.289 qpair failed and we were unable to recover it. 
00:36:50.289 [2024-11-17 09:36:55.120566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.289 [2024-11-17 09:36:55.120603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.289 qpair failed and we were unable to recover it. 00:36:50.289 [2024-11-17 09:36:55.120752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.289 [2024-11-17 09:36:55.120788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.289 qpair failed and we were unable to recover it. 00:36:50.289 [2024-11-17 09:36:55.120938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.289 [2024-11-17 09:36:55.120975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.289 qpair failed and we were unable to recover it. 00:36:50.289 [2024-11-17 09:36:55.121124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.289 [2024-11-17 09:36:55.121160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.289 qpair failed and we were unable to recover it. 00:36:50.289 [2024-11-17 09:36:55.121306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.289 [2024-11-17 09:36:55.121342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.289 qpair failed and we were unable to recover it. 00:36:50.289 [2024-11-17 09:36:55.121491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.289 [2024-11-17 09:36:55.121538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.289 qpair failed and we were unable to recover it. 00:36:50.289 [2024-11-17 09:36:55.121710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.289 [2024-11-17 09:36:55.121774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.289 qpair failed and we were unable to recover it. 00:36:50.289 [2024-11-17 09:36:55.121938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.289 [2024-11-17 09:36:55.121993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.289 qpair failed and we were unable to recover it. 00:36:50.289 [2024-11-17 09:36:55.122141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.289 [2024-11-17 09:36:55.122193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.289 qpair failed and we were unable to recover it. 00:36:50.289 [2024-11-17 09:36:55.122330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.289 [2024-11-17 09:36:55.122364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.289 qpair failed and we were unable to recover it. 
00:36:50.289 [2024-11-17 09:36:55.122560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.289 [2024-11-17 09:36:55.122608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.289 qpair failed and we were unable to recover it. 00:36:50.289 [2024-11-17 09:36:55.122751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.289 [2024-11-17 09:36:55.122787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.289 qpair failed and we were unable to recover it. 00:36:50.289 [2024-11-17 09:36:55.122900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.289 [2024-11-17 09:36:55.122935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.289 qpair failed and we were unable to recover it. 00:36:50.289 [2024-11-17 09:36:55.123049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.289 [2024-11-17 09:36:55.123084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.289 qpair failed and we were unable to recover it. 00:36:50.289 [2024-11-17 09:36:55.123274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.289 [2024-11-17 09:36:55.123324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.289 qpair failed and we were unable to recover it. 00:36:50.289 [2024-11-17 09:36:55.123509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.289 [2024-11-17 09:36:55.123547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.289 qpair failed and we were unable to recover it. 00:36:50.289 [2024-11-17 09:36:55.123659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.289 [2024-11-17 09:36:55.123696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.289 qpair failed and we were unable to recover it. 00:36:50.289 [2024-11-17 09:36:55.123863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.289 [2024-11-17 09:36:55.123930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.289 qpair failed and we were unable to recover it. 00:36:50.289 [2024-11-17 09:36:55.124061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.289 [2024-11-17 09:36:55.124113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.289 qpair failed and we were unable to recover it. 00:36:50.289 [2024-11-17 09:36:55.124223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.289 [2024-11-17 09:36:55.124257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.289 qpair failed and we were unable to recover it. 
00:36:50.289 [2024-11-17 09:36:55.124412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.289 [2024-11-17 09:36:55.124451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.289 qpair failed and we were unable to recover it. 00:36:50.289 [2024-11-17 09:36:55.124702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.289 [2024-11-17 09:36:55.124737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.289 qpair failed and we were unable to recover it. 00:36:50.289 [2024-11-17 09:36:55.124842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.289 [2024-11-17 09:36:55.124887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.289 qpair failed and we were unable to recover it. 00:36:50.289 [2024-11-17 09:36:55.125052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.289 [2024-11-17 09:36:55.125086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.289 qpair failed and we were unable to recover it. 00:36:50.289 [2024-11-17 09:36:55.125238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.289 [2024-11-17 09:36:55.125285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.289 qpair failed and we were unable to recover it. 00:36:50.289 [2024-11-17 09:36:55.125457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.289 [2024-11-17 09:36:55.125494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.289 qpair failed and we were unable to recover it. 00:36:50.289 [2024-11-17 09:36:55.125639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.289 [2024-11-17 09:36:55.125688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.289 qpair failed and we were unable to recover it. 00:36:50.289 [2024-11-17 09:36:55.125800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.289 [2024-11-17 09:36:55.125835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.289 qpair failed and we were unable to recover it. 00:36:50.289 [2024-11-17 09:36:55.126019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.289 [2024-11-17 09:36:55.126073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.289 qpair failed and we were unable to recover it. 00:36:50.289 [2024-11-17 09:36:55.126175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.289 [2024-11-17 09:36:55.126210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.289 qpair failed and we were unable to recover it. 
00:36:50.289 [2024-11-17 09:36:55.126360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.289 [2024-11-17 09:36:55.126419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.289 qpair failed and we were unable to recover it. 00:36:50.289 [2024-11-17 09:36:55.126623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.289 [2024-11-17 09:36:55.126677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.289 qpair failed and we were unable to recover it. 00:36:50.289 [2024-11-17 09:36:55.126924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.289 [2024-11-17 09:36:55.126963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.289 qpair failed and we were unable to recover it. 00:36:50.289 [2024-11-17 09:36:55.127197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.289 [2024-11-17 09:36:55.127257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.289 qpair failed and we were unable to recover it. 00:36:50.289 [2024-11-17 09:36:55.127392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.289 [2024-11-17 09:36:55.127427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.289 qpair failed and we were unable to recover it. 00:36:50.289 [2024-11-17 09:36:55.127559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.289 [2024-11-17 09:36:55.127592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.289 qpair failed and we were unable to recover it. 00:36:50.289 [2024-11-17 09:36:55.127750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.290 [2024-11-17 09:36:55.127788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.290 qpair failed and we were unable to recover it. 00:36:50.290 [2024-11-17 09:36:55.127978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.290 [2024-11-17 09:36:55.128015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.290 qpair failed and we were unable to recover it. 00:36:50.290 [2024-11-17 09:36:55.128165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.290 [2024-11-17 09:36:55.128201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.290 qpair failed and we were unable to recover it. 00:36:50.290 [2024-11-17 09:36:55.128456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.290 [2024-11-17 09:36:55.128490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.290 qpair failed and we were unable to recover it. 
00:36:50.290 [2024-11-17 09:36:55.128627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.290 [2024-11-17 09:36:55.128694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.290 qpair failed and we were unable to recover it. 00:36:50.290 [2024-11-17 09:36:55.128958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.290 [2024-11-17 09:36:55.129016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.290 qpair failed and we were unable to recover it. 00:36:50.290 [2024-11-17 09:36:55.129201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.290 [2024-11-17 09:36:55.129261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.290 qpair failed and we were unable to recover it. 00:36:50.290 [2024-11-17 09:36:55.129396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.290 [2024-11-17 09:36:55.129431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.290 qpair failed and we were unable to recover it. 00:36:50.290 [2024-11-17 09:36:55.129601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.290 [2024-11-17 09:36:55.129654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.290 qpair failed and we were unable to recover it. 00:36:50.290 [2024-11-17 09:36:55.129814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.290 [2024-11-17 09:36:55.129879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.290 qpair failed and we were unable to recover it. 00:36:50.290 [2024-11-17 09:36:55.130014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.290 [2024-11-17 09:36:55.130057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.290 qpair failed and we were unable to recover it. 00:36:50.290 [2024-11-17 09:36:55.130179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.290 [2024-11-17 09:36:55.130216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.290 qpair failed and we were unable to recover it. 00:36:50.290 [2024-11-17 09:36:55.130361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.290 [2024-11-17 09:36:55.130422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.290 qpair failed and we were unable to recover it. 00:36:50.290 [2024-11-17 09:36:55.130558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.290 [2024-11-17 09:36:55.130591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.290 qpair failed and we were unable to recover it. 
00:36:50.290 [2024-11-17 09:36:55.130720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.290 [2024-11-17 09:36:55.130758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.290 qpair failed and we were unable to recover it. 00:36:50.290 [2024-11-17 09:36:55.130911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.290 [2024-11-17 09:36:55.130950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.290 qpair failed and we were unable to recover it. 00:36:50.290 [2024-11-17 09:36:55.131100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.290 [2024-11-17 09:36:55.131136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.290 qpair failed and we were unable to recover it. 00:36:50.290 [2024-11-17 09:36:55.131304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.290 [2024-11-17 09:36:55.131340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.290 qpair failed and we were unable to recover it. 00:36:50.290 [2024-11-17 09:36:55.131502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.290 [2024-11-17 09:36:55.131550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.290 qpair failed and we were unable to recover it. 00:36:50.290 [2024-11-17 09:36:55.131737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.290 [2024-11-17 09:36:55.131777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.290 qpair failed and we were unable to recover it. 00:36:50.290 [2024-11-17 09:36:55.131907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.290 [2024-11-17 09:36:55.131946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.290 qpair failed and we were unable to recover it. 00:36:50.290 [2024-11-17 09:36:55.132098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.290 [2024-11-17 09:36:55.132137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.290 qpair failed and we were unable to recover it. 00:36:50.290 [2024-11-17 09:36:55.132302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.290 [2024-11-17 09:36:55.132355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.290 qpair failed and we were unable to recover it. 00:36:50.290 [2024-11-17 09:36:55.132529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.290 [2024-11-17 09:36:55.132563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.290 qpair failed and we were unable to recover it. 
00:36:50.290 [2024-11-17 09:36:55.132776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.290 [2024-11-17 09:36:55.132835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.290 qpair failed and we were unable to recover it. 00:36:50.290 [2024-11-17 09:36:55.132961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.290 [2024-11-17 09:36:55.133010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.290 qpair failed and we were unable to recover it. 00:36:50.290 [2024-11-17 09:36:55.133161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.290 [2024-11-17 09:36:55.133198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.290 qpair failed and we were unable to recover it. 00:36:50.290 [2024-11-17 09:36:55.133348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.290 [2024-11-17 09:36:55.133406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.290 qpair failed and we were unable to recover it. 00:36:50.290 [2024-11-17 09:36:55.133527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.290 [2024-11-17 09:36:55.133563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.290 qpair failed and we were unable to recover it. 00:36:50.290 [2024-11-17 09:36:55.133697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.290 [2024-11-17 09:36:55.133749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.290 qpair failed and we were unable to recover it. 00:36:50.290 [2024-11-17 09:36:55.133939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.290 [2024-11-17 09:36:55.134000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.290 qpair failed and we were unable to recover it. 00:36:50.290 [2024-11-17 09:36:55.134242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.290 [2024-11-17 09:36:55.134296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.290 qpair failed and we were unable to recover it. 00:36:50.290 [2024-11-17 09:36:55.134395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.290 [2024-11-17 09:36:55.134430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.290 qpair failed and we were unable to recover it. 00:36:50.290 [2024-11-17 09:36:55.134612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.290 [2024-11-17 09:36:55.134666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.290 qpair failed and we were unable to recover it. 
00:36:50.290 [2024-11-17 09:36:55.134941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.291 [2024-11-17 09:36:55.134993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.291 qpair failed and we were unable to recover it. 00:36:50.291 [2024-11-17 09:36:55.135227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.291 [2024-11-17 09:36:55.135262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.291 qpair failed and we were unable to recover it. 00:36:50.291 [2024-11-17 09:36:55.135424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.291 [2024-11-17 09:36:55.135459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.291 qpair failed and we were unable to recover it. 00:36:50.291 [2024-11-17 09:36:55.135601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.291 [2024-11-17 09:36:55.135635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.291 qpair failed and we were unable to recover it. 00:36:50.291 [2024-11-17 09:36:55.135798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.291 [2024-11-17 09:36:55.135835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.291 qpair failed and we were unable to recover it. 00:36:50.291 [2024-11-17 09:36:55.136005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.291 [2024-11-17 09:36:55.136042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.291 qpair failed and we were unable to recover it. 00:36:50.291 [2024-11-17 09:36:55.136185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.291 [2024-11-17 09:36:55.136222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.291 qpair failed and we were unable to recover it. 00:36:50.291 [2024-11-17 09:36:55.136357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.291 [2024-11-17 09:36:55.136437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.291 qpair failed and we were unable to recover it. 00:36:50.291 [2024-11-17 09:36:55.136598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.291 [2024-11-17 09:36:55.136647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.291 qpair failed and we were unable to recover it. 00:36:50.291 [2024-11-17 09:36:55.136840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.291 [2024-11-17 09:36:55.136895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.291 qpair failed and we were unable to recover it. 
00:36:50.291 [2024-11-17 09:36:55.137076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.291 [2024-11-17 09:36:55.137130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.291 qpair failed and we were unable to recover it. 00:36:50.291 [2024-11-17 09:36:55.137297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.291 [2024-11-17 09:36:55.137332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.291 qpair failed and we were unable to recover it. 00:36:50.291 [2024-11-17 09:36:55.137458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.291 [2024-11-17 09:36:55.137507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.291 qpair failed and we were unable to recover it. 00:36:50.291 [2024-11-17 09:36:55.137672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.291 [2024-11-17 09:36:55.137711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.291 qpair failed and we were unable to recover it. 00:36:50.291 [2024-11-17 09:36:55.137863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.291 [2024-11-17 09:36:55.137900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.291 qpair failed and we were unable to recover it. 00:36:50.291 [2024-11-17 09:36:55.138046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.291 [2024-11-17 09:36:55.138084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.291 qpair failed and we were unable to recover it. 00:36:50.291 [2024-11-17 09:36:55.138257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.291 [2024-11-17 09:36:55.138300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.291 qpair failed and we were unable to recover it. 00:36:50.291 [2024-11-17 09:36:55.138436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.291 [2024-11-17 09:36:55.138472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.291 qpair failed and we were unable to recover it. 00:36:50.291 [2024-11-17 09:36:55.138627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.291 [2024-11-17 09:36:55.138685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.291 qpair failed and we were unable to recover it. 00:36:50.291 [2024-11-17 09:36:55.138884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.291 [2024-11-17 09:36:55.138943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.291 qpair failed and we were unable to recover it. 
00:36:50.291 [2024-11-17 09:36:55.139199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.291 [2024-11-17 09:36:55.139257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.291 qpair failed and we were unable to recover it. 00:36:50.291 [2024-11-17 09:36:55.139392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.291 [2024-11-17 09:36:55.139428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.291 qpair failed and we were unable to recover it. 00:36:50.291 [2024-11-17 09:36:55.139621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.291 [2024-11-17 09:36:55.139675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.291 qpair failed and we were unable to recover it. 00:36:50.291 [2024-11-17 09:36:55.139832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.291 [2024-11-17 09:36:55.139888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.291 qpair failed and we were unable to recover it. 00:36:50.291 [2024-11-17 09:36:55.140096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.291 [2024-11-17 09:36:55.140156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.291 qpair failed and we were unable to recover it. 00:36:50.291 [2024-11-17 09:36:55.140309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.291 [2024-11-17 09:36:55.140342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.291 qpair failed and we were unable to recover it. 00:36:50.291 [2024-11-17 09:36:55.140488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.291 [2024-11-17 09:36:55.140522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.291 qpair failed and we were unable to recover it. 00:36:50.291 [2024-11-17 09:36:55.140699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.291 [2024-11-17 09:36:55.140737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.291 qpair failed and we were unable to recover it. 00:36:50.291 [2024-11-17 09:36:55.140944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.291 [2024-11-17 09:36:55.140981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.291 qpair failed and we were unable to recover it. 00:36:50.291 [2024-11-17 09:36:55.141253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.291 [2024-11-17 09:36:55.141309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.291 qpair failed and we were unable to recover it. 
00:36:50.291 [2024-11-17 09:36:55.141459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.291 [2024-11-17 09:36:55.141494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.291 qpair failed and we were unable to recover it. 00:36:50.291 [2024-11-17 09:36:55.141660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.291 [2024-11-17 09:36:55.141693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.291 qpair failed and we were unable to recover it. 00:36:50.291 [2024-11-17 09:36:55.141803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.291 [2024-11-17 09:36:55.141856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.291 qpair failed and we were unable to recover it. 00:36:50.291 [2024-11-17 09:36:55.141989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.291 [2024-11-17 09:36:55.142029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.291 qpair failed and we were unable to recover it. 00:36:50.291 [2024-11-17 09:36:55.142227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.291 [2024-11-17 09:36:55.142323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.291 qpair failed and we were unable to recover it. 00:36:50.291 [2024-11-17 09:36:55.142462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.291 [2024-11-17 09:36:55.142496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.291 qpair failed and we were unable to recover it. 00:36:50.291 [2024-11-17 09:36:55.142605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.291 [2024-11-17 09:36:55.142639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.291 qpair failed and we were unable to recover it. 00:36:50.291 [2024-11-17 09:36:55.142857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.291 [2024-11-17 09:36:55.142915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.291 qpair failed and we were unable to recover it. 00:36:50.292 [2024-11-17 09:36:55.143093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.292 [2024-11-17 09:36:55.143148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.292 qpair failed and we were unable to recover it. 00:36:50.292 [2024-11-17 09:36:55.143326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.292 [2024-11-17 09:36:55.143364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.292 qpair failed and we were unable to recover it. 
00:36:50.292 [2024-11-17 09:36:55.143526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.292 [2024-11-17 09:36:55.143561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.292 qpair failed and we were unable to recover it. 00:36:50.292 [2024-11-17 09:36:55.143710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.292 [2024-11-17 09:36:55.143749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.292 qpair failed and we were unable to recover it. 00:36:50.292 [2024-11-17 09:36:55.143984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.292 [2024-11-17 09:36:55.144040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.292 qpair failed and we were unable to recover it. 00:36:50.292 [2024-11-17 09:36:55.144215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.292 [2024-11-17 09:36:55.144252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.292 qpair failed and we were unable to recover it. 00:36:50.292 [2024-11-17 09:36:55.144402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.292 [2024-11-17 09:36:55.144462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.292 qpair failed and we were unable to recover it. 00:36:50.292 [2024-11-17 09:36:55.144605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.292 [2024-11-17 09:36:55.144641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.292 qpair failed and we were unable to recover it. 00:36:50.292 [2024-11-17 09:36:55.144856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.292 [2024-11-17 09:36:55.144933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.292 qpair failed and we were unable to recover it. 00:36:50.292 [2024-11-17 09:36:55.145140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.292 [2024-11-17 09:36:55.145197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.292 qpair failed and we were unable to recover it. 00:36:50.292 [2024-11-17 09:36:55.145333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.292 [2024-11-17 09:36:55.145380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.292 qpair failed and we were unable to recover it. 00:36:50.292 [2024-11-17 09:36:55.145521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.292 [2024-11-17 09:36:55.145555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.292 qpair failed and we were unable to recover it. 
00:36:50.292 [2024-11-17 09:36:55.145672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.292 [2024-11-17 09:36:55.145708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.292 qpair failed and we were unable to recover it. 00:36:50.292 [2024-11-17 09:36:55.145839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.292 [2024-11-17 09:36:55.145890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.292 qpair failed and we were unable to recover it. 00:36:50.292 [2024-11-17 09:36:55.146007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.292 [2024-11-17 09:36:55.146046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.292 qpair failed and we were unable to recover it. 00:36:50.292 [2024-11-17 09:36:55.146205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.292 [2024-11-17 09:36:55.146242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.292 qpair failed and we were unable to recover it. 00:36:50.292 [2024-11-17 09:36:55.146404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.292 [2024-11-17 09:36:55.146438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.292 qpair failed and we were unable to recover it. 00:36:50.292 [2024-11-17 09:36:55.146546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.292 [2024-11-17 09:36:55.146580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.292 qpair failed and we were unable to recover it. 00:36:50.292 [2024-11-17 09:36:55.146728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.292 [2024-11-17 09:36:55.146787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.292 qpair failed and we were unable to recover it. 00:36:50.292 [2024-11-17 09:36:55.146990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.292 [2024-11-17 09:36:55.147031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.292 qpair failed and we were unable to recover it. 00:36:50.292 [2024-11-17 09:36:55.147180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.292 [2024-11-17 09:36:55.147219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.292 qpair failed and we were unable to recover it. 00:36:50.292 [2024-11-17 09:36:55.147413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.292 [2024-11-17 09:36:55.147446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.292 qpair failed and we were unable to recover it. 
00:36:50.292 [2024-11-17 09:36:55.147587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.292 [2024-11-17 09:36:55.147620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.292 qpair failed and we were unable to recover it. 00:36:50.292 [2024-11-17 09:36:55.147759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.292 [2024-11-17 09:36:55.147792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.292 qpair failed and we were unable to recover it. 00:36:50.292 [2024-11-17 09:36:55.147966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.292 [2024-11-17 09:36:55.148005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.292 qpair failed and we were unable to recover it. 00:36:50.292 [2024-11-17 09:36:55.148176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.292 [2024-11-17 09:36:55.148214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.292 qpair failed and we were unable to recover it. 00:36:50.292 [2024-11-17 09:36:55.148354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.292 [2024-11-17 09:36:55.148412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.292 qpair failed and we were unable to recover it. 00:36:50.292 [2024-11-17 09:36:55.148549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.292 [2024-11-17 09:36:55.148584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.292 qpair failed and we were unable to recover it. 00:36:50.292 [2024-11-17 09:36:55.148737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.292 [2024-11-17 09:36:55.148785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.292 qpair failed and we were unable to recover it. 00:36:50.292 [2024-11-17 09:36:55.148942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.292 [2024-11-17 09:36:55.148996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.292 qpair failed and we were unable to recover it. 00:36:50.292 [2024-11-17 09:36:55.149224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.292 [2024-11-17 09:36:55.149285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.292 qpair failed and we were unable to recover it. 00:36:50.292 [2024-11-17 09:36:55.149444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.292 [2024-11-17 09:36:55.149478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.292 qpair failed and we were unable to recover it. 
00:36:50.292 [2024-11-17 09:36:55.149613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.292 [2024-11-17 09:36:55.149663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.292 qpair failed and we were unable to recover it. 00:36:50.292 [2024-11-17 09:36:55.149798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.292 [2024-11-17 09:36:55.149850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.292 qpair failed and we were unable to recover it. 00:36:50.292 [2024-11-17 09:36:55.150082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.292 [2024-11-17 09:36:55.150119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.292 qpair failed and we were unable to recover it. 00:36:50.292 [2024-11-17 09:36:55.150290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.292 [2024-11-17 09:36:55.150327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.292 qpair failed and we were unable to recover it. 00:36:50.292 [2024-11-17 09:36:55.150517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.292 [2024-11-17 09:36:55.150565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.292 qpair failed and we were unable to recover it. 00:36:50.293 [2024-11-17 09:36:55.150794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.293 [2024-11-17 09:36:55.150862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.293 qpair failed and we were unable to recover it. 00:36:50.293 [2024-11-17 09:36:55.151109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.293 [2024-11-17 09:36:55.151167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.293 qpair failed and we were unable to recover it. 00:36:50.293 [2024-11-17 09:36:55.151301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.293 [2024-11-17 09:36:55.151336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.293 qpair failed and we were unable to recover it. 00:36:50.293 [2024-11-17 09:36:55.151477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.293 [2024-11-17 09:36:55.151530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.293 qpair failed and we were unable to recover it. 00:36:50.293 [2024-11-17 09:36:55.151685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.293 [2024-11-17 09:36:55.151752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.293 qpair failed and we were unable to recover it. 
00:36:50.293 [2024-11-17 09:36:55.151987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.293 [2024-11-17 09:36:55.152049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.293 qpair failed and we were unable to recover it. 00:36:50.293 [2024-11-17 09:36:55.152269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.293 [2024-11-17 09:36:55.152310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.293 qpair failed and we were unable to recover it. 00:36:50.293 [2024-11-17 09:36:55.152498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.293 [2024-11-17 09:36:55.152533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.293 qpair failed and we were unable to recover it. 00:36:50.293 [2024-11-17 09:36:55.152702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.293 [2024-11-17 09:36:55.152771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.293 qpair failed and we were unable to recover it. 00:36:50.293 [2024-11-17 09:36:55.152977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.293 [2024-11-17 09:36:55.153041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.293 qpair failed and we were unable to recover it. 00:36:50.293 [2024-11-17 09:36:55.153300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.293 [2024-11-17 09:36:55.153359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.293 qpair failed and we were unable to recover it. 00:36:50.293 [2024-11-17 09:36:55.153497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.293 [2024-11-17 09:36:55.153531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.293 qpair failed and we were unable to recover it. 00:36:50.293 [2024-11-17 09:36:55.153682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.293 [2024-11-17 09:36:55.153750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.293 qpair failed and we were unable to recover it. 00:36:50.293 [2024-11-17 09:36:55.153940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.293 [2024-11-17 09:36:55.154023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.293 qpair failed and we were unable to recover it. 00:36:50.293 [2024-11-17 09:36:55.154230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.293 [2024-11-17 09:36:55.154293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.293 qpair failed and we were unable to recover it. 
00:36:50.293 [2024-11-17 09:36:55.154435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.293 [2024-11-17 09:36:55.154470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.293 qpair failed and we were unable to recover it. 00:36:50.293 [2024-11-17 09:36:55.154593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.293 [2024-11-17 09:36:55.154631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.293 qpair failed and we were unable to recover it. 00:36:50.293 [2024-11-17 09:36:55.154850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.293 [2024-11-17 09:36:55.154906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.293 qpair failed and we were unable to recover it. 00:36:50.293 [2024-11-17 09:36:55.155175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.293 [2024-11-17 09:36:55.155232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.293 qpair failed and we were unable to recover it. 00:36:50.293 [2024-11-17 09:36:55.155383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.293 [2024-11-17 09:36:55.155434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.293 qpair failed and we were unable to recover it. 00:36:50.293 [2024-11-17 09:36:55.155597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.293 [2024-11-17 09:36:55.155631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.293 qpair failed and we were unable to recover it. 00:36:50.293 [2024-11-17 09:36:55.155771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.293 [2024-11-17 09:36:55.155810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.293 qpair failed and we were unable to recover it. 00:36:50.293 [2024-11-17 09:36:55.155985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.293 [2024-11-17 09:36:55.156069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.293 qpair failed and we were unable to recover it. 00:36:50.293 [2024-11-17 09:36:55.156245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.293 [2024-11-17 09:36:55.156283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.293 qpair failed and we were unable to recover it. 00:36:50.293 [2024-11-17 09:36:55.156440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.293 [2024-11-17 09:36:55.156476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.293 qpair failed and we were unable to recover it. 
00:36:50.293 [2024-11-17 09:36:55.156667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.293 [2024-11-17 09:36:55.156725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.293 qpair failed and we were unable to recover it. 00:36:50.293 [2024-11-17 09:36:55.156928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.293 [2024-11-17 09:36:55.156984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.293 qpair failed and we were unable to recover it. 00:36:50.293 [2024-11-17 09:36:55.157160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.293 [2024-11-17 09:36:55.157219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.293 qpair failed and we were unable to recover it. 00:36:50.293 [2024-11-17 09:36:55.157355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.293 [2024-11-17 09:36:55.157407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.293 qpair failed and we were unable to recover it. 00:36:50.293 [2024-11-17 09:36:55.157541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.293 [2024-11-17 09:36:55.157575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.293 qpair failed and we were unable to recover it. 00:36:50.293 [2024-11-17 09:36:55.157720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.293 [2024-11-17 09:36:55.157754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.293 qpair failed and we were unable to recover it. 00:36:50.293 [2024-11-17 09:36:55.157889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.293 [2024-11-17 09:36:55.157922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.293 qpair failed and we were unable to recover it. 00:36:50.293 [2024-11-17 09:36:55.158055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.293 [2024-11-17 09:36:55.158089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.293 qpair failed and we were unable to recover it. 00:36:50.293 [2024-11-17 09:36:55.158224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.293 [2024-11-17 09:36:55.158257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.293 qpair failed and we were unable to recover it. 00:36:50.293 [2024-11-17 09:36:55.158391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.293 [2024-11-17 09:36:55.158434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.293 qpair failed and we were unable to recover it. 
00:36:50.293 [2024-11-17 09:36:55.158577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.293 [2024-11-17 09:36:55.158635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.293 qpair failed and we were unable to recover it. 00:36:50.293 [2024-11-17 09:36:55.158806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.293 [2024-11-17 09:36:55.158856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.293 qpair failed and we were unable to recover it. 00:36:50.293 [2024-11-17 09:36:55.158965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.293 [2024-11-17 09:36:55.158999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.294 qpair failed and we were unable to recover it. 00:36:50.294 [2024-11-17 09:36:55.159138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.294 [2024-11-17 09:36:55.159172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.294 qpair failed and we were unable to recover it. 00:36:50.294 [2024-11-17 09:36:55.159325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.294 [2024-11-17 09:36:55.159378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.294 qpair failed and we were unable to recover it. 00:36:50.294 [2024-11-17 09:36:55.159501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.294 [2024-11-17 09:36:55.159539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.294 qpair failed and we were unable to recover it. 00:36:50.294 [2024-11-17 09:36:55.159707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.294 [2024-11-17 09:36:55.159761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.294 qpair failed and we were unable to recover it. 00:36:50.294 [2024-11-17 09:36:55.159927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.294 [2024-11-17 09:36:55.159982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.294 qpair failed and we were unable to recover it. 00:36:50.294 [2024-11-17 09:36:55.160142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.294 [2024-11-17 09:36:55.160176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.294 qpair failed and we were unable to recover it. 00:36:50.294 [2024-11-17 09:36:55.160309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.294 [2024-11-17 09:36:55.160342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.294 qpair failed and we were unable to recover it. 
00:36:50.294 [2024-11-17 09:36:55.160522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.294 [2024-11-17 09:36:55.160561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.294 qpair failed and we were unable to recover it. 00:36:50.294 [2024-11-17 09:36:55.160706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.294 [2024-11-17 09:36:55.160774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.294 qpair failed and we were unable to recover it. 00:36:50.294 [2024-11-17 09:36:55.161042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.294 [2024-11-17 09:36:55.161100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.294 qpair failed and we were unable to recover it. 00:36:50.294 [2024-11-17 09:36:55.161266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.294 [2024-11-17 09:36:55.161300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.294 qpair failed and we were unable to recover it. 00:36:50.294 [2024-11-17 09:36:55.161418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.294 [2024-11-17 09:36:55.161454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.294 qpair failed and we were unable to recover it. 00:36:50.294 [2024-11-17 09:36:55.161615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.294 [2024-11-17 09:36:55.161673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.294 qpair failed and we were unable to recover it. 00:36:50.294 [2024-11-17 09:36:55.161946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.294 [2024-11-17 09:36:55.162005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.294 qpair failed and we were unable to recover it. 00:36:50.294 [2024-11-17 09:36:55.162125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.294 [2024-11-17 09:36:55.162162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.294 qpair failed and we were unable to recover it. 00:36:50.294 [2024-11-17 09:36:55.162285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.294 [2024-11-17 09:36:55.162322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.294 qpair failed and we were unable to recover it. 00:36:50.294 [2024-11-17 09:36:55.162505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.294 [2024-11-17 09:36:55.162549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.294 qpair failed and we were unable to recover it. 
00:36:50.294 [2024-11-17 09:36:55.162732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.294 [2024-11-17 09:36:55.162784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.294 qpair failed and we were unable to recover it. 00:36:50.294 [2024-11-17 09:36:55.162969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.294 [2024-11-17 09:36:55.163020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.294 qpair failed and we were unable to recover it. 00:36:50.294 [2024-11-17 09:36:55.163125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.294 [2024-11-17 09:36:55.163159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.294 qpair failed and we were unable to recover it. 00:36:50.294 [2024-11-17 09:36:55.163312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.294 [2024-11-17 09:36:55.163360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.294 qpair failed and we were unable to recover it. 00:36:50.294 [2024-11-17 09:36:55.163560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.294 [2024-11-17 09:36:55.163615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.294 qpair failed and we were unable to recover it. 00:36:50.294 [2024-11-17 09:36:55.163830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.294 [2024-11-17 09:36:55.163889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.294 qpair failed and we were unable to recover it. 00:36:50.294 [2024-11-17 09:36:55.164071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.294 [2024-11-17 09:36:55.164141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.294 qpair failed and we were unable to recover it. 00:36:50.294 [2024-11-17 09:36:55.164280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.294 [2024-11-17 09:36:55.164314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.294 qpair failed and we were unable to recover it. 00:36:50.294 [2024-11-17 09:36:55.164440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.294 [2024-11-17 09:36:55.164476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.294 qpair failed and we were unable to recover it. 00:36:50.294 [2024-11-17 09:36:55.164613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.294 [2024-11-17 09:36:55.164657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.294 qpair failed and we were unable to recover it. 
00:36:50.294 [2024-11-17 09:36:55.164821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.294 [2024-11-17 09:36:55.164856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.294 qpair failed and we were unable to recover it. 00:36:50.294 [2024-11-17 09:36:55.164978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.294 [2024-11-17 09:36:55.165012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.294 qpair failed and we were unable to recover it. 00:36:50.294 [2024-11-17 09:36:55.165177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.294 [2024-11-17 09:36:55.165233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.294 qpair failed and we were unable to recover it. 00:36:50.294 [2024-11-17 09:36:55.165377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.294 [2024-11-17 09:36:55.165413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.294 qpair failed and we were unable to recover it. 00:36:50.294 [2024-11-17 09:36:55.165567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.294 [2024-11-17 09:36:55.165616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.294 qpair failed and we were unable to recover it. 00:36:50.294 [2024-11-17 09:36:55.165874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.294 [2024-11-17 09:36:55.165930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.294 qpair failed and we were unable to recover it. 00:36:50.295 [2024-11-17 09:36:55.166132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.295 [2024-11-17 09:36:55.166199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.295 qpair failed and we were unable to recover it. 00:36:50.295 [2024-11-17 09:36:55.166365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.295 [2024-11-17 09:36:55.166409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.295 qpair failed and we were unable to recover it. 00:36:50.295 [2024-11-17 09:36:55.166580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.295 [2024-11-17 09:36:55.166616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.295 qpair failed and we were unable to recover it. 00:36:50.295 [2024-11-17 09:36:55.166737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.295 [2024-11-17 09:36:55.166772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.295 qpair failed and we were unable to recover it. 
00:36:50.295 [2024-11-17 09:36:55.166955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.295 [2024-11-17 09:36:55.167021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.295 qpair failed and we were unable to recover it. 00:36:50.295 [2024-11-17 09:36:55.167256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.295 [2024-11-17 09:36:55.167310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.295 qpair failed and we were unable to recover it. 00:36:50.295 [2024-11-17 09:36:55.167497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.295 [2024-11-17 09:36:55.167533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.295 qpair failed and we were unable to recover it. 00:36:50.295 [2024-11-17 09:36:55.167684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.295 [2024-11-17 09:36:55.167735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.295 qpair failed and we were unable to recover it. 00:36:50.295 [2024-11-17 09:36:55.167892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.295 [2024-11-17 09:36:55.167946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.295 qpair failed and we were unable to recover it. 00:36:50.295 [2024-11-17 09:36:55.168076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.295 [2024-11-17 09:36:55.168109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.295 qpair failed and we were unable to recover it. 00:36:50.295 [2024-11-17 09:36:55.168270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.295 [2024-11-17 09:36:55.168305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.295 qpair failed and we were unable to recover it. 00:36:50.295 [2024-11-17 09:36:55.168473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.295 [2024-11-17 09:36:55.168528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.295 qpair failed and we were unable to recover it. 00:36:50.295 [2024-11-17 09:36:55.168696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.295 [2024-11-17 09:36:55.168744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.295 qpair failed and we were unable to recover it. 00:36:50.295 [2024-11-17 09:36:55.168969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.295 [2024-11-17 09:36:55.169028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.295 qpair failed and we were unable to recover it. 
00:36:50.295 [2024-11-17 09:36:55.169216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.295 [2024-11-17 09:36:55.169271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.295 qpair failed and we were unable to recover it. 00:36:50.295 [2024-11-17 09:36:55.169409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.295 [2024-11-17 09:36:55.169445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.295 qpair failed and we were unable to recover it. 00:36:50.295 [2024-11-17 09:36:55.169624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.295 [2024-11-17 09:36:55.169680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.295 qpair failed and we were unable to recover it. 00:36:50.295 [2024-11-17 09:36:55.169870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.295 [2024-11-17 09:36:55.169931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.295 qpair failed and we were unable to recover it. 00:36:50.295 [2024-11-17 09:36:55.170183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.295 [2024-11-17 09:36:55.170238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.295 qpair failed and we were unable to recover it. 00:36:50.295 [2024-11-17 09:36:55.170378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.295 [2024-11-17 09:36:55.170413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.295 qpair failed and we were unable to recover it. 00:36:50.295 [2024-11-17 09:36:55.170548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.295 [2024-11-17 09:36:55.170586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.295 qpair failed and we were unable to recover it. 00:36:50.295 [2024-11-17 09:36:55.170718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.295 [2024-11-17 09:36:55.170752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.295 qpair failed and we were unable to recover it. 00:36:50.295 [2024-11-17 09:36:55.170891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.295 [2024-11-17 09:36:55.170929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.295 qpair failed and we were unable to recover it. 00:36:50.295 [2024-11-17 09:36:55.171074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.295 [2024-11-17 09:36:55.171108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.295 qpair failed and we were unable to recover it. 
00:36:50.295 [2024-11-17 09:36:55.171278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.295 [2024-11-17 09:36:55.171325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.295 qpair failed and we were unable to recover it. 00:36:50.295 [2024-11-17 09:36:55.171483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.295 [2024-11-17 09:36:55.171519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.295 qpair failed and we were unable to recover it. 00:36:50.295 [2024-11-17 09:36:55.171625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.295 [2024-11-17 09:36:55.171658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.295 qpair failed and we were unable to recover it. 00:36:50.295 [2024-11-17 09:36:55.171793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.295 [2024-11-17 09:36:55.171826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.295 qpair failed and we were unable to recover it. 00:36:50.295 [2024-11-17 09:36:55.171956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.295 [2024-11-17 09:36:55.171990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.295 qpair failed and we were unable to recover it. 00:36:50.295 [2024-11-17 09:36:55.172136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.295 [2024-11-17 09:36:55.172184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.295 qpair failed and we were unable to recover it. 00:36:50.295 [2024-11-17 09:36:55.172299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.295 [2024-11-17 09:36:55.172343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.295 qpair failed and we were unable to recover it. 00:36:50.295 [2024-11-17 09:36:55.172542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.295 [2024-11-17 09:36:55.172590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.295 qpair failed and we were unable to recover it. 00:36:50.295 [2024-11-17 09:36:55.172785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.295 [2024-11-17 09:36:55.172837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.295 qpair failed and we were unable to recover it. 00:36:50.295 [2024-11-17 09:36:55.173048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.295 [2024-11-17 09:36:55.173105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.295 qpair failed and we were unable to recover it. 
00:36:50.295 [2024-11-17 09:36:55.173218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.295 [2024-11-17 09:36:55.173251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.295 qpair failed and we were unable to recover it. 00:36:50.295 [2024-11-17 09:36:55.173382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.295 [2024-11-17 09:36:55.173417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.295 qpair failed and we were unable to recover it. 00:36:50.295 [2024-11-17 09:36:55.173584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.295 [2024-11-17 09:36:55.173635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.295 qpair failed and we were unable to recover it. 00:36:50.295 [2024-11-17 09:36:55.173779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.296 [2024-11-17 09:36:55.173831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.296 qpair failed and we were unable to recover it. 00:36:50.296 [2024-11-17 09:36:55.173939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.296 [2024-11-17 09:36:55.173975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.296 qpair failed and we were unable to recover it. 00:36:50.296 [2024-11-17 09:36:55.174128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.296 [2024-11-17 09:36:55.174176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.296 qpair failed and we were unable to recover it. 00:36:50.296 [2024-11-17 09:36:55.174323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.296 [2024-11-17 09:36:55.174358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.296 qpair failed and we were unable to recover it. 00:36:50.296 [2024-11-17 09:36:55.174528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.296 [2024-11-17 09:36:55.174576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.296 qpair failed and we were unable to recover it. 00:36:50.296 [2024-11-17 09:36:55.174740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.296 [2024-11-17 09:36:55.174794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.296 qpair failed and we were unable to recover it. 00:36:50.296 [2024-11-17 09:36:55.174923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.296 [2024-11-17 09:36:55.174977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.296 qpair failed and we were unable to recover it. 
00:36:50.296 [2024-11-17 09:36:55.175109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.296 [2024-11-17 09:36:55.175142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.296 qpair failed and we were unable to recover it. 00:36:50.296 [2024-11-17 09:36:55.175244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.296 [2024-11-17 09:36:55.175278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.296 qpair failed and we were unable to recover it. 00:36:50.296 [2024-11-17 09:36:55.175431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.296 [2024-11-17 09:36:55.175466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.296 qpair failed and we were unable to recover it. 00:36:50.296 [2024-11-17 09:36:55.175612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.296 [2024-11-17 09:36:55.175649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.296 qpair failed and we were unable to recover it. 00:36:50.296 [2024-11-17 09:36:55.175777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.296 [2024-11-17 09:36:55.175825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.296 qpair failed and we were unable to recover it. 00:36:50.296 [2024-11-17 09:36:55.175969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.296 [2024-11-17 09:36:55.176005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.296 qpair failed and we were unable to recover it. 00:36:50.296 [2024-11-17 09:36:55.176117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.296 [2024-11-17 09:36:55.176152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.296 qpair failed and we were unable to recover it. 00:36:50.296 [2024-11-17 09:36:55.176283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.296 [2024-11-17 09:36:55.176318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.296 qpair failed and we were unable to recover it. 00:36:50.296 [2024-11-17 09:36:55.176502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.296 [2024-11-17 09:36:55.176551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.296 qpair failed and we were unable to recover it. 00:36:50.296 [2024-11-17 09:36:55.176717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.296 [2024-11-17 09:36:55.176757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.296 qpair failed and we were unable to recover it. 
00:36:50.296 [2024-11-17 09:36:55.177017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.296 [2024-11-17 09:36:55.177075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.296 qpair failed and we were unable to recover it. 00:36:50.296 [2024-11-17 09:36:55.177232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.296 [2024-11-17 09:36:55.177272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.296 qpair failed and we were unable to recover it. 00:36:50.296 [2024-11-17 09:36:55.177448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.296 [2024-11-17 09:36:55.177485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.296 qpair failed and we were unable to recover it. 00:36:50.296 [2024-11-17 09:36:55.177643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.296 [2024-11-17 09:36:55.177697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.296 qpair failed and we were unable to recover it. 00:36:50.296 [2024-11-17 09:36:55.177875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.296 [2024-11-17 09:36:55.177914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.296 qpair failed and we were unable to recover it. 00:36:50.296 [2024-11-17 09:36:55.178033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.296 [2024-11-17 09:36:55.178071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.296 qpair failed and we were unable to recover it. 00:36:50.296 [2024-11-17 09:36:55.178256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.296 [2024-11-17 09:36:55.178289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.296 qpair failed and we were unable to recover it. 00:36:50.296 [2024-11-17 09:36:55.178433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.296 [2024-11-17 09:36:55.178481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.296 qpair failed and we were unable to recover it. 00:36:50.296 [2024-11-17 09:36:55.178601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.296 [2024-11-17 09:36:55.178652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.296 qpair failed and we were unable to recover it. 00:36:50.296 [2024-11-17 09:36:55.178770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.296 [2024-11-17 09:36:55.178822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.296 qpair failed and we were unable to recover it. 
00:36:50.296 [2024-11-17 09:36:55.179024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.296 [2024-11-17 09:36:55.179087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.296 qpair failed and we were unable to recover it. 00:36:50.296 [2024-11-17 09:36:55.179194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.296 [2024-11-17 09:36:55.179231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.296 qpair failed and we were unable to recover it. 00:36:50.296 [2024-11-17 09:36:55.179382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.296 [2024-11-17 09:36:55.179434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.296 qpair failed and we were unable to recover it. 00:36:50.296 [2024-11-17 09:36:55.179562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.296 [2024-11-17 09:36:55.179597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.296 qpair failed and we were unable to recover it. 00:36:50.296 [2024-11-17 09:36:55.179723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.296 [2024-11-17 09:36:55.179757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.296 qpair failed and we were unable to recover it. 00:36:50.296 [2024-11-17 09:36:55.179890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.296 [2024-11-17 09:36:55.179928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.296 qpair failed and we were unable to recover it. 00:36:50.296 [2024-11-17 09:36:55.180058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.296 [2024-11-17 09:36:55.180141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.296 qpair failed and we were unable to recover it. 00:36:50.296 [2024-11-17 09:36:55.180312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.296 [2024-11-17 09:36:55.180349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.296 qpair failed and we were unable to recover it. 00:36:50.296 [2024-11-17 09:36:55.180480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.296 [2024-11-17 09:36:55.180513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.296 qpair failed and we were unable to recover it. 00:36:50.296 [2024-11-17 09:36:55.180627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.296 [2024-11-17 09:36:55.180660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.296 qpair failed and we were unable to recover it. 
00:36:50.296 [2024-11-17 09:36:55.180764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.296 [2024-11-17 09:36:55.180817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.297 qpair failed and we were unable to recover it. 00:36:50.297 [2024-11-17 09:36:55.180965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.297 [2024-11-17 09:36:55.181002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.297 qpair failed and we were unable to recover it. 00:36:50.297 [2024-11-17 09:36:55.181156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.297 [2024-11-17 09:36:55.181193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.297 qpair failed and we were unable to recover it. 00:36:50.297 [2024-11-17 09:36:55.181404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.297 [2024-11-17 09:36:55.181453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.297 qpair failed and we were unable to recover it. 00:36:50.297 [2024-11-17 09:36:55.181574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.297 [2024-11-17 09:36:55.181622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.297 qpair failed and we were unable to recover it. 00:36:50.297 [2024-11-17 09:36:55.181775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.297 [2024-11-17 09:36:55.181831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.297 qpair failed and we were unable to recover it. 00:36:50.297 [2024-11-17 09:36:55.181953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.297 [2024-11-17 09:36:55.181991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.297 qpair failed and we were unable to recover it. 00:36:50.297 [2024-11-17 09:36:55.182156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.297 [2024-11-17 09:36:55.182194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.297 qpair failed and we were unable to recover it. 00:36:50.297 [2024-11-17 09:36:55.182393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.297 [2024-11-17 09:36:55.182444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.297 qpair failed and we were unable to recover it. 00:36:50.297 [2024-11-17 09:36:55.182595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.297 [2024-11-17 09:36:55.182649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.297 qpair failed and we were unable to recover it. 
00:36:50.297 [2024-11-17 09:36:55.182911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.297 [2024-11-17 09:36:55.182966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.297 qpair failed and we were unable to recover it. 00:36:50.297 [2024-11-17 09:36:55.183082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.297 [2024-11-17 09:36:55.183121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.297 qpair failed and we were unable to recover it. 00:36:50.297 [2024-11-17 09:36:55.183274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.297 [2024-11-17 09:36:55.183308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.297 qpair failed and we were unable to recover it. 00:36:50.297 [2024-11-17 09:36:55.183518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.297 [2024-11-17 09:36:55.183566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.297 qpair failed and we were unable to recover it. 00:36:50.297 [2024-11-17 09:36:55.183763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.297 [2024-11-17 09:36:55.183816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.297 qpair failed and we were unable to recover it. 00:36:50.297 [2024-11-17 09:36:55.184073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.297 [2024-11-17 09:36:55.184138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.297 qpair failed and we were unable to recover it. 00:36:50.297 [2024-11-17 09:36:55.184303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.297 [2024-11-17 09:36:55.184338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.297 qpair failed and we were unable to recover it. 00:36:50.297 [2024-11-17 09:36:55.184470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.297 [2024-11-17 09:36:55.184518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.297 qpair failed and we were unable to recover it. 00:36:50.297 [2024-11-17 09:36:55.184672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.297 [2024-11-17 09:36:55.184721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.297 qpair failed and we were unable to recover it. 00:36:50.297 [2024-11-17 09:36:55.184864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.297 [2024-11-17 09:36:55.184918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.297 qpair failed and we were unable to recover it. 
00:36:50.297 [2024-11-17 09:36:55.185128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.297 [2024-11-17 09:36:55.185181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.297 qpair failed and we were unable to recover it. 00:36:50.297 [2024-11-17 09:36:55.185324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.297 [2024-11-17 09:36:55.185358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.297 qpair failed and we were unable to recover it. 00:36:50.297 [2024-11-17 09:36:55.185516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.297 [2024-11-17 09:36:55.185550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.297 qpair failed and we were unable to recover it. 00:36:50.297 [2024-11-17 09:36:55.185713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.297 [2024-11-17 09:36:55.185752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.297 qpair failed and we were unable to recover it. 00:36:50.297 [2024-11-17 09:36:55.185912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.297 [2024-11-17 09:36:55.185950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.297 qpair failed and we were unable to recover it. 00:36:50.297 [2024-11-17 09:36:55.186126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.297 [2024-11-17 09:36:55.186184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.297 qpair failed and we were unable to recover it. 00:36:50.297 [2024-11-17 09:36:55.186363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.297 [2024-11-17 09:36:55.186404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.297 qpair failed and we were unable to recover it. 00:36:50.297 [2024-11-17 09:36:55.186521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.297 [2024-11-17 09:36:55.186554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.297 qpair failed and we were unable to recover it. 00:36:50.297 [2024-11-17 09:36:55.186748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.297 [2024-11-17 09:36:55.186802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.297 qpair failed and we were unable to recover it. 00:36:50.297 [2024-11-17 09:36:55.186994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.297 [2024-11-17 09:36:55.187061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.297 qpair failed and we were unable to recover it. 
00:36:50.297 [2024-11-17 09:36:55.187300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.297 [2024-11-17 09:36:55.187339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.297 qpair failed and we were unable to recover it. 00:36:50.297 [2024-11-17 09:36:55.187502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.297 [2024-11-17 09:36:55.187536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.297 qpair failed and we were unable to recover it. 00:36:50.297 [2024-11-17 09:36:55.187671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.297 [2024-11-17 09:36:55.187704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.297 qpair failed and we were unable to recover it. 00:36:50.297 [2024-11-17 09:36:55.187804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.297 [2024-11-17 09:36:55.187851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.297 qpair failed and we were unable to recover it. 00:36:50.297 [2024-11-17 09:36:55.188030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.297 [2024-11-17 09:36:55.188126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.297 qpair failed and we were unable to recover it. 00:36:50.297 [2024-11-17 09:36:55.188255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.297 [2024-11-17 09:36:55.188290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.297 qpair failed and we were unable to recover it. 00:36:50.297 [2024-11-17 09:36:55.188408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.297 [2024-11-17 09:36:55.188449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.297 qpair failed and we were unable to recover it. 00:36:50.297 [2024-11-17 09:36:55.188604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.297 [2024-11-17 09:36:55.188639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.298 qpair failed and we were unable to recover it. 00:36:50.298 [2024-11-17 09:36:55.188840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.298 [2024-11-17 09:36:55.188907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.298 qpair failed and we were unable to recover it. 00:36:50.298 [2024-11-17 09:36:55.189021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.298 [2024-11-17 09:36:55.189057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.298 qpair failed and we were unable to recover it. 
00:36:50.298 [2024-11-17 09:36:55.189195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.298 [2024-11-17 09:36:55.189230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.298 qpair failed and we were unable to recover it. 00:36:50.298 [2024-11-17 09:36:55.189355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.298 [2024-11-17 09:36:55.189396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.298 qpair failed and we were unable to recover it. 00:36:50.298 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3141398 Killed "${NVMF_APP[@]}" "$@" 00:36:50.298 [2024-11-17 09:36:55.189594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.298 [2024-11-17 09:36:55.189647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.298 qpair failed and we were unable to recover it. 00:36:50.298 [2024-11-17 09:36:55.189830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.298 [2024-11-17 09:36:55.189885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.298 qpair failed and we were unable to recover it. 00:36:50.298 [2024-11-17 09:36:55.190101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.298 [2024-11-17 09:36:55.190197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.298 qpair failed and we were unable to recover it. 00:36:50.298 09:36:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:36:50.298 [2024-11-17 09:36:55.190337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.298 [2024-11-17 09:36:55.190378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.298 qpair failed and we were unable to recover it. 00:36:50.298 [2024-11-17 09:36:55.190553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.298 09:36:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:36:50.298 [2024-11-17 09:36:55.190593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.298 qpair failed and we were unable to recover it. 00:36:50.298 [2024-11-17 09:36:55.190787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.298 09:36:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:50.298 [2024-11-17 09:36:55.190853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.298 qpair failed and we were unable to recover it. 
00:36:50.298 [2024-11-17 09:36:55.191080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.298 [2024-11-17 09:36:55.191141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.298 09:36:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:50.298 qpair failed and we were unable to recover it. 00:36:50.298 [2024-11-17 09:36:55.191302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.298 [2024-11-17 09:36:55.191337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.298 qpair failed and we were unable to recover it. 00:36:50.298 09:36:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:50.298 [2024-11-17 09:36:55.191493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.298 [2024-11-17 09:36:55.191529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.298 qpair failed and we were unable to recover it. 00:36:50.298 [2024-11-17 09:36:55.191666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.298 [2024-11-17 09:36:55.191718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.298 qpair failed and we were unable to recover it. 00:36:50.298 [2024-11-17 09:36:55.191959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.298 [2024-11-17 09:36:55.191993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.298 qpair failed and we were unable to recover it. 00:36:50.298 [2024-11-17 09:36:55.192256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.298 [2024-11-17 09:36:55.192314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.298 qpair failed and we were unable to recover it. 00:36:50.298 [2024-11-17 09:36:55.192453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.298 [2024-11-17 09:36:55.192487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.298 qpair failed and we were unable to recover it. 00:36:50.298 [2024-11-17 09:36:55.192591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.298 [2024-11-17 09:36:55.192627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.298 qpair failed and we were unable to recover it. 00:36:50.298 [2024-11-17 09:36:55.192895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.298 [2024-11-17 09:36:55.192962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.298 qpair failed and we were unable to recover it. 
00:36:50.298 [2024-11-17 09:36:55.193155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.298 [2024-11-17 09:36:55.193217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.298 qpair failed and we were unable to recover it. 00:36:50.298 [2024-11-17 09:36:55.193359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.298 [2024-11-17 09:36:55.193400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.298 qpair failed and we were unable to recover it. 00:36:50.298 [2024-11-17 09:36:55.193501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.298 [2024-11-17 09:36:55.193534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.298 qpair failed and we were unable to recover it. 00:36:50.298 [2024-11-17 09:36:55.193699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.298 [2024-11-17 09:36:55.193758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.298 qpair failed and we were unable to recover it. 00:36:50.298 [2024-11-17 09:36:55.193968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.298 [2024-11-17 09:36:55.194025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.298 qpair failed and we were unable to recover it. 00:36:50.298 [2024-11-17 09:36:55.194200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.298 [2024-11-17 09:36:55.194237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.298 qpair failed and we were unable to recover it. 00:36:50.298 [2024-11-17 09:36:55.194351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.298 [2024-11-17 09:36:55.194394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.298 qpair failed and we were unable to recover it. 00:36:50.298 [2024-11-17 09:36:55.194548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.298 [2024-11-17 09:36:55.194582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.298 qpair failed and we were unable to recover it. 00:36:50.298 [2024-11-17 09:36:55.194720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.298 [2024-11-17 09:36:55.194774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.298 qpair failed and we were unable to recover it. 00:36:50.298 [2024-11-17 09:36:55.195006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.298 [2024-11-17 09:36:55.195107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.298 qpair failed and we were unable to recover it. 
00:36:50.298 [2024-11-17 09:36:55.195231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.298 [2024-11-17 09:36:55.195268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.298 qpair failed and we were unable to recover it. 00:36:50.298 [2024-11-17 09:36:55.195454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.298 [2024-11-17 09:36:55.195503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.298 qpair failed and we were unable to recover it. 00:36:50.298 [2024-11-17 09:36:55.195652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.298 [2024-11-17 09:36:55.195722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.298 qpair failed and we were unable to recover it. 00:36:50.298 [2024-11-17 09:36:55.195851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.298 [2024-11-17 09:36:55.195892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.298 qpair failed and we were unable to recover it. 00:36:50.298 [2024-11-17 09:36:55.196015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.298 [2024-11-17 09:36:55.196054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.298 qpair failed and we were unable to recover it. 00:36:50.298 [2024-11-17 09:36:55.196180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.298 [2024-11-17 09:36:55.196219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.299 qpair failed and we were unable to recover it. 00:36:50.299 [2024-11-17 09:36:55.196372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.299 [2024-11-17 09:36:55.196429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.299 qpair failed and we were unable to recover it. 00:36:50.299 [2024-11-17 09:36:55.196544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.299 [2024-11-17 09:36:55.196578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.299 qpair failed and we were unable to recover it. 
00:36:50.299 09:36:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3142076 00:36:50.299 09:36:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:36:50.299 [2024-11-17 09:36:55.196799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.299 09:36:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3142076 00:36:50.299 [2024-11-17 09:36:55.196866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.299 qpair failed and we were unable to recover it. 00:36:50.299 09:36:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3142076 ']' 00:36:50.299 [2024-11-17 09:36:55.197162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.299 [2024-11-17 09:36:55.197222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.299 qpair failed and we were unable to recover it. 00:36:50.299 09:36:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:50.299 [2024-11-17 09:36:55.197394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.299 [2024-11-17 09:36:55.197438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.299 09:36:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:50.299 qpair failed and we were unable to recover it. 00:36:50.299 [2024-11-17 09:36:55.197585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.299 09:36:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:50.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:50.299 [2024-11-17 09:36:55.197622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.299 qpair failed and we were unable to recover it. 00:36:50.299 [2024-11-17 09:36:55.197735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.299 09:36:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:50.299 [2024-11-17 09:36:55.197771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.299 qpair failed and we were unable to recover it. 
00:36:50.299 09:36:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:50.299 [2024-11-17 09:36:55.197915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.299 [2024-11-17 09:36:55.197952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.299 qpair failed and we were unable to recover it. 00:36:50.299 [2024-11-17 09:36:55.198133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.299 [2024-11-17 09:36:55.198190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.299 qpair failed and we were unable to recover it. 00:36:50.299 [2024-11-17 09:36:55.198326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.299 [2024-11-17 09:36:55.198374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.299 qpair failed and we were unable to recover it. 00:36:50.299 [2024-11-17 09:36:55.198497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.299 [2024-11-17 09:36:55.198546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.299 qpair failed and we were unable to recover it. 00:36:50.299 [2024-11-17 09:36:55.198701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.299 [2024-11-17 09:36:55.198735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.299 qpair failed and we were unable to recover it. 00:36:50.299 [2024-11-17 09:36:55.198867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.299 [2024-11-17 09:36:55.198900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.299 qpair failed and we were unable to recover it. 00:36:50.299 [2024-11-17 09:36:55.199030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.299 [2024-11-17 09:36:55.199082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.299 qpair failed and we were unable to recover it. 00:36:50.299 [2024-11-17 09:36:55.199258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.299 [2024-11-17 09:36:55.199295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.299 qpair failed and we were unable to recover it. 00:36:50.299 [2024-11-17 09:36:55.199430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.299 [2024-11-17 09:36:55.199465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.299 qpair failed and we were unable to recover it. 
00:36:50.299 [2024-11-17 09:36:55.199616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.299 [2024-11-17 09:36:55.199652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.299 qpair failed and we were unable to recover it. 00:36:50.299 [2024-11-17 09:36:55.199827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.299 [2024-11-17 09:36:55.199864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.299 qpair failed and we were unable to recover it. 00:36:50.299 [2024-11-17 09:36:55.200010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.299 [2024-11-17 09:36:55.200048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.299 qpair failed and we were unable to recover it. 00:36:50.299 [2024-11-17 09:36:55.200190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.299 [2024-11-17 09:36:55.200227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.299 qpair failed and we were unable to recover it. 00:36:50.299 [2024-11-17 09:36:55.200415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.299 [2024-11-17 09:36:55.200450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.299 qpair failed and we were unable to recover it. 00:36:50.299 [2024-11-17 09:36:55.200586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.299 [2024-11-17 09:36:55.200619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.299 qpair failed and we were unable to recover it. 00:36:50.299 [2024-11-17 09:36:55.200737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.299 [2024-11-17 09:36:55.200775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.299 qpair failed and we were unable to recover it. 00:36:50.299 [2024-11-17 09:36:55.200931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.299 [2024-11-17 09:36:55.200969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.299 qpair failed and we were unable to recover it. 00:36:50.299 [2024-11-17 09:36:55.201175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.299 [2024-11-17 09:36:55.201227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.299 qpair failed and we were unable to recover it. 00:36:50.299 [2024-11-17 09:36:55.201376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.299 [2024-11-17 09:36:55.201412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.299 qpair failed and we were unable to recover it. 
00:36:50.299 [2024-11-17 09:36:55.201548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.299 [2024-11-17 09:36:55.201582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.299 qpair failed and we were unable to recover it. 00:36:50.299 [2024-11-17 09:36:55.201722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.299 [2024-11-17 09:36:55.201775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.300 qpair failed and we were unable to recover it. 00:36:50.300 [2024-11-17 09:36:55.201915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.300 [2024-11-17 09:36:55.201968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.300 qpair failed and we were unable to recover it. 00:36:50.300 [2024-11-17 09:36:55.202146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.300 [2024-11-17 09:36:55.202185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.300 qpair failed and we were unable to recover it. 00:36:50.300 [2024-11-17 09:36:55.202331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.300 [2024-11-17 09:36:55.202384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.300 qpair failed and we were unable to recover it. 00:36:50.300 [2024-11-17 09:36:55.202539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.300 [2024-11-17 09:36:55.202573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.300 qpair failed and we were unable to recover it. 00:36:50.300 [2024-11-17 09:36:55.202696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.300 [2024-11-17 09:36:55.202765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.300 qpair failed and we were unable to recover it. 00:36:50.300 [2024-11-17 09:36:55.202911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.300 [2024-11-17 09:36:55.202964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.300 qpair failed and we were unable to recover it. 00:36:50.300 [2024-11-17 09:36:55.203117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.300 [2024-11-17 09:36:55.203155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.300 qpair failed and we were unable to recover it. 00:36:50.300 [2024-11-17 09:36:55.203275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.300 [2024-11-17 09:36:55.203314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.300 qpair failed and we were unable to recover it. 
00:36:50.300 [2024-11-17 09:36:55.203484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.300 [2024-11-17 09:36:55.203519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.300 qpair failed and we were unable to recover it. 00:36:50.300 [2024-11-17 09:36:55.203666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.300 [2024-11-17 09:36:55.203731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.300 qpair failed and we were unable to recover it. 00:36:50.300 [2024-11-17 09:36:55.203924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.300 [2024-11-17 09:36:55.203963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.300 qpair failed and we were unable to recover it. 00:36:50.300 [2024-11-17 09:36:55.204080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.300 [2024-11-17 09:36:55.204117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.300 qpair failed and we were unable to recover it. 00:36:50.300 [2024-11-17 09:36:55.204272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.300 [2024-11-17 09:36:55.204309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.300 qpair failed and we were unable to recover it. 00:36:50.300 [2024-11-17 09:36:55.204472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.300 [2024-11-17 09:36:55.204524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.300 qpair failed and we were unable to recover it. 00:36:50.300 [2024-11-17 09:36:55.204699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.300 [2024-11-17 09:36:55.204753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.300 qpair failed and we were unable to recover it. 00:36:50.300 [2024-11-17 09:36:55.204966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.300 [2024-11-17 09:36:55.205025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.300 qpair failed and we were unable to recover it. 00:36:50.300 [2024-11-17 09:36:55.205207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.300 [2024-11-17 09:36:55.205245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.300 qpair failed and we were unable to recover it. 00:36:50.300 [2024-11-17 09:36:55.205405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.300 [2024-11-17 09:36:55.205440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.300 qpair failed and we were unable to recover it. 
00:36:50.300 [2024-11-17 09:36:55.205580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.300 [2024-11-17 09:36:55.205614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.300 qpair failed and we were unable to recover it. 00:36:50.300 [2024-11-17 09:36:55.205725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.300 [2024-11-17 09:36:55.205763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.300 qpair failed and we were unable to recover it. 00:36:50.300 [2024-11-17 09:36:55.206013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.300 [2024-11-17 09:36:55.206068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.300 qpair failed and we were unable to recover it. 00:36:50.300 [2024-11-17 09:36:55.206263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.300 [2024-11-17 09:36:55.206311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.300 qpair failed and we were unable to recover it. 00:36:50.300 [2024-11-17 09:36:55.206483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.300 [2024-11-17 09:36:55.206517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.300 qpair failed and we were unable to recover it. 00:36:50.300 [2024-11-17 09:36:55.206617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.300 [2024-11-17 09:36:55.206670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.300 qpair failed and we were unable to recover it. 00:36:50.300 [2024-11-17 09:36:55.206795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.300 [2024-11-17 09:36:55.206847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.300 qpair failed and we were unable to recover it. 00:36:50.300 [2024-11-17 09:36:55.206967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.300 [2024-11-17 09:36:55.207006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.300 qpair failed and we were unable to recover it. 00:36:50.300 [2024-11-17 09:36:55.207156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.300 [2024-11-17 09:36:55.207195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.300 qpair failed and we were unable to recover it. 00:36:50.300 [2024-11-17 09:36:55.207351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.300 [2024-11-17 09:36:55.207391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.300 qpair failed and we were unable to recover it. 
00:36:50.300 [2024-11-17 09:36:55.207520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.300 [2024-11-17 09:36:55.207554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.300 qpair failed and we were unable to recover it. 00:36:50.300 [2024-11-17 09:36:55.207684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.300 [2024-11-17 09:36:55.207719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.300 qpair failed and we were unable to recover it. 00:36:50.300 [2024-11-17 09:36:55.207887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.300 [2024-11-17 09:36:55.207952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.300 qpair failed and we were unable to recover it. 00:36:50.300 [2024-11-17 09:36:55.208143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.300 [2024-11-17 09:36:55.208181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.300 qpair failed and we were unable to recover it. 00:36:50.300 [2024-11-17 09:36:55.208317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.300 [2024-11-17 09:36:55.208381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.300 qpair failed and we were unable to recover it. 00:36:50.300 [2024-11-17 09:36:55.208546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.300 [2024-11-17 09:36:55.208581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.300 qpair failed and we were unable to recover it. 00:36:50.300 [2024-11-17 09:36:55.208724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.300 [2024-11-17 09:36:55.208758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.300 qpair failed and we were unable to recover it. 00:36:50.300 [2024-11-17 09:36:55.208924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.300 [2024-11-17 09:36:55.208962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.300 qpair failed and we were unable to recover it. 00:36:50.300 [2024-11-17 09:36:55.209135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.300 [2024-11-17 09:36:55.209173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.301 qpair failed and we were unable to recover it. 00:36:50.301 [2024-11-17 09:36:55.209328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.301 [2024-11-17 09:36:55.209362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.301 qpair failed and we were unable to recover it. 
00:36:50.301 [2024-11-17 09:36:55.209494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.301 [2024-11-17 09:36:55.209542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.301 qpair failed and we were unable to recover it. 00:36:50.301 [2024-11-17 09:36:55.209703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.301 [2024-11-17 09:36:55.209768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.301 qpair failed and we were unable to recover it. 00:36:50.301 [2024-11-17 09:36:55.209898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.301 [2024-11-17 09:36:55.209938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.301 qpair failed and we were unable to recover it. 00:36:50.301 [2024-11-17 09:36:55.210082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.301 [2024-11-17 09:36:55.210120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.301 qpair failed and we were unable to recover it. 00:36:50.301 [2024-11-17 09:36:55.210261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.301 [2024-11-17 09:36:55.210298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.301 qpair failed and we were unable to recover it. 00:36:50.301 [2024-11-17 09:36:55.210439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.301 [2024-11-17 09:36:55.210474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.301 qpair failed and we were unable to recover it. 00:36:50.301 [2024-11-17 09:36:55.210634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.301 [2024-11-17 09:36:55.210668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.301 qpair failed and we were unable to recover it. 00:36:50.301 [2024-11-17 09:36:55.210828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.301 [2024-11-17 09:36:55.210865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.301 qpair failed and we were unable to recover it. 00:36:50.301 [2024-11-17 09:36:55.211079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.301 [2024-11-17 09:36:55.211115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.301 qpair failed and we were unable to recover it. 00:36:50.301 [2024-11-17 09:36:55.211260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.301 [2024-11-17 09:36:55.211298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.301 qpair failed and we were unable to recover it. 
00:36:50.301 [2024-11-17 09:36:55.211471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.301 [2024-11-17 09:36:55.211510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.301 qpair failed and we were unable to recover it. 00:36:50.301 [2024-11-17 09:36:55.211669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.301 [2024-11-17 09:36:55.211706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.301 qpair failed and we were unable to recover it. 00:36:50.301 [2024-11-17 09:36:55.211861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.301 [2024-11-17 09:36:55.211959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.301 qpair failed and we were unable to recover it. 00:36:50.301 [2024-11-17 09:36:55.212166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.301 [2024-11-17 09:36:55.212222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.301 qpair failed and we were unable to recover it. 00:36:50.301 [2024-11-17 09:36:55.212387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.301 [2024-11-17 09:36:55.212422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.301 qpair failed and we were unable to recover it. 00:36:50.301 [2024-11-17 09:36:55.212524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.301 [2024-11-17 09:36:55.212558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.301 qpair failed and we were unable to recover it. 00:36:50.301 [2024-11-17 09:36:55.212689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.301 [2024-11-17 09:36:55.212728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.301 qpair failed and we were unable to recover it. 00:36:50.301 [2024-11-17 09:36:55.212975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.301 [2024-11-17 09:36:55.213013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.301 qpair failed and we were unable to recover it. 00:36:50.301 [2024-11-17 09:36:55.213218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.301 [2024-11-17 09:36:55.213252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.301 qpair failed and we were unable to recover it. 00:36:50.301 [2024-11-17 09:36:55.213457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.301 [2024-11-17 09:36:55.213493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.301 qpair failed and we were unable to recover it. 
00:36:50.301 [2024-11-17 09:36:55.213608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.301 [2024-11-17 09:36:55.213654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.301 qpair failed and we were unable to recover it. 00:36:50.301 [2024-11-17 09:36:55.213792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.301 [2024-11-17 09:36:55.213826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.301 qpair failed and we were unable to recover it. 00:36:50.301 [2024-11-17 09:36:55.214050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.301 [2024-11-17 09:36:55.214088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.301 qpair failed and we were unable to recover it. 00:36:50.301 [2024-11-17 09:36:55.214258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.301 [2024-11-17 09:36:55.214301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.301 qpair failed and we were unable to recover it. 00:36:50.301 [2024-11-17 09:36:55.214441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.301 [2024-11-17 09:36:55.214475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.301 qpair failed and we were unable to recover it. 00:36:50.301 [2024-11-17 09:36:55.214591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.301 [2024-11-17 09:36:55.214627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.301 qpair failed and we were unable to recover it. 00:36:50.301 [2024-11-17 09:36:55.214782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.301 [2024-11-17 09:36:55.214820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.301 qpair failed and we were unable to recover it. 00:36:50.301 [2024-11-17 09:36:55.214962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.301 [2024-11-17 09:36:55.215000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.301 qpair failed and we were unable to recover it. 00:36:50.301 [2024-11-17 09:36:55.215146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.301 [2024-11-17 09:36:55.215184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.301 qpair failed and we were unable to recover it. 00:36:50.301 [2024-11-17 09:36:55.215340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.301 [2024-11-17 09:36:55.215380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.301 qpair failed and we were unable to recover it. 
00:36:50.301 [2024-11-17 09:36:55.215511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.301 [2024-11-17 09:36:55.215544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.301 qpair failed and we were unable to recover it. 00:36:50.301 [2024-11-17 09:36:55.215673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.301 [2024-11-17 09:36:55.215709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.301 qpair failed and we were unable to recover it. 00:36:50.301 [2024-11-17 09:36:55.215895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.301 [2024-11-17 09:36:55.215932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.301 qpair failed and we were unable to recover it. 00:36:50.301 [2024-11-17 09:36:55.216055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.301 [2024-11-17 09:36:55.216094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.301 qpair failed and we were unable to recover it. 00:36:50.301 [2024-11-17 09:36:55.216234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.302 [2024-11-17 09:36:55.216271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.302 qpair failed and we were unable to recover it. 00:36:50.302 [2024-11-17 09:36:55.216427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.302 [2024-11-17 09:36:55.216478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.302 qpair failed and we were unable to recover it. 00:36:50.302 [2024-11-17 09:36:55.216596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.302 [2024-11-17 09:36:55.216633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.302 qpair failed and we were unable to recover it. 00:36:50.302 [2024-11-17 09:36:55.216863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.302 [2024-11-17 09:36:55.216902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.302 qpair failed and we were unable to recover it. 00:36:50.302 [2024-11-17 09:36:55.217049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.302 [2024-11-17 09:36:55.217088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.302 qpair failed and we were unable to recover it. 00:36:50.302 [2024-11-17 09:36:55.217233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.302 [2024-11-17 09:36:55.217271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.302 qpair failed and we were unable to recover it. 
00:36:50.302 [2024-11-17 09:36:55.217418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.302 [2024-11-17 09:36:55.217454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.302 qpair failed and we were unable to recover it. 00:36:50.302 [2024-11-17 09:36:55.217561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.302 [2024-11-17 09:36:55.217597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.302 qpair failed and we were unable to recover it. 00:36:50.302 [2024-11-17 09:36:55.217889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.302 [2024-11-17 09:36:55.217964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.302 qpair failed and we were unable to recover it. 00:36:50.302 [2024-11-17 09:36:55.218225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.302 [2024-11-17 09:36:55.218289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.302 qpair failed and we were unable to recover it. 00:36:50.302 [2024-11-17 09:36:55.218461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.302 [2024-11-17 09:36:55.218506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.302 qpair failed and we were unable to recover it. 00:36:50.302 [2024-11-17 09:36:55.218643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.302 [2024-11-17 09:36:55.218690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.302 qpair failed and we were unable to recover it. 00:36:50.302 [2024-11-17 09:36:55.218913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.302 [2024-11-17 09:36:55.218982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.302 qpair failed and we were unable to recover it. 00:36:50.302 [2024-11-17 09:36:55.219229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.302 [2024-11-17 09:36:55.219266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.302 qpair failed and we were unable to recover it. 00:36:50.302 [2024-11-17 09:36:55.219406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.302 [2024-11-17 09:36:55.219442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.302 qpair failed and we were unable to recover it. 00:36:50.302 [2024-11-17 09:36:55.219594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.302 [2024-11-17 09:36:55.219648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.302 qpair failed and we were unable to recover it. 
00:36:50.302 [2024-11-17 09:36:55.219774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.302 [2024-11-17 09:36:55.219809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.302 qpair failed and we were unable to recover it. 00:36:50.302 [2024-11-17 09:36:55.219943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.302 [2024-11-17 09:36:55.219978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.302 qpair failed and we were unable to recover it. 00:36:50.302 [2024-11-17 09:36:55.220107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.302 [2024-11-17 09:36:55.220144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.302 qpair failed and we were unable to recover it. 00:36:50.302 [2024-11-17 09:36:55.220306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.302 [2024-11-17 09:36:55.220340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.302 qpair failed and we were unable to recover it. 00:36:50.302 [2024-11-17 09:36:55.220481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.302 [2024-11-17 09:36:55.220516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.302 qpair failed and we were unable to recover it. 00:36:50.302 [2024-11-17 09:36:55.220646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.302 [2024-11-17 09:36:55.220684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.302 qpair failed and we were unable to recover it. 00:36:50.302 [2024-11-17 09:36:55.220790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.302 [2024-11-17 09:36:55.220824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.302 qpair failed and we were unable to recover it. 00:36:50.302 [2024-11-17 09:36:55.221028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.302 [2024-11-17 09:36:55.221092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.302 qpair failed and we were unable to recover it. 00:36:50.302 [2024-11-17 09:36:55.221237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.302 [2024-11-17 09:36:55.221276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.302 qpair failed and we were unable to recover it. 00:36:50.302 [2024-11-17 09:36:55.221420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.302 [2024-11-17 09:36:55.221485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.302 qpair failed and we were unable to recover it. 
00:36:50.302 [2024-11-17 09:36:55.221648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.302 [2024-11-17 09:36:55.221703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.302 qpair failed and we were unable to recover it. 00:36:50.302 [2024-11-17 09:36:55.221968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.302 [2024-11-17 09:36:55.222042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.302 qpair failed and we were unable to recover it. 00:36:50.302 [2024-11-17 09:36:55.222226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.302 [2024-11-17 09:36:55.222265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.302 qpair failed and we were unable to recover it. 00:36:50.302 [2024-11-17 09:36:55.222458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.302 [2024-11-17 09:36:55.222499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.302 qpair failed and we were unable to recover it. 00:36:50.302 [2024-11-17 09:36:55.222628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.302 [2024-11-17 09:36:55.222677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.302 qpair failed and we were unable to recover it. 00:36:50.302 [2024-11-17 09:36:55.222958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.302 [2024-11-17 09:36:55.223023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.302 qpair failed and we were unable to recover it. 00:36:50.302 [2024-11-17 09:36:55.223191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.302 [2024-11-17 09:36:55.223242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.302 qpair failed and we were unable to recover it. 00:36:50.302 [2024-11-17 09:36:55.223443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.302 [2024-11-17 09:36:55.223477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.302 qpair failed and we were unable to recover it. 00:36:50.302 [2024-11-17 09:36:55.223580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.302 [2024-11-17 09:36:55.223615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.302 qpair failed and we were unable to recover it. 00:36:50.302 [2024-11-17 09:36:55.223757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.302 [2024-11-17 09:36:55.223792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.302 qpair failed and we were unable to recover it. 
00:36:50.302 [2024-11-17 09:36:55.223980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.302 [2024-11-17 09:36:55.224034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.302 qpair failed and we were unable to recover it. 00:36:50.302 [2024-11-17 09:36:55.224176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.302 [2024-11-17 09:36:55.224210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.303 qpair failed and we were unable to recover it. 00:36:50.303 [2024-11-17 09:36:55.224374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.303 [2024-11-17 09:36:55.224420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.303 qpair failed and we were unable to recover it. 00:36:50.303 [2024-11-17 09:36:55.224531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.303 [2024-11-17 09:36:55.224567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.303 qpair failed and we were unable to recover it. 00:36:50.303 [2024-11-17 09:36:55.224694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.303 [2024-11-17 09:36:55.224746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.303 qpair failed and we were unable to recover it. 00:36:50.303 [2024-11-17 09:36:55.224910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.303 [2024-11-17 09:36:55.224945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.303 qpair failed and we were unable to recover it. 00:36:50.303 [2024-11-17 09:36:55.225071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.303 [2024-11-17 09:36:55.225107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.303 qpair failed and we were unable to recover it. 00:36:50.303 [2024-11-17 09:36:55.225232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.303 [2024-11-17 09:36:55.225281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.303 qpair failed and we were unable to recover it. 00:36:50.303 [2024-11-17 09:36:55.225468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.303 [2024-11-17 09:36:55.225522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.303 qpair failed and we were unable to recover it. 00:36:50.303 [2024-11-17 09:36:55.225723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.303 [2024-11-17 09:36:55.225785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.303 qpair failed and we were unable to recover it. 
00:36:50.303 [2024-11-17 09:36:55.225903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.303 [2024-11-17 09:36:55.225938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.303 qpair failed and we were unable to recover it. 00:36:50.303 [2024-11-17 09:36:55.226188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.303 [2024-11-17 09:36:55.226254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.303 qpair failed and we were unable to recover it. 00:36:50.303 [2024-11-17 09:36:55.226430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.303 [2024-11-17 09:36:55.226465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.303 qpair failed and we were unable to recover it. 00:36:50.303 [2024-11-17 09:36:55.226624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.303 [2024-11-17 09:36:55.226685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.303 qpair failed and we were unable to recover it. 00:36:50.303 [2024-11-17 09:36:55.226794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.303 [2024-11-17 09:36:55.226829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.303 qpair failed and we were unable to recover it. 00:36:50.303 [2024-11-17 09:36:55.227034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.303 [2024-11-17 09:36:55.227093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.303 qpair failed and we were unable to recover it. 00:36:50.303 [2024-11-17 09:36:55.227205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.303 [2024-11-17 09:36:55.227242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.303 qpair failed and we were unable to recover it. 00:36:50.303 [2024-11-17 09:36:55.227388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.303 [2024-11-17 09:36:55.227447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.303 qpair failed and we were unable to recover it. 00:36:50.303 [2024-11-17 09:36:55.227636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.303 [2024-11-17 09:36:55.227714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.303 qpair failed and we were unable to recover it. 00:36:50.303 [2024-11-17 09:36:55.227862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.303 [2024-11-17 09:36:55.227902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.303 qpair failed and we were unable to recover it. 
00:36:50.303 [2024-11-17 09:36:55.228132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.303 [2024-11-17 09:36:55.228171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.303 qpair failed and we were unable to recover it. 00:36:50.303 [2024-11-17 09:36:55.228320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.303 [2024-11-17 09:36:55.228360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.303 qpair failed and we were unable to recover it. 00:36:50.303 [2024-11-17 09:36:55.228517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.303 [2024-11-17 09:36:55.228572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.303 qpair failed and we were unable to recover it. 00:36:50.303 [2024-11-17 09:36:55.228722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.303 [2024-11-17 09:36:55.228774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.303 qpair failed and we were unable to recover it. 00:36:50.303 [2024-11-17 09:36:55.228969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.303 [2024-11-17 09:36:55.229033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.303 qpair failed and we were unable to recover it. 00:36:50.303 [2024-11-17 09:36:55.229171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.303 [2024-11-17 09:36:55.229206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.303 qpair failed and we were unable to recover it. 00:36:50.303 [2024-11-17 09:36:55.229403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.303 [2024-11-17 09:36:55.229438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.303 qpair failed and we were unable to recover it. 00:36:50.303 [2024-11-17 09:36:55.229560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.303 [2024-11-17 09:36:55.229629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.303 qpair failed and we were unable to recover it. 00:36:50.303 [2024-11-17 09:36:55.229772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.303 [2024-11-17 09:36:55.229813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.303 qpair failed and we were unable to recover it. 00:36:50.303 [2024-11-17 09:36:55.230014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.303 [2024-11-17 09:36:55.230080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.303 qpair failed and we were unable to recover it. 
00:36:50.303 [2024-11-17 09:36:55.230238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.303 [2024-11-17 09:36:55.230273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.303 qpair failed and we were unable to recover it. 00:36:50.303 [2024-11-17 09:36:55.230460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.303 [2024-11-17 09:36:55.230509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.303 qpair failed and we were unable to recover it. 00:36:50.303 [2024-11-17 09:36:55.230641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.303 [2024-11-17 09:36:55.230690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.303 qpair failed and we were unable to recover it. 00:36:50.303 [2024-11-17 09:36:55.230828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.303 [2024-11-17 09:36:55.230876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.303 qpair failed and we were unable to recover it. 00:36:50.303 [2024-11-17 09:36:55.231057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.303 [2024-11-17 09:36:55.231097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.303 qpair failed and we were unable to recover it. 00:36:50.303 [2024-11-17 09:36:55.231256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.303 [2024-11-17 09:36:55.231294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.303 qpair failed and we were unable to recover it. 00:36:50.303 [2024-11-17 09:36:55.231463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.303 [2024-11-17 09:36:55.231500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.303 qpair failed and we were unable to recover it. 00:36:50.303 [2024-11-17 09:36:55.231670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.303 [2024-11-17 09:36:55.231711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.303 qpair failed and we were unable to recover it. 00:36:50.303 [2024-11-17 09:36:55.231830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.303 [2024-11-17 09:36:55.231868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.303 qpair failed and we were unable to recover it. 00:36:50.304 [2024-11-17 09:36:55.232053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.304 [2024-11-17 09:36:55.232092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.304 qpair failed and we were unable to recover it. 
00:36:50.304 [2024-11-17 09:36:55.232244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.304 [2024-11-17 09:36:55.232282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.304 qpair failed and we were unable to recover it. 00:36:50.304 [2024-11-17 09:36:55.232479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.304 [2024-11-17 09:36:55.232528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.304 qpair failed and we were unable to recover it. 00:36:50.304 [2024-11-17 09:36:55.232671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.304 [2024-11-17 09:36:55.232711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.304 qpair failed and we were unable to recover it. 00:36:50.304 [2024-11-17 09:36:55.232890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.304 [2024-11-17 09:36:55.232929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.304 qpair failed and we were unable to recover it. 00:36:50.304 [2024-11-17 09:36:55.233053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.304 [2024-11-17 09:36:55.233092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.304 qpair failed and we were unable to recover it. 00:36:50.304 [2024-11-17 09:36:55.233210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.304 [2024-11-17 09:36:55.233248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.304 qpair failed and we were unable to recover it. 00:36:50.304 [2024-11-17 09:36:55.233399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.304 [2024-11-17 09:36:55.233434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.304 qpair failed and we were unable to recover it. 00:36:50.304 [2024-11-17 09:36:55.233571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.304 [2024-11-17 09:36:55.233606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.304 qpair failed and we were unable to recover it. 00:36:50.304 [2024-11-17 09:36:55.233789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.304 [2024-11-17 09:36:55.233840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.304 qpair failed and we were unable to recover it. 00:36:50.304 [2024-11-17 09:36:55.234072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.304 [2024-11-17 09:36:55.234110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.304 qpair failed and we were unable to recover it. 
00:36:50.304 [2024-11-17 09:36:55.234257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.304 [2024-11-17 09:36:55.234295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.304 qpair failed and we were unable to recover it. 00:36:50.304 [2024-11-17 09:36:55.234453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.304 [2024-11-17 09:36:55.234493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.304 qpair failed and we were unable to recover it. 00:36:50.304 [2024-11-17 09:36:55.234685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.304 [2024-11-17 09:36:55.234760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.304 qpair failed and we were unable to recover it. 00:36:50.304 [2024-11-17 09:36:55.234884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.304 [2024-11-17 09:36:55.234921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.304 qpair failed and we were unable to recover it. 00:36:50.304 [2024-11-17 09:36:55.235170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.304 [2024-11-17 09:36:55.235225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.304 qpair failed and we were unable to recover it. 00:36:50.304 [2024-11-17 09:36:55.235333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.304 [2024-11-17 09:36:55.235384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.304 qpair failed and we were unable to recover it. 00:36:50.304 [2024-11-17 09:36:55.235537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.304 [2024-11-17 09:36:55.235592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.304 qpair failed and we were unable to recover it. 00:36:50.304 [2024-11-17 09:36:55.235825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.304 [2024-11-17 09:36:55.235861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.304 qpair failed and we were unable to recover it. 00:36:50.304 [2024-11-17 09:36:55.236062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.304 [2024-11-17 09:36:55.236120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.304 qpair failed and we were unable to recover it. 00:36:50.304 [2024-11-17 09:36:55.236278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.304 [2024-11-17 09:36:55.236312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.304 qpair failed and we were unable to recover it. 
00:36:50.304 [2024-11-17 09:36:55.236437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.304 [2024-11-17 09:36:55.236473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.304 qpair failed and we were unable to recover it. 00:36:50.304 [2024-11-17 09:36:55.236615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.304 [2024-11-17 09:36:55.236678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.304 qpair failed and we were unable to recover it. 00:36:50.304 [2024-11-17 09:36:55.236852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.304 [2024-11-17 09:36:55.236890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.304 qpair failed and we were unable to recover it. 00:36:50.304 [2024-11-17 09:36:55.237107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.304 [2024-11-17 09:36:55.237145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.304 qpair failed and we were unable to recover it. 00:36:50.304 [2024-11-17 09:36:55.237263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.304 [2024-11-17 09:36:55.237302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.304 qpair failed and we were unable to recover it. 00:36:50.304 [2024-11-17 09:36:55.237482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.304 [2024-11-17 09:36:55.237516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.304 qpair failed and we were unable to recover it. 00:36:50.304 [2024-11-17 09:36:55.237623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.304 [2024-11-17 09:36:55.237686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.304 qpair failed and we were unable to recover it. 00:36:50.304 [2024-11-17 09:36:55.237824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.304 [2024-11-17 09:36:55.237861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.304 qpair failed and we were unable to recover it. 00:36:50.304 [2024-11-17 09:36:55.237989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.304 [2024-11-17 09:36:55.238041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.304 qpair failed and we were unable to recover it. 00:36:50.304 [2024-11-17 09:36:55.238196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.305 [2024-11-17 09:36:55.238237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.305 qpair failed and we were unable to recover it. 
00:36:50.305 [2024-11-17 09:36:55.238406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.305 [2024-11-17 09:36:55.238476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.305 qpair failed and we were unable to recover it. 00:36:50.305 [2024-11-17 09:36:55.238602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.305 [2024-11-17 09:36:55.238641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.305 qpair failed and we were unable to recover it. 00:36:50.305 [2024-11-17 09:36:55.238777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.305 [2024-11-17 09:36:55.238812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.305 qpair failed and we were unable to recover it. 00:36:50.305 [2024-11-17 09:36:55.238916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.305 [2024-11-17 09:36:55.238957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.305 qpair failed and we were unable to recover it. 00:36:50.305 [2024-11-17 09:36:55.239113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.305 [2024-11-17 09:36:55.239151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.305 qpair failed and we were unable to recover it. 00:36:50.305 [2024-11-17 09:36:55.239299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.305 [2024-11-17 09:36:55.239338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.305 qpair failed and we were unable to recover it. 00:36:50.305 [2024-11-17 09:36:55.239531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.305 [2024-11-17 09:36:55.239580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.305 qpair failed and we were unable to recover it. 00:36:50.305 [2024-11-17 09:36:55.239702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.305 [2024-11-17 09:36:55.239738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.305 qpair failed and we were unable to recover it. 00:36:50.305 [2024-11-17 09:36:55.239947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.305 [2024-11-17 09:36:55.240013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.305 qpair failed and we were unable to recover it. 00:36:50.305 [2024-11-17 09:36:55.240265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.305 [2024-11-17 09:36:55.240326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.305 qpair failed and we were unable to recover it. 
00:36:50.305 [2024-11-17 09:36:55.240507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.305 [2024-11-17 09:36:55.240556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.305 qpair failed and we were unable to recover it. 00:36:50.305 [2024-11-17 09:36:55.240814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.305 [2024-11-17 09:36:55.240873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.305 qpair failed and we were unable to recover it. 00:36:50.305 [2024-11-17 09:36:55.241100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.305 [2024-11-17 09:36:55.241158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.305 qpair failed and we were unable to recover it. 00:36:50.305 [2024-11-17 09:36:55.241334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.305 [2024-11-17 09:36:55.241385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.305 qpair failed and we were unable to recover it. 00:36:50.305 [2024-11-17 09:36:55.241537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.305 [2024-11-17 09:36:55.241571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.305 qpair failed and we were unable to recover it. 00:36:50.305 [2024-11-17 09:36:55.241720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.305 [2024-11-17 09:36:55.241755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.305 qpair failed and we were unable to recover it. 00:36:50.305 [2024-11-17 09:36:55.241899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.305 [2024-11-17 09:36:55.241933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.305 qpair failed and we were unable to recover it. 00:36:50.305 [2024-11-17 09:36:55.242053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.305 [2024-11-17 09:36:55.242091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.305 qpair failed and we were unable to recover it. 00:36:50.305 [2024-11-17 09:36:55.242231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.305 [2024-11-17 09:36:55.242266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.305 qpair failed and we were unable to recover it. 00:36:50.305 [2024-11-17 09:36:55.242396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.305 [2024-11-17 09:36:55.242432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.305 qpair failed and we were unable to recover it. 
00:36:50.305 [2024-11-17 09:36:55.242540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.305 [2024-11-17 09:36:55.242575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.305 qpair failed and we were unable to recover it. 00:36:50.305 [2024-11-17 09:36:55.242748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.305 [2024-11-17 09:36:55.242783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.305 qpair failed and we were unable to recover it. 00:36:50.305 [2024-11-17 09:36:55.242946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.305 [2024-11-17 09:36:55.242981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.305 qpair failed and we were unable to recover it. 00:36:50.305 [2024-11-17 09:36:55.243115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.305 [2024-11-17 09:36:55.243149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.305 qpair failed and we were unable to recover it. 00:36:50.305 [2024-11-17 09:36:55.243328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.305 [2024-11-17 09:36:55.243384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.305 qpair failed and we were unable to recover it. 00:36:50.305 [2024-11-17 09:36:55.243537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.305 [2024-11-17 09:36:55.243586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.305 qpair failed and we were unable to recover it. 00:36:50.305 [2024-11-17 09:36:55.243751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.305 [2024-11-17 09:36:55.243787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.305 qpair failed and we were unable to recover it. 00:36:50.305 [2024-11-17 09:36:55.244008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.305 [2024-11-17 09:36:55.244043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.305 qpair failed and we were unable to recover it. 00:36:50.305 [2024-11-17 09:36:55.244240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.305 [2024-11-17 09:36:55.244277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.305 qpair failed and we were unable to recover it. 00:36:50.305 [2024-11-17 09:36:55.244441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.305 [2024-11-17 09:36:55.244477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.305 qpair failed and we were unable to recover it. 
00:36:50.305 [2024-11-17 09:36:55.244639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.305 [2024-11-17 09:36:55.244693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.305 qpair failed and we were unable to recover it. 00:36:50.305 [2024-11-17 09:36:55.244852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.305 [2024-11-17 09:36:55.244906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.305 qpair failed and we were unable to recover it. 00:36:50.305 [2024-11-17 09:36:55.245088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.305 [2024-11-17 09:36:55.245140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.305 qpair failed and we were unable to recover it. 00:36:50.305 [2024-11-17 09:36:55.245283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.305 [2024-11-17 09:36:55.245318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.305 qpair failed and we were unable to recover it. 00:36:50.305 [2024-11-17 09:36:55.245487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.306 [2024-11-17 09:36:55.245522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.306 qpair failed and we were unable to recover it. 00:36:50.306 [2024-11-17 09:36:55.245659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.306 [2024-11-17 09:36:55.245714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.306 qpair failed and we were unable to recover it. 00:36:50.306 [2024-11-17 09:36:55.245984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.306 [2024-11-17 09:36:55.246052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.306 qpair failed and we were unable to recover it. 00:36:50.306 [2024-11-17 09:36:55.246255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.306 [2024-11-17 09:36:55.246290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.306 qpair failed and we were unable to recover it. 00:36:50.306 [2024-11-17 09:36:55.246433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.306 [2024-11-17 09:36:55.246468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.306 qpair failed and we were unable to recover it. 00:36:50.306 [2024-11-17 09:36:55.246580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.306 [2024-11-17 09:36:55.246616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.306 qpair failed and we were unable to recover it. 
00:36:50.306 [2024-11-17 09:36:55.246785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.306 [2024-11-17 09:36:55.246837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.306 qpair failed and we were unable to recover it. 00:36:50.306 [2024-11-17 09:36:55.246993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.306 [2024-11-17 09:36:55.247047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.306 qpair failed and we were unable to recover it. 00:36:50.306 [2024-11-17 09:36:55.247160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.306 [2024-11-17 09:36:55.247194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.306 qpair failed and we were unable to recover it. 00:36:50.306 [2024-11-17 09:36:55.247308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.306 [2024-11-17 09:36:55.247347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.306 qpair failed and we were unable to recover it. 00:36:50.306 [2024-11-17 09:36:55.247510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.306 [2024-11-17 09:36:55.247559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.306 qpair failed and we were unable to recover it. 00:36:50.306 [2024-11-17 09:36:55.247673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.306 [2024-11-17 09:36:55.247707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.306 qpair failed and we were unable to recover it. 00:36:50.306 [2024-11-17 09:36:55.247869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.306 [2024-11-17 09:36:55.247904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.306 qpair failed and we were unable to recover it. 00:36:50.306 [2024-11-17 09:36:55.248062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.306 [2024-11-17 09:36:55.248095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.306 qpair failed and we were unable to recover it. 00:36:50.306 [2024-11-17 09:36:55.248200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.306 [2024-11-17 09:36:55.248235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.306 qpair failed and we were unable to recover it. 00:36:50.306 [2024-11-17 09:36:55.248374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.306 [2024-11-17 09:36:55.248418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.306 qpair failed and we were unable to recover it. 
00:36:50.306 [2024-11-17 09:36:55.248604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.306 [2024-11-17 09:36:55.248642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.306 qpair failed and we were unable to recover it. 00:36:50.306 [2024-11-17 09:36:55.248840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.306 [2024-11-17 09:36:55.248907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.306 qpair failed and we were unable to recover it. 00:36:50.306 [2024-11-17 09:36:55.249093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.306 [2024-11-17 09:36:55.249131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.306 qpair failed and we were unable to recover it. 00:36:50.306 [2024-11-17 09:36:55.249282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.306 [2024-11-17 09:36:55.249316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.306 qpair failed and we were unable to recover it. 00:36:50.306 [2024-11-17 09:36:55.249441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.306 [2024-11-17 09:36:55.249476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.306 qpair failed and we were unable to recover it. 00:36:50.306 [2024-11-17 09:36:55.249611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.306 [2024-11-17 09:36:55.249661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.306 qpair failed and we were unable to recover it. 00:36:50.306 [2024-11-17 09:36:55.249857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.306 [2024-11-17 09:36:55.249894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.306 qpair failed and we were unable to recover it. 00:36:50.306 [2024-11-17 09:36:55.250049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.306 [2024-11-17 09:36:55.250087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.306 qpair failed and we were unable to recover it. 00:36:50.306 [2024-11-17 09:36:55.250211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.306 [2024-11-17 09:36:55.250248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.306 qpair failed and we were unable to recover it. 00:36:50.306 [2024-11-17 09:36:55.250396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.306 [2024-11-17 09:36:55.250431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.306 qpair failed and we were unable to recover it. 
00:36:50.306 [2024-11-17 09:36:55.250578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.306 [2024-11-17 09:36:55.250612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.306 qpair failed and we were unable to recover it. 00:36:50.306 [2024-11-17 09:36:55.250754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.306 [2024-11-17 09:36:55.250788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.306 qpair failed and we were unable to recover it. 00:36:50.306 [2024-11-17 09:36:55.250948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.306 [2024-11-17 09:36:55.250985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.306 qpair failed and we were unable to recover it. 00:36:50.591 [2024-11-17 09:36:55.251131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.591 [2024-11-17 09:36:55.251169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.591 qpair failed and we were unable to recover it. 00:36:50.591 [2024-11-17 09:36:55.251309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.591 [2024-11-17 09:36:55.251348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.591 qpair failed and we were unable to recover it. 00:36:50.591 [2024-11-17 09:36:55.251536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.591 [2024-11-17 09:36:55.251585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.591 qpair failed and we were unable to recover it. 00:36:50.591 [2024-11-17 09:36:55.251760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.591 [2024-11-17 09:36:55.251815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.591 qpair failed and we were unable to recover it. 00:36:50.591 [2024-11-17 09:36:55.252002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.591 [2024-11-17 09:36:55.252042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.591 qpair failed and we were unable to recover it. 00:36:50.591 [2024-11-17 09:36:55.252166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.591 [2024-11-17 09:36:55.252206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.591 qpair failed and we were unable to recover it. 00:36:50.591 [2024-11-17 09:36:55.252325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.591 [2024-11-17 09:36:55.252360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.591 qpair failed and we were unable to recover it. 
00:36:50.591 [2024-11-17 09:36:55.252531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.591 [2024-11-17 09:36:55.252581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.591 qpair failed and we were unable to recover it. 00:36:50.591 [2024-11-17 09:36:55.252755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.591 [2024-11-17 09:36:55.252790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.591 qpair failed and we were unable to recover it. 00:36:50.591 [2024-11-17 09:36:55.253049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.591 [2024-11-17 09:36:55.253115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.591 qpair failed and we were unable to recover it. 00:36:50.591 [2024-11-17 09:36:55.253295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.591 [2024-11-17 09:36:55.253333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.591 qpair failed and we were unable to recover it. 00:36:50.591 [2024-11-17 09:36:55.253477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.591 [2024-11-17 09:36:55.253511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.591 qpair failed and we were unable to recover it. 00:36:50.591 [2024-11-17 09:36:55.253623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.591 [2024-11-17 09:36:55.253657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.591 qpair failed and we were unable to recover it. 00:36:50.591 [2024-11-17 09:36:55.253797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.591 [2024-11-17 09:36:55.253831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.591 qpair failed and we were unable to recover it. 00:36:50.591 [2024-11-17 09:36:55.254055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.591 [2024-11-17 09:36:55.254113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.591 qpair failed and we were unable to recover it. 00:36:50.591 [2024-11-17 09:36:55.254261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.591 [2024-11-17 09:36:55.254299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.591 qpair failed and we were unable to recover it. 00:36:50.591 [2024-11-17 09:36:55.254462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.591 [2024-11-17 09:36:55.254496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.591 qpair failed and we were unable to recover it. 
00:36:50.591 [2024-11-17 09:36:55.254607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.591 [2024-11-17 09:36:55.254641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.591 qpair failed and we were unable to recover it. 00:36:50.591 [2024-11-17 09:36:55.254791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.591 [2024-11-17 09:36:55.254830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.591 qpair failed and we were unable to recover it. 00:36:50.591 [2024-11-17 09:36:55.254942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.591 [2024-11-17 09:36:55.254980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.591 qpair failed and we were unable to recover it. 00:36:50.591 [2024-11-17 09:36:55.255133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.591 [2024-11-17 09:36:55.255171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.591 qpair failed and we were unable to recover it. 00:36:50.591 [2024-11-17 09:36:55.255363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.591 [2024-11-17 09:36:55.255405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.591 qpair failed and we were unable to recover it. 00:36:50.591 [2024-11-17 09:36:55.255532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.591 [2024-11-17 09:36:55.255581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.591 qpair failed and we were unable to recover it. 00:36:50.591 [2024-11-17 09:36:55.255736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.591 [2024-11-17 09:36:55.255785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.591 qpair failed and we were unable to recover it. 00:36:50.591 [2024-11-17 09:36:55.255925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.591 [2024-11-17 09:36:55.255966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.591 qpair failed and we were unable to recover it. 00:36:50.591 [2024-11-17 09:36:55.256117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.591 [2024-11-17 09:36:55.256156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.591 qpair failed and we were unable to recover it. 00:36:50.591 [2024-11-17 09:36:55.256309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.591 [2024-11-17 09:36:55.256347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.591 qpair failed and we were unable to recover it. 
00:36:50.591 [2024-11-17 09:36:55.256503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.591 [2024-11-17 09:36:55.256536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.591 qpair failed and we were unable to recover it. 00:36:50.591 [2024-11-17 09:36:55.256668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.591 [2024-11-17 09:36:55.256702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.592 qpair failed and we were unable to recover it. 00:36:50.592 [2024-11-17 09:36:55.256821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.592 [2024-11-17 09:36:55.256859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.592 qpair failed and we were unable to recover it. 00:36:50.592 [2024-11-17 09:36:55.257010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.592 [2024-11-17 09:36:55.257047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.592 qpair failed and we were unable to recover it. 00:36:50.592 [2024-11-17 09:36:55.257180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.592 [2024-11-17 09:36:55.257218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.592 qpair failed and we were unable to recover it. 00:36:50.592 [2024-11-17 09:36:55.257327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.592 [2024-11-17 09:36:55.257364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.592 qpair failed and we were unable to recover it. 00:36:50.592 [2024-11-17 09:36:55.257539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.592 [2024-11-17 09:36:55.257588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.592 qpair failed and we were unable to recover it. 00:36:50.592 [2024-11-17 09:36:55.257704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.592 [2024-11-17 09:36:55.257742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.592 qpair failed and we were unable to recover it. 00:36:50.592 [2024-11-17 09:36:55.257924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.592 [2024-11-17 09:36:55.257981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.592 qpair failed and we were unable to recover it. 00:36:50.592 [2024-11-17 09:36:55.258157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.592 [2024-11-17 09:36:55.258195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.592 qpair failed and we were unable to recover it. 
00:36:50.592 [2024-11-17 09:36:55.258321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.592 [2024-11-17 09:36:55.258362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.592 qpair failed and we were unable to recover it. 00:36:50.592 [2024-11-17 09:36:55.258562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.592 [2024-11-17 09:36:55.258612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.592 qpair failed and we were unable to recover it. 00:36:50.592 [2024-11-17 09:36:55.258750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.592 [2024-11-17 09:36:55.258790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.592 qpair failed and we were unable to recover it. 00:36:50.592 [2024-11-17 09:36:55.259017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.592 [2024-11-17 09:36:55.259077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.592 qpair failed and we were unable to recover it. 00:36:50.592 [2024-11-17 09:36:55.259203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.592 [2024-11-17 09:36:55.259241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.592 qpair failed and we were unable to recover it. 00:36:50.592 [2024-11-17 09:36:55.259407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.592 [2024-11-17 09:36:55.259442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.592 qpair failed and we were unable to recover it. 00:36:50.592 [2024-11-17 09:36:55.259577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.592 [2024-11-17 09:36:55.259611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.592 qpair failed and we were unable to recover it. 00:36:50.592 [2024-11-17 09:36:55.259709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.592 [2024-11-17 09:36:55.259742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.592 qpair failed and we were unable to recover it. 00:36:50.592 [2024-11-17 09:36:55.259958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.592 [2024-11-17 09:36:55.259992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.592 qpair failed and we were unable to recover it. 00:36:50.592 [2024-11-17 09:36:55.260154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.592 [2024-11-17 09:36:55.260192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.592 qpair failed and we were unable to recover it. 
00:36:50.592 [2024-11-17 09:36:55.260359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.592 [2024-11-17 09:36:55.260442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.592 qpair failed and we were unable to recover it. 00:36:50.592 [2024-11-17 09:36:55.260559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.592 [2024-11-17 09:36:55.260597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.592 qpair failed and we were unable to recover it. 00:36:50.592 [2024-11-17 09:36:55.260790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.592 [2024-11-17 09:36:55.260843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.592 qpair failed and we were unable to recover it. 00:36:50.592 [2024-11-17 09:36:55.260950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.592 [2024-11-17 09:36:55.260984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.592 qpair failed and we were unable to recover it. 00:36:50.592 [2024-11-17 09:36:55.261106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.592 [2024-11-17 09:36:55.261159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.592 qpair failed and we were unable to recover it. 00:36:50.592 [2024-11-17 09:36:55.261322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.592 [2024-11-17 09:36:55.261356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.592 qpair failed and we were unable to recover it. 00:36:50.592 [2024-11-17 09:36:55.261529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.592 [2024-11-17 09:36:55.261568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.592 qpair failed and we were unable to recover it. 00:36:50.592 [2024-11-17 09:36:55.261687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.592 [2024-11-17 09:36:55.261725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.592 qpair failed and we were unable to recover it. 00:36:50.592 [2024-11-17 09:36:55.261962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.592 [2024-11-17 09:36:55.262000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.592 qpair failed and we were unable to recover it. 00:36:50.592 [2024-11-17 09:36:55.262257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.592 [2024-11-17 09:36:55.262325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.592 qpair failed and we were unable to recover it. 
00:36:50.592 [2024-11-17 09:36:55.262481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.592 [2024-11-17 09:36:55.262515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.592 qpair failed and we were unable to recover it. 00:36:50.592 [2024-11-17 09:36:55.262663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.592 [2024-11-17 09:36:55.262700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.592 qpair failed and we were unable to recover it. 00:36:50.592 [2024-11-17 09:36:55.262843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.592 [2024-11-17 09:36:55.262925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.592 qpair failed and we were unable to recover it. 00:36:50.592 [2024-11-17 09:36:55.263129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.592 [2024-11-17 09:36:55.263182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.592 qpair failed and we were unable to recover it. 00:36:50.592 [2024-11-17 09:36:55.263350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.592 [2024-11-17 09:36:55.263391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.592 qpair failed and we were unable to recover it. 00:36:50.592 [2024-11-17 09:36:55.263569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.592 [2024-11-17 09:36:55.263622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.592 qpair failed and we were unable to recover it. 00:36:50.592 [2024-11-17 09:36:55.263823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.592 [2024-11-17 09:36:55.263878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.592 qpair failed and we were unable to recover it. 00:36:50.592 [2024-11-17 09:36:55.264086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.592 [2024-11-17 09:36:55.264147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.592 qpair failed and we were unable to recover it. 00:36:50.592 [2024-11-17 09:36:55.264280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.592 [2024-11-17 09:36:55.264314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.592 qpair failed and we were unable to recover it. 00:36:50.592 [2024-11-17 09:36:55.264466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.592 [2024-11-17 09:36:55.264501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.592 qpair failed and we were unable to recover it. 
00:36:50.592 [2024-11-17 09:36:55.264668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.592 [2024-11-17 09:36:55.264721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.592 qpair failed and we were unable to recover it. 00:36:50.592 [2024-11-17 09:36:55.264960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.592 [2024-11-17 09:36:55.265015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.593 qpair failed and we were unable to recover it. 00:36:50.593 [2024-11-17 09:36:55.265320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.593 [2024-11-17 09:36:55.265364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.593 qpair failed and we were unable to recover it. 00:36:50.593 [2024-11-17 09:36:55.265513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.593 [2024-11-17 09:36:55.265548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.593 qpair failed and we were unable to recover it. 00:36:50.593 [2024-11-17 09:36:55.265678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.593 [2024-11-17 09:36:55.265717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.593 qpair failed and we were unable to recover it. 00:36:50.593 [2024-11-17 09:36:55.265888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.593 [2024-11-17 09:36:55.265976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.593 qpair failed and we were unable to recover it. 00:36:50.593 [2024-11-17 09:36:55.266245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.593 [2024-11-17 09:36:55.266301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.593 qpair failed and we were unable to recover it. 00:36:50.593 [2024-11-17 09:36:55.266499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.593 [2024-11-17 09:36:55.266534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.593 qpair failed and we were unable to recover it. 00:36:50.593 [2024-11-17 09:36:55.266718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.593 [2024-11-17 09:36:55.266756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.593 qpair failed and we were unable to recover it. 00:36:50.593 [2024-11-17 09:36:55.266884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.593 [2024-11-17 09:36:55.266923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.593 qpair failed and we were unable to recover it. 
00:36:50.593 [2024-11-17 09:36:55.267136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.593 [2024-11-17 09:36:55.267173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.593 qpair failed and we were unable to recover it. 00:36:50.593 [2024-11-17 09:36:55.267355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.593 [2024-11-17 09:36:55.267399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.593 qpair failed and we were unable to recover it. 00:36:50.593 [2024-11-17 09:36:55.267517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.593 [2024-11-17 09:36:55.267552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.593 qpair failed and we were unable to recover it. 00:36:50.593 [2024-11-17 09:36:55.267698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.593 [2024-11-17 09:36:55.267734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.593 qpair failed and we were unable to recover it. 00:36:50.593 [2024-11-17 09:36:55.267967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.593 [2024-11-17 09:36:55.268027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.593 qpair failed and we were unable to recover it. 00:36:50.593 [2024-11-17 09:36:55.268280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.593 [2024-11-17 09:36:55.268345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.593 qpair failed and we were unable to recover it. 00:36:50.593 [2024-11-17 09:36:55.268526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.593 [2024-11-17 09:36:55.268560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.593 qpair failed and we were unable to recover it. 00:36:50.593 [2024-11-17 09:36:55.268687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.593 [2024-11-17 09:36:55.268721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.593 qpair failed and we were unable to recover it. 00:36:50.593 [2024-11-17 09:36:55.268852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.593 [2024-11-17 09:36:55.268904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.593 qpair failed and we were unable to recover it. 00:36:50.593 [2024-11-17 09:36:55.269129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.593 [2024-11-17 09:36:55.269188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.593 qpair failed and we were unable to recover it. 
00:36:50.593 [2024-11-17 09:36:55.269389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.593 [2024-11-17 09:36:55.269444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.593 qpair failed and we were unable to recover it. 00:36:50.593 [2024-11-17 09:36:55.269561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.593 [2024-11-17 09:36:55.269598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.593 qpair failed and we were unable to recover it. 00:36:50.593 [2024-11-17 09:36:55.269782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.593 [2024-11-17 09:36:55.269840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.593 qpair failed and we were unable to recover it. 00:36:50.593 [2024-11-17 09:36:55.270065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.593 [2024-11-17 09:36:55.270123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.593 qpair failed and we were unable to recover it. 00:36:50.593 [2024-11-17 09:36:55.270267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.593 [2024-11-17 09:36:55.270309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.593 qpair failed and we were unable to recover it. 00:36:50.593 [2024-11-17 09:36:55.270466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.593 [2024-11-17 09:36:55.270515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.593 qpair failed and we were unable to recover it. 00:36:50.593 [2024-11-17 09:36:55.270691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.593 [2024-11-17 09:36:55.270745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.593 qpair failed and we were unable to recover it. 00:36:50.593 [2024-11-17 09:36:55.270950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.593 [2024-11-17 09:36:55.271018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.593 qpair failed and we were unable to recover it. 00:36:50.593 [2024-11-17 09:36:55.271200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.593 [2024-11-17 09:36:55.271238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.593 qpair failed and we were unable to recover it. 00:36:50.593 [2024-11-17 09:36:55.271386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.593 [2024-11-17 09:36:55.271421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.593 qpair failed and we were unable to recover it. 
00:36:50.593 [2024-11-17 09:36:55.271570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.593 [2024-11-17 09:36:55.271604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.593 qpair failed and we were unable to recover it. 00:36:50.593 [2024-11-17 09:36:55.271743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.593 [2024-11-17 09:36:55.271777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.593 qpair failed and we were unable to recover it. 00:36:50.593 [2024-11-17 09:36:55.272030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.593 [2024-11-17 09:36:55.272127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.593 qpair failed and we were unable to recover it. 00:36:50.593 [2024-11-17 09:36:55.272278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.593 [2024-11-17 09:36:55.272316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.593 qpair failed and we were unable to recover it. 00:36:50.593 [2024-11-17 09:36:55.272484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.593 [2024-11-17 09:36:55.272519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.593 qpair failed and we were unable to recover it. 00:36:50.593 [2024-11-17 09:36:55.272668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.593 [2024-11-17 09:36:55.272737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.593 qpair failed and we were unable to recover it. 00:36:50.593 [2024-11-17 09:36:55.272974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.593 [2024-11-17 09:36:55.273031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.593 qpair failed and we were unable to recover it. 00:36:50.593 [2024-11-17 09:36:55.273282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.593 [2024-11-17 09:36:55.273343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.593 qpair failed and we were unable to recover it. 00:36:50.593 [2024-11-17 09:36:55.273549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.593 [2024-11-17 09:36:55.273584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.593 qpair failed and we were unable to recover it. 00:36:50.593 [2024-11-17 09:36:55.273705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.593 [2024-11-17 09:36:55.273742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.593 qpair failed and we were unable to recover it. 
00:36:50.593 [2024-11-17 09:36:55.273930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.593 [2024-11-17 09:36:55.274000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.593 qpair failed and we were unable to recover it. 00:36:50.593 [2024-11-17 09:36:55.274120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.593 [2024-11-17 09:36:55.274159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.593 qpair failed and we were unable to recover it. 00:36:50.593 [2024-11-17 09:36:55.274304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.594 [2024-11-17 09:36:55.274342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.594 qpair failed and we were unable to recover it. 00:36:50.594 [2024-11-17 09:36:55.274522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.594 [2024-11-17 09:36:55.274571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.594 qpair failed and we were unable to recover it. 00:36:50.594 [2024-11-17 09:36:55.274741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.594 [2024-11-17 09:36:55.274777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.594 qpair failed and we were unable to recover it. 00:36:50.594 [2024-11-17 09:36:55.274959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.594 [2024-11-17 09:36:55.274997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.594 qpair failed and we were unable to recover it. 00:36:50.594 [2024-11-17 09:36:55.275112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.594 [2024-11-17 09:36:55.275151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.594 qpair failed and we were unable to recover it. 00:36:50.594 [2024-11-17 09:36:55.275291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.594 [2024-11-17 09:36:55.275325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.594 qpair failed and we were unable to recover it. 00:36:50.594 [2024-11-17 09:36:55.275451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.594 [2024-11-17 09:36:55.275485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.594 qpair failed and we were unable to recover it. 00:36:50.594 [2024-11-17 09:36:55.275621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.594 [2024-11-17 09:36:55.275656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.594 qpair failed and we were unable to recover it. 
00:36:50.594 [2024-11-17 09:36:55.275823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.594 [2024-11-17 09:36:55.275861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.594 qpair failed and we were unable to recover it. 00:36:50.594 [2024-11-17 09:36:55.276040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.594 [2024-11-17 09:36:55.276090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.594 qpair failed and we were unable to recover it. 00:36:50.594 [2024-11-17 09:36:55.276229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.594 [2024-11-17 09:36:55.276267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.594 qpair failed and we were unable to recover it. 00:36:50.594 [2024-11-17 09:36:55.276420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.594 [2024-11-17 09:36:55.276470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.594 qpair failed and we were unable to recover it. 00:36:50.594 [2024-11-17 09:36:55.276664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.594 [2024-11-17 09:36:55.276719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.594 qpair failed and we were unable to recover it. 00:36:50.594 [2024-11-17 09:36:55.276905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.594 [2024-11-17 09:36:55.276958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.594 qpair failed and we were unable to recover it. 00:36:50.594 [2024-11-17 09:36:55.277110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.594 [2024-11-17 09:36:55.277163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.594 qpair failed and we were unable to recover it. 00:36:50.594 [2024-11-17 09:36:55.277335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.594 [2024-11-17 09:36:55.277380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.594 qpair failed and we were unable to recover it. 00:36:50.594 [2024-11-17 09:36:55.277490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.594 [2024-11-17 09:36:55.277524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.594 qpair failed and we were unable to recover it. 00:36:50.594 [2024-11-17 09:36:55.277672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.594 [2024-11-17 09:36:55.277708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.594 qpair failed and we were unable to recover it. 
00:36:50.594 [2024-11-17 09:36:55.277877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.594 [2024-11-17 09:36:55.277921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.594 qpair failed and we were unable to recover it. 00:36:50.594 [2024-11-17 09:36:55.278112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.594 [2024-11-17 09:36:55.278165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.594 qpair failed and we were unable to recover it. 00:36:50.594 [2024-11-17 09:36:55.278308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.594 [2024-11-17 09:36:55.278345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.594 qpair failed and we were unable to recover it. 00:36:50.594 [2024-11-17 09:36:55.278469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.594 [2024-11-17 09:36:55.278505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.594 qpair failed and we were unable to recover it. 00:36:50.594 [2024-11-17 09:36:55.278684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.594 [2024-11-17 09:36:55.278723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.594 qpair failed and we were unable to recover it. 00:36:50.594 [2024-11-17 09:36:55.278856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.594 [2024-11-17 09:36:55.278910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.594 qpair failed and we were unable to recover it. 00:36:50.594 [2024-11-17 09:36:55.279079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.594 [2024-11-17 09:36:55.279116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.594 qpair failed and we were unable to recover it. 00:36:50.594 [2024-11-17 09:36:55.279306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.594 [2024-11-17 09:36:55.279342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.594 qpair failed and we were unable to recover it. 00:36:50.594 [2024-11-17 09:36:55.279480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.594 [2024-11-17 09:36:55.279534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.594 qpair failed and we were unable to recover it. 00:36:50.594 [2024-11-17 09:36:55.279711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.594 [2024-11-17 09:36:55.279767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.594 qpair failed and we were unable to recover it. 
00:36:50.594 [2024-11-17 09:36:55.279924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.594 [2024-11-17 09:36:55.279977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.594 qpair failed and we were unable to recover it. 00:36:50.594 [2024-11-17 09:36:55.280150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.594 [2024-11-17 09:36:55.280229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.594 qpair failed and we were unable to recover it. 00:36:50.594 [2024-11-17 09:36:55.280406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.594 [2024-11-17 09:36:55.280442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.594 qpair failed and we were unable to recover it. 00:36:50.594 [2024-11-17 09:36:55.280695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.594 [2024-11-17 09:36:55.280755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.594 qpair failed and we were unable to recover it. 00:36:50.594 [2024-11-17 09:36:55.280964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.594 [2024-11-17 09:36:55.281004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.594 qpair failed and we were unable to recover it. 00:36:50.594 [2024-11-17 09:36:55.281151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.594 [2024-11-17 09:36:55.281201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.594 qpair failed and we were unable to recover it. 00:36:50.594 [2024-11-17 09:36:55.281420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.594 [2024-11-17 09:36:55.281455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.594 qpair failed and we were unable to recover it. 00:36:50.594 [2024-11-17 09:36:55.281591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.594 [2024-11-17 09:36:55.281625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.594 qpair failed and we were unable to recover it. 00:36:50.594 [2024-11-17 09:36:55.281785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.594 [2024-11-17 09:36:55.281822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.594 qpair failed and we were unable to recover it. 00:36:50.594 [2024-11-17 09:36:55.282024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.594 [2024-11-17 09:36:55.282061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.594 qpair failed and we were unable to recover it. 
00:36:50.594 [2024-11-17 09:36:55.282227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.594 [2024-11-17 09:36:55.282261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.594 qpair failed and we were unable to recover it. 00:36:50.594 [2024-11-17 09:36:55.282361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.594 [2024-11-17 09:36:55.282402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.594 qpair failed and we were unable to recover it. 00:36:50.594 [2024-11-17 09:36:55.282546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.594 [2024-11-17 09:36:55.282580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.594 qpair failed and we were unable to recover it. 00:36:50.594 [2024-11-17 09:36:55.282753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.595 [2024-11-17 09:36:55.282808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.595 qpair failed and we were unable to recover it. 00:36:50.595 [2024-11-17 09:36:55.282967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.595 [2024-11-17 09:36:55.283019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.595 qpair failed and we were unable to recover it. 00:36:50.595 [2024-11-17 09:36:55.283273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.595 [2024-11-17 09:36:55.283350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.595 qpair failed and we were unable to recover it. 00:36:50.595 [2024-11-17 09:36:55.283532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.595 [2024-11-17 09:36:55.283568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.595 qpair failed and we were unable to recover it. 00:36:50.595 [2024-11-17 09:36:55.283701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.595 [2024-11-17 09:36:55.283735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.595 qpair failed and we were unable to recover it. 00:36:50.595 [2024-11-17 09:36:55.283955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.595 [2024-11-17 09:36:55.284025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.595 qpair failed and we were unable to recover it. 00:36:50.595 [2024-11-17 09:36:55.284176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.595 [2024-11-17 09:36:55.284215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.595 qpair failed and we were unable to recover it. 
00:36:50.595 [2024-11-17 09:36:55.284384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.595 [2024-11-17 09:36:55.284420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.595 qpair failed and we were unable to recover it. 00:36:50.595 [2024-11-17 09:36:55.284556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.595 [2024-11-17 09:36:55.284591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.595 qpair failed and we were unable to recover it. 00:36:50.595 [2024-11-17 09:36:55.284759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.595 [2024-11-17 09:36:55.284813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.595 qpair failed and we were unable to recover it. 00:36:50.595 [2024-11-17 09:36:55.284948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.595 [2024-11-17 09:36:55.284989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.595 qpair failed and we were unable to recover it. 00:36:50.595 [2024-11-17 09:36:55.285190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.595 [2024-11-17 09:36:55.285228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.595 qpair failed and we were unable to recover it. 00:36:50.595 [2024-11-17 09:36:55.285422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.595 [2024-11-17 09:36:55.285457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.595 qpair failed and we were unable to recover it. 00:36:50.595 [2024-11-17 09:36:55.285569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.595 [2024-11-17 09:36:55.285603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.595 qpair failed and we were unable to recover it. 00:36:50.595 [2024-11-17 09:36:55.285784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.595 [2024-11-17 09:36:55.285821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.595 qpair failed and we were unable to recover it. 00:36:50.595 [2024-11-17 09:36:55.285984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.595 [2024-11-17 09:36:55.286041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.595 qpair failed and we were unable to recover it. 00:36:50.595 [2024-11-17 09:36:55.286157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.595 [2024-11-17 09:36:55.286196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.595 qpair failed and we were unable to recover it. 
00:36:50.595 [2024-11-17 09:36:55.286340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.595 [2024-11-17 09:36:55.286408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.595 qpair failed and we were unable to recover it. 00:36:50.595 [2024-11-17 09:36:55.286561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.595 [2024-11-17 09:36:55.286598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.595 qpair failed and we were unable to recover it. 00:36:50.595 [2024-11-17 09:36:55.286712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.595 [2024-11-17 09:36:55.286764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.595 qpair failed and we were unable to recover it. 00:36:50.595 [2024-11-17 09:36:55.286908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.595 [2024-11-17 09:36:55.286945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.595 qpair failed and we were unable to recover it. 00:36:50.595 [2024-11-17 09:36:55.287140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.595 [2024-11-17 09:36:55.287215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.595 qpair failed and we were unable to recover it. 00:36:50.595 [2024-11-17 09:36:55.287374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.595 [2024-11-17 09:36:55.287427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.595 qpair failed and we were unable to recover it. 00:36:50.595 [2024-11-17 09:36:55.287568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.595 [2024-11-17 09:36:55.287603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.595 qpair failed and we were unable to recover it. 00:36:50.595 [2024-11-17 09:36:55.287784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.595 [2024-11-17 09:36:55.287838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.595 qpair failed and we were unable to recover it. 00:36:50.595 [2024-11-17 09:36:55.287981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.595 [2024-11-17 09:36:55.288019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.595 qpair failed and we were unable to recover it. 00:36:50.595 [2024-11-17 09:36:55.288160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.595 [2024-11-17 09:36:55.288198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.595 qpair failed and we were unable to recover it. 
00:36:50.595 [2024-11-17 09:36:55.288344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.595 [2024-11-17 09:36:55.288393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.595 qpair failed and we were unable to recover it. 00:36:50.595 [2024-11-17 09:36:55.288556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.595 [2024-11-17 09:36:55.288590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.595 qpair failed and we were unable to recover it. 00:36:50.595 [2024-11-17 09:36:55.288766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.595 [2024-11-17 09:36:55.288818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.595 qpair failed and we were unable to recover it. 00:36:50.595 [2024-11-17 09:36:55.288974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.595 [2024-11-17 09:36:55.289011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.595 qpair failed and we were unable to recover it. 00:36:50.595 [2024-11-17 09:36:55.289159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.595 [2024-11-17 09:36:55.289196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.595 qpair failed and we were unable to recover it. 00:36:50.595 [2024-11-17 09:36:55.289313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.595 [2024-11-17 09:36:55.289351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.595 qpair failed and we were unable to recover it. 00:36:50.596 [2024-11-17 09:36:55.289539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.596 [2024-11-17 09:36:55.289589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.596 qpair failed and we were unable to recover it. 00:36:50.596 [2024-11-17 09:36:55.289745] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:36:50.596 [2024-11-17 09:36:55.289797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.596 [2024-11-17 09:36:55.289850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.596 qpair failed and we were unable to recover it. 00:36:50.596 [2024-11-17 09:36:55.289872] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:50.596 [2024-11-17 09:36:55.290015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.596 [2024-11-17 09:36:55.290056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.596 qpair failed and we were unable to recover it.
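Interleaved with the host-side connection errors above, a fresh SPDK target process is coming up ("Starting SPDK v25.01-pre ... DPDK 24.03.0 initialization") with the DPDK EAL parameters shown. The "-c 0xF0" argument is a CPU coremask; the decoder below is a minimal, purely illustrative sketch (not SPDK or DPDK code) of how such a mask maps to core IDs.

/* Illustrative only: decode a DPDK-style coremask such as the "-c 0xF0"
 * value seen in the EAL parameters line above. */
#include <stdio.h>

int main(void)
{
    unsigned long long coremask = 0xF0;   /* value copied from the log line */

    printf("coremask 0x%llX selects cores:", coremask);
    for (int core = 0; core < 64; core++) {
        if (coremask & (1ULL << core))    /* bit N set -> core N is used */
            printf(" %d", core);
    }
    printf("\n");                         /* for 0xF0 this prints: 4 5 6 7 */
    return 0;
}

For 0xF0 the set bits are 4 through 7, so the nvmf application in this run is pinned to cores 4, 5, 6 and 7.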
00:36:50.596 [2024-11-17 09:36:55.290196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.596 [2024-11-17 09:36:55.290250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.596 qpair failed and we were unable to recover it. 00:36:50.596 [2024-11-17 09:36:55.290383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.596 [2024-11-17 09:36:55.290420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.596 qpair failed and we were unable to recover it. 00:36:50.596 [2024-11-17 09:36:55.290549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.596 [2024-11-17 09:36:55.290583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.596 qpair failed and we were unable to recover it. 00:36:50.596 [2024-11-17 09:36:55.290766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.596 [2024-11-17 09:36:55.290806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.596 qpair failed and we were unable to recover it. 00:36:50.596 [2024-11-17 09:36:55.291003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.596 [2024-11-17 09:36:55.291040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.596 qpair failed and we were unable to recover it. 00:36:50.596 [2024-11-17 09:36:55.291182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.596 [2024-11-17 09:36:55.291236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.596 qpair failed and we were unable to recover it. 00:36:50.596 [2024-11-17 09:36:55.291402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.596 [2024-11-17 09:36:55.291459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.596 qpair failed and we were unable to recover it. 00:36:50.596 [2024-11-17 09:36:55.291581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.596 [2024-11-17 09:36:55.291617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.596 qpair failed and we were unable to recover it. 00:36:50.596 [2024-11-17 09:36:55.291793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.596 [2024-11-17 09:36:55.291831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.596 qpair failed and we were unable to recover it. 00:36:50.596 [2024-11-17 09:36:55.292037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.596 [2024-11-17 09:36:55.292078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.596 qpair failed and we were unable to recover it. 
00:36:50.596 [2024-11-17 09:36:55.292309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.596 [2024-11-17 09:36:55.292348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.596 qpair failed and we were unable to recover it. 00:36:50.596 [2024-11-17 09:36:55.292532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.596 [2024-11-17 09:36:55.292582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.596 qpair failed and we were unable to recover it. 00:36:50.596 [2024-11-17 09:36:55.292713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.596 [2024-11-17 09:36:55.292753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.596 qpair failed and we were unable to recover it. 00:36:50.596 [2024-11-17 09:36:55.292937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.596 [2024-11-17 09:36:55.293011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.596 qpair failed and we were unable to recover it. 00:36:50.596 [2024-11-17 09:36:55.293233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.596 [2024-11-17 09:36:55.293272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.596 qpair failed and we were unable to recover it. 00:36:50.596 [2024-11-17 09:36:55.293458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.596 [2024-11-17 09:36:55.293493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.596 qpair failed and we were unable to recover it. 00:36:50.596 [2024-11-17 09:36:55.293638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.596 [2024-11-17 09:36:55.293687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.596 qpair failed and we were unable to recover it. 00:36:50.596 [2024-11-17 09:36:55.293854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.596 [2024-11-17 09:36:55.293918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.596 qpair failed and we were unable to recover it. 00:36:50.596 [2024-11-17 09:36:55.294076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.596 [2024-11-17 09:36:55.294129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.596 qpair failed and we were unable to recover it. 00:36:50.596 [2024-11-17 09:36:55.294263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.596 [2024-11-17 09:36:55.294299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.596 qpair failed and we were unable to recover it. 
00:36:50.596 [2024-11-17 09:36:55.294430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.596 [2024-11-17 09:36:55.294481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.596 qpair failed and we were unable to recover it. 00:36:50.596 [2024-11-17 09:36:55.294684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.596 [2024-11-17 09:36:55.294740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.596 qpair failed and we were unable to recover it. 00:36:50.596 [2024-11-17 09:36:55.294895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.596 [2024-11-17 09:36:55.294935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.596 qpair failed and we were unable to recover it. 00:36:50.596 [2024-11-17 09:36:55.295132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.596 [2024-11-17 09:36:55.295200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.596 qpair failed and we were unable to recover it. 00:36:50.596 [2024-11-17 09:36:55.295320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.596 [2024-11-17 09:36:55.295358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.596 qpair failed and we were unable to recover it. 00:36:50.596 [2024-11-17 09:36:55.295540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.596 [2024-11-17 09:36:55.295575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.596 qpair failed and we were unable to recover it. 00:36:50.596 [2024-11-17 09:36:55.295730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.596 [2024-11-17 09:36:55.295780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.596 qpair failed and we were unable to recover it. 00:36:50.596 [2024-11-17 09:36:55.295920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.596 [2024-11-17 09:36:55.295957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.596 qpair failed and we were unable to recover it. 00:36:50.596 [2024-11-17 09:36:55.296115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.596 [2024-11-17 09:36:55.296164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.596 qpair failed and we were unable to recover it. 00:36:50.596 [2024-11-17 09:36:55.296303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.596 [2024-11-17 09:36:55.296338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.596 qpair failed and we were unable to recover it. 
00:36:50.596 [2024-11-17 09:36:55.296545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.596 [2024-11-17 09:36:55.296601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.596 qpair failed and we were unable to recover it. 00:36:50.596 [2024-11-17 09:36:55.296764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.596 [2024-11-17 09:36:55.296807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.596 qpair failed and we were unable to recover it. 00:36:50.596 [2024-11-17 09:36:55.296969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.596 [2024-11-17 09:36:55.297025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.596 qpair failed and we were unable to recover it. 00:36:50.596 [2024-11-17 09:36:55.297167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.596 [2024-11-17 09:36:55.297212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.596 qpair failed and we were unable to recover it. 00:36:50.596 [2024-11-17 09:36:55.297343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.596 [2024-11-17 09:36:55.297384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.596 qpair failed and we were unable to recover it. 00:36:50.596 [2024-11-17 09:36:55.297547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.596 [2024-11-17 09:36:55.297582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.596 qpair failed and we were unable to recover it. 00:36:50.596 [2024-11-17 09:36:55.297702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.597 [2024-11-17 09:36:55.297757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.597 qpair failed and we were unable to recover it. 00:36:50.597 [2024-11-17 09:36:55.297957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.597 [2024-11-17 09:36:55.298020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.597 qpair failed and we were unable to recover it. 00:36:50.597 [2024-11-17 09:36:55.298229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.597 [2024-11-17 09:36:55.298289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.597 qpair failed and we were unable to recover it. 00:36:50.597 [2024-11-17 09:36:55.298401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.597 [2024-11-17 09:36:55.298454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.597 qpair failed and we were unable to recover it. 
00:36:50.597 [2024-11-17 09:36:55.298575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.597 [2024-11-17 09:36:55.298612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.597 qpair failed and we were unable to recover it. 00:36:50.597 [2024-11-17 09:36:55.298763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.597 [2024-11-17 09:36:55.298801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.597 qpair failed and we were unable to recover it. 00:36:50.597 [2024-11-17 09:36:55.298920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.597 [2024-11-17 09:36:55.298960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.597 qpair failed and we were unable to recover it. 00:36:50.597 [2024-11-17 09:36:55.299137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.597 [2024-11-17 09:36:55.299176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.597 qpair failed and we were unable to recover it. 00:36:50.597 [2024-11-17 09:36:55.299324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.597 [2024-11-17 09:36:55.299386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.597 qpair failed and we were unable to recover it. 00:36:50.597 [2024-11-17 09:36:55.299524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.597 [2024-11-17 09:36:55.299558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.597 qpair failed and we were unable to recover it. 00:36:50.597 [2024-11-17 09:36:55.299729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.597 [2024-11-17 09:36:55.299796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.597 qpair failed and we were unable to recover it. 00:36:50.597 [2024-11-17 09:36:55.299961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.597 [2024-11-17 09:36:55.300017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.597 qpair failed and we were unable to recover it. 00:36:50.597 [2024-11-17 09:36:55.300138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.597 [2024-11-17 09:36:55.300173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.597 qpair failed and we were unable to recover it. 00:36:50.597 [2024-11-17 09:36:55.300314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.597 [2024-11-17 09:36:55.300348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.597 qpair failed and we were unable to recover it. 
00:36:50.597 [2024-11-17 09:36:55.300509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.597 [2024-11-17 09:36:55.300548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.597 qpair failed and we were unable to recover it. 00:36:50.597 [2024-11-17 09:36:55.300726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.597 [2024-11-17 09:36:55.300764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.597 qpair failed and we were unable to recover it. 00:36:50.597 [2024-11-17 09:36:55.300912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.597 [2024-11-17 09:36:55.300968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.597 qpair failed and we were unable to recover it. 00:36:50.597 [2024-11-17 09:36:55.301126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.597 [2024-11-17 09:36:55.301164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.597 qpair failed and we were unable to recover it. 00:36:50.597 [2024-11-17 09:36:55.301336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.597 [2024-11-17 09:36:55.301382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.597 qpair failed and we were unable to recover it. 00:36:50.597 [2024-11-17 09:36:55.301516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.597 [2024-11-17 09:36:55.301552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.597 qpair failed and we were unable to recover it. 00:36:50.597 [2024-11-17 09:36:55.301686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.597 [2024-11-17 09:36:55.301739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.597 qpair failed and we were unable to recover it. 00:36:50.597 [2024-11-17 09:36:55.301876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.597 [2024-11-17 09:36:55.301914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.597 qpair failed and we were unable to recover it. 00:36:50.597 [2024-11-17 09:36:55.302060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.597 [2024-11-17 09:36:55.302099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.597 qpair failed and we were unable to recover it. 00:36:50.597 [2024-11-17 09:36:55.302244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.597 [2024-11-17 09:36:55.302282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.597 qpair failed and we were unable to recover it. 
00:36:50.597 [2024-11-17 09:36:55.302426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.597 [2024-11-17 09:36:55.302465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.597 qpair failed and we were unable to recover it. 00:36:50.597 [2024-11-17 09:36:55.302660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.597 [2024-11-17 09:36:55.302712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.597 qpair failed and we were unable to recover it. 00:36:50.597 [2024-11-17 09:36:55.302859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.597 [2024-11-17 09:36:55.302913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.597 qpair failed and we were unable to recover it. 00:36:50.597 [2024-11-17 09:36:55.303064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.597 [2024-11-17 09:36:55.303104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.597 qpair failed and we were unable to recover it. 00:36:50.597 [2024-11-17 09:36:55.303292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.597 [2024-11-17 09:36:55.303330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.597 qpair failed and we were unable to recover it. 00:36:50.597 [2024-11-17 09:36:55.303526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.597 [2024-11-17 09:36:55.303561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.597 qpair failed and we were unable to recover it. 00:36:50.597 [2024-11-17 09:36:55.303769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.597 [2024-11-17 09:36:55.303823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.597 qpair failed and we were unable to recover it. 00:36:50.597 [2024-11-17 09:36:55.304051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.597 [2024-11-17 09:36:55.304090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.597 qpair failed and we were unable to recover it. 00:36:50.597 [2024-11-17 09:36:55.304267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.597 [2024-11-17 09:36:55.304302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.597 qpair failed and we were unable to recover it. 00:36:50.597 [2024-11-17 09:36:55.304454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.597 [2024-11-17 09:36:55.304489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.597 qpair failed and we were unable to recover it. 
00:36:50.597 [2024-11-17 09:36:55.304624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.597 [2024-11-17 09:36:55.304673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.597 qpair failed and we were unable to recover it. 00:36:50.597 [2024-11-17 09:36:55.304963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.597 [2024-11-17 09:36:55.305023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.597 qpair failed and we were unable to recover it. 00:36:50.597 [2024-11-17 09:36:55.305262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.597 [2024-11-17 09:36:55.305322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.597 qpair failed and we were unable to recover it. 00:36:50.597 [2024-11-17 09:36:55.305516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.597 [2024-11-17 09:36:55.305557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.597 qpair failed and we were unable to recover it. 00:36:50.597 [2024-11-17 09:36:55.305755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.597 [2024-11-17 09:36:55.305817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.597 qpair failed and we were unable to recover it. 00:36:50.597 [2024-11-17 09:36:55.306039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.597 [2024-11-17 09:36:55.306098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.597 qpair failed and we were unable to recover it. 00:36:50.597 [2024-11-17 09:36:55.306258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.597 [2024-11-17 09:36:55.306296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.597 qpair failed and we were unable to recover it. 00:36:50.597 [2024-11-17 09:36:55.306506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.598 [2024-11-17 09:36:55.306555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.598 qpair failed and we were unable to recover it. 00:36:50.598 [2024-11-17 09:36:55.306775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.598 [2024-11-17 09:36:55.306830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.598 qpair failed and we were unable to recover it. 00:36:50.598 [2024-11-17 09:36:55.307060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.598 [2024-11-17 09:36:55.307113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.598 qpair failed and we were unable to recover it. 
00:36:50.598 [2024-11-17 09:36:55.307299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.598 [2024-11-17 09:36:55.307333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.598 qpair failed and we were unable to recover it. 00:36:50.598 [2024-11-17 09:36:55.307451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.598 [2024-11-17 09:36:55.307486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.598 qpair failed and we were unable to recover it. 00:36:50.598 [2024-11-17 09:36:55.307666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.598 [2024-11-17 09:36:55.307716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.598 qpair failed and we were unable to recover it. 00:36:50.598 [2024-11-17 09:36:55.307860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.598 [2024-11-17 09:36:55.307898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.598 qpair failed and we were unable to recover it. 00:36:50.598 [2024-11-17 09:36:55.308150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.598 [2024-11-17 09:36:55.308189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.598 qpair failed and we were unable to recover it. 00:36:50.598 [2024-11-17 09:36:55.308407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.598 [2024-11-17 09:36:55.308445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.598 qpair failed and we were unable to recover it. 00:36:50.598 [2024-11-17 09:36:55.308565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.598 [2024-11-17 09:36:55.308610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.598 qpair failed and we were unable to recover it. 00:36:50.598 [2024-11-17 09:36:55.308725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.598 [2024-11-17 09:36:55.308764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.598 qpair failed and we were unable to recover it. 00:36:50.598 [2024-11-17 09:36:55.309027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.598 [2024-11-17 09:36:55.309086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.598 qpair failed and we were unable to recover it. 00:36:50.598 [2024-11-17 09:36:55.309221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.598 [2024-11-17 09:36:55.309255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.598 qpair failed and we were unable to recover it. 
00:36:50.598 [2024-11-17 09:36:55.309388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.598 [2024-11-17 09:36:55.309424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.598 qpair failed and we were unable to recover it. 00:36:50.598 [2024-11-17 09:36:55.309551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.598 [2024-11-17 09:36:55.309600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.598 qpair failed and we were unable to recover it. 00:36:50.598 [2024-11-17 09:36:55.309782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.598 [2024-11-17 09:36:55.309822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.598 qpair failed and we were unable to recover it. 00:36:50.598 [2024-11-17 09:36:55.309946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.598 [2024-11-17 09:36:55.309984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.598 qpair failed and we were unable to recover it. 00:36:50.598 [2024-11-17 09:36:55.310185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.598 [2024-11-17 09:36:55.310224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.598 qpair failed and we were unable to recover it. 00:36:50.598 [2024-11-17 09:36:55.310404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.598 [2024-11-17 09:36:55.310472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.598 qpair failed and we were unable to recover it. 00:36:50.598 [2024-11-17 09:36:55.310591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.598 [2024-11-17 09:36:55.310627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.598 qpair failed and we were unable to recover it. 00:36:50.598 [2024-11-17 09:36:55.310844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.598 [2024-11-17 09:36:55.310904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.598 qpair failed and we were unable to recover it. 00:36:50.598 [2024-11-17 09:36:55.311124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.598 [2024-11-17 09:36:55.311163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.598 qpair failed and we were unable to recover it. 00:36:50.598 [2024-11-17 09:36:55.311355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.598 [2024-11-17 09:36:55.311431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.598 qpair failed and we were unable to recover it. 
00:36:50.598 [2024-11-17 09:36:55.311570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.598 [2024-11-17 09:36:55.311620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.598 qpair failed and we were unable to recover it. 00:36:50.598 [2024-11-17 09:36:55.311881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.598 [2024-11-17 09:36:55.311962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.598 qpair failed and we were unable to recover it. 00:36:50.598 [2024-11-17 09:36:55.312123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.598 [2024-11-17 09:36:55.312160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.598 qpair failed and we were unable to recover it. 00:36:50.598 [2024-11-17 09:36:55.312330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.598 [2024-11-17 09:36:55.312377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.598 qpair failed and we were unable to recover it. 00:36:50.598 [2024-11-17 09:36:55.312503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.598 [2024-11-17 09:36:55.312556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.598 qpair failed and we were unable to recover it. 00:36:50.598 [2024-11-17 09:36:55.312675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.598 [2024-11-17 09:36:55.312714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.598 qpair failed and we were unable to recover it. 00:36:50.598 [2024-11-17 09:36:55.312855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.598 [2024-11-17 09:36:55.312892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.598 qpair failed and we were unable to recover it. 00:36:50.598 [2024-11-17 09:36:55.313067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.598 [2024-11-17 09:36:55.313105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.598 qpair failed and we were unable to recover it. 00:36:50.598 [2024-11-17 09:36:55.313263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.598 [2024-11-17 09:36:55.313317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.598 qpair failed and we were unable to recover it. 00:36:50.598 [2024-11-17 09:36:55.313493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.598 [2024-11-17 09:36:55.313530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.598 qpair failed and we were unable to recover it. 
00:36:50.598 [2024-11-17 09:36:55.313665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.598 [2024-11-17 09:36:55.313704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.598 qpair failed and we were unable to recover it. 00:36:50.598 [2024-11-17 09:36:55.313852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.598 [2024-11-17 09:36:55.313899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.598 qpair failed and we were unable to recover it. 00:36:50.598 [2024-11-17 09:36:55.314109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.598 [2024-11-17 09:36:55.314175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.598 qpair failed and we were unable to recover it. 00:36:50.598 [2024-11-17 09:36:55.314310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.598 [2024-11-17 09:36:55.314354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.598 qpair failed and we were unable to recover it. 00:36:50.598 [2024-11-17 09:36:55.314519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.598 [2024-11-17 09:36:55.314568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.598 qpair failed and we were unable to recover it. 00:36:50.598 [2024-11-17 09:36:55.314731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.598 [2024-11-17 09:36:55.314771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.598 qpair failed and we were unable to recover it. 00:36:50.598 [2024-11-17 09:36:55.314974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.598 [2024-11-17 09:36:55.315047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.598 qpair failed and we were unable to recover it. 00:36:50.598 [2024-11-17 09:36:55.315257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.598 [2024-11-17 09:36:55.315295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.598 qpair failed and we were unable to recover it. 00:36:50.598 [2024-11-17 09:36:55.315456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.599 [2024-11-17 09:36:55.315505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.599 qpair failed and we were unable to recover it. 00:36:50.599 [2024-11-17 09:36:55.315679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.599 [2024-11-17 09:36:55.315734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.599 qpair failed and we were unable to recover it. 
00:36:50.599 [2024-11-17 09:36:55.315927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.599 [2024-11-17 09:36:55.315965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.599 qpair failed and we were unable to recover it. 00:36:50.599 [2024-11-17 09:36:55.316126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.599 [2024-11-17 09:36:55.316224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.599 qpair failed and we were unable to recover it. 00:36:50.599 [2024-11-17 09:36:55.316381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.599 [2024-11-17 09:36:55.316432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.599 qpair failed and we were unable to recover it. 00:36:50.599 [2024-11-17 09:36:55.316569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.599 [2024-11-17 09:36:55.316603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.599 qpair failed and we were unable to recover it. 00:36:50.599 [2024-11-17 09:36:55.316735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.599 [2024-11-17 09:36:55.316769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.599 qpair failed and we were unable to recover it. 00:36:50.599 [2024-11-17 09:36:55.316879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.599 [2024-11-17 09:36:55.316916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.599 qpair failed and we were unable to recover it. 00:36:50.599 [2024-11-17 09:36:55.317041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.599 [2024-11-17 09:36:55.317092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.599 qpair failed and we were unable to recover it. 00:36:50.599 [2024-11-17 09:36:55.317272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.599 [2024-11-17 09:36:55.317326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.599 qpair failed and we were unable to recover it. 00:36:50.599 [2024-11-17 09:36:55.317515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.599 [2024-11-17 09:36:55.317565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.599 qpair failed and we were unable to recover it. 00:36:50.599 [2024-11-17 09:36:55.317726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.599 [2024-11-17 09:36:55.317775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.599 qpair failed and we were unable to recover it. 
00:36:50.599 [2024-11-17 09:36:55.317945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.599 [2024-11-17 09:36:55.317985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.599 qpair failed and we were unable to recover it. 00:36:50.599 [2024-11-17 09:36:55.318104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.599 [2024-11-17 09:36:55.318142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.599 qpair failed and we were unable to recover it. 00:36:50.599 [2024-11-17 09:36:55.318271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.599 [2024-11-17 09:36:55.318309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.599 qpair failed and we were unable to recover it. 00:36:50.599 [2024-11-17 09:36:55.318472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.599 [2024-11-17 09:36:55.318507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.599 qpair failed and we were unable to recover it. 00:36:50.599 [2024-11-17 09:36:55.318637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.599 [2024-11-17 09:36:55.318686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.599 qpair failed and we were unable to recover it. 00:36:50.599 [2024-11-17 09:36:55.318874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.599 [2024-11-17 09:36:55.318928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.599 qpair failed and we were unable to recover it. 00:36:50.599 [2024-11-17 09:36:55.319117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.599 [2024-11-17 09:36:55.319165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.599 qpair failed and we were unable to recover it. 00:36:50.599 [2024-11-17 09:36:55.319298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.599 [2024-11-17 09:36:55.319338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.599 qpair failed and we were unable to recover it. 00:36:50.599 [2024-11-17 09:36:55.319526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.599 [2024-11-17 09:36:55.319581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.599 qpair failed and we were unable to recover it. 00:36:50.599 [2024-11-17 09:36:55.319772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.599 [2024-11-17 09:36:55.319811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.599 qpair failed and we were unable to recover it. 
00:36:50.599 [2024-11-17 09:36:55.319929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.599 [2024-11-17 09:36:55.319976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.599 qpair failed and we were unable to recover it. 00:36:50.599 [2024-11-17 09:36:55.320158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.599 [2024-11-17 09:36:55.320196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.599 qpair failed and we were unable to recover it. 00:36:50.599 [2024-11-17 09:36:55.320316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.599 [2024-11-17 09:36:55.320353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.599 qpair failed and we were unable to recover it. 00:36:50.599 [2024-11-17 09:36:55.320544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.599 [2024-11-17 09:36:55.320580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.599 qpair failed and we were unable to recover it. 00:36:50.599 [2024-11-17 09:36:55.320744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.599 [2024-11-17 09:36:55.320782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.599 qpair failed and we were unable to recover it. 00:36:50.599 [2024-11-17 09:36:55.320993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.599 [2024-11-17 09:36:55.321055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.599 qpair failed and we were unable to recover it. 00:36:50.599 [2024-11-17 09:36:55.321166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.599 [2024-11-17 09:36:55.321203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.599 qpair failed and we were unable to recover it. 00:36:50.599 [2024-11-17 09:36:55.321394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.599 [2024-11-17 09:36:55.321432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.599 qpair failed and we were unable to recover it. 00:36:50.599 [2024-11-17 09:36:55.321605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.599 [2024-11-17 09:36:55.321660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.599 qpair failed and we were unable to recover it. 00:36:50.599 [2024-11-17 09:36:55.321896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.599 [2024-11-17 09:36:55.321938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.599 qpair failed and we were unable to recover it. 
00:36:50.599 [2024-11-17 09:36:55.322093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.599 [2024-11-17 09:36:55.322132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.599 qpair failed and we were unable to recover it. 00:36:50.599 [2024-11-17 09:36:55.322258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.599 [2024-11-17 09:36:55.322293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.599 qpair failed and we were unable to recover it. 00:36:50.599 [2024-11-17 09:36:55.322435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.599 [2024-11-17 09:36:55.322472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.599 qpair failed and we were unable to recover it. 00:36:50.599 [2024-11-17 09:36:55.322605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.599 [2024-11-17 09:36:55.322672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.599 qpair failed and we were unable to recover it. 00:36:50.599 [2024-11-17 09:36:55.322870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.599 [2024-11-17 09:36:55.322911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.599 qpair failed and we were unable to recover it. 00:36:50.600 [2024-11-17 09:36:55.323063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.600 [2024-11-17 09:36:55.323102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.600 qpair failed and we were unable to recover it. 00:36:50.600 [2024-11-17 09:36:55.323275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.600 [2024-11-17 09:36:55.323310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.600 qpair failed and we were unable to recover it. 00:36:50.600 [2024-11-17 09:36:55.323477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.600 [2024-11-17 09:36:55.323513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.600 qpair failed and we were unable to recover it. 00:36:50.600 [2024-11-17 09:36:55.323637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.600 [2024-11-17 09:36:55.323706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.600 qpair failed and we were unable to recover it. 00:36:50.600 [2024-11-17 09:36:55.323918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.600 [2024-11-17 09:36:55.323978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.600 qpair failed and we were unable to recover it. 
00:36:50.600 [2024-11-17 09:36:55.324130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.600 [2024-11-17 09:36:55.324176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:36:50.600 qpair failed and we were unable to recover it.
00:36:50.600 [... the same three-line failure repeats continuously from 09:36:55.324 through 09:36:55.367 (console time 00:36:50.600-00:36:50.605): posix.c:1054:posix_sock_create reports connect() failed, errno = 111 (ECONNREFUSED), nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock reports a sock connection error for tqpairs 0x61500021ff00, 0x615000210000, 0x6150001f2f00, and 0x6150001ffe80, all targeting addr=10.0.0.2, port=4420, and each attempt ends with "qpair failed and we were unable to recover it." ...]
00:36:50.605 [2024-11-17 09:36:55.367546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.605 [2024-11-17 09:36:55.367582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.605 qpair failed and we were unable to recover it. 00:36:50.605 [2024-11-17 09:36:55.367713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.605 [2024-11-17 09:36:55.367747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.605 qpair failed and we were unable to recover it. 00:36:50.605 [2024-11-17 09:36:55.367886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.605 [2024-11-17 09:36:55.367922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.605 qpair failed and we were unable to recover it. 00:36:50.605 [2024-11-17 09:36:55.368163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.605 [2024-11-17 09:36:55.368221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.605 qpair failed and we were unable to recover it. 00:36:50.605 [2024-11-17 09:36:55.368391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.605 [2024-11-17 09:36:55.368427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.605 qpair failed and we were unable to recover it. 00:36:50.605 [2024-11-17 09:36:55.368572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.605 [2024-11-17 09:36:55.368625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.605 qpair failed and we were unable to recover it. 00:36:50.605 [2024-11-17 09:36:55.368781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.605 [2024-11-17 09:36:55.368834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.605 qpair failed and we were unable to recover it. 00:36:50.605 [2024-11-17 09:36:55.368984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.605 [2024-11-17 09:36:55.369035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.605 qpair failed and we were unable to recover it. 00:36:50.605 [2024-11-17 09:36:55.369203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.605 [2024-11-17 09:36:55.369238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.605 qpair failed and we were unable to recover it. 00:36:50.605 [2024-11-17 09:36:55.369456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.605 [2024-11-17 09:36:55.369512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.605 qpair failed and we were unable to recover it. 
00:36:50.605 [2024-11-17 09:36:55.369638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.605 [2024-11-17 09:36:55.369686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.605 qpair failed and we were unable to recover it. 00:36:50.605 [2024-11-17 09:36:55.369942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.605 [2024-11-17 09:36:55.370002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.605 qpair failed and we were unable to recover it. 00:36:50.605 [2024-11-17 09:36:55.370221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.605 [2024-11-17 09:36:55.370280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.605 qpair failed and we were unable to recover it. 00:36:50.605 [2024-11-17 09:36:55.370425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.605 [2024-11-17 09:36:55.370491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.605 qpair failed and we were unable to recover it. 00:36:50.605 [2024-11-17 09:36:55.370687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.605 [2024-11-17 09:36:55.370740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.605 qpair failed and we were unable to recover it. 00:36:50.605 [2024-11-17 09:36:55.370916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.605 [2024-11-17 09:36:55.370990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.605 qpair failed and we were unable to recover it. 00:36:50.605 [2024-11-17 09:36:55.371150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.605 [2024-11-17 09:36:55.371184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.605 qpair failed and we were unable to recover it. 00:36:50.605 [2024-11-17 09:36:55.371302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.605 [2024-11-17 09:36:55.371337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.605 qpair failed and we were unable to recover it. 00:36:50.605 [2024-11-17 09:36:55.371498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.605 [2024-11-17 09:36:55.371553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.605 qpair failed and we were unable to recover it. 00:36:50.605 [2024-11-17 09:36:55.371810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.605 [2024-11-17 09:36:55.371864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.605 qpair failed and we were unable to recover it. 
00:36:50.605 [2024-11-17 09:36:55.372056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.605 [2024-11-17 09:36:55.372097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.605 qpair failed and we were unable to recover it. 00:36:50.605 [2024-11-17 09:36:55.372279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.605 [2024-11-17 09:36:55.372314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.605 qpair failed and we were unable to recover it. 00:36:50.605 [2024-11-17 09:36:55.372463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.605 [2024-11-17 09:36:55.372501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.605 qpair failed and we were unable to recover it. 00:36:50.605 [2024-11-17 09:36:55.372671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.605 [2024-11-17 09:36:55.372725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.605 qpair failed and we were unable to recover it. 00:36:50.605 [2024-11-17 09:36:55.372959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.605 [2024-11-17 09:36:55.372995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.605 qpair failed and we were unable to recover it. 00:36:50.605 [2024-11-17 09:36:55.373263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.605 [2024-11-17 09:36:55.373328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.606 qpair failed and we were unable to recover it. 00:36:50.606 [2024-11-17 09:36:55.373487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.606 [2024-11-17 09:36:55.373522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.606 qpair failed and we were unable to recover it. 00:36:50.606 [2024-11-17 09:36:55.373629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.606 [2024-11-17 09:36:55.373664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.606 qpair failed and we were unable to recover it. 00:36:50.606 [2024-11-17 09:36:55.373819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.606 [2024-11-17 09:36:55.373873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.606 qpair failed and we were unable to recover it. 00:36:50.606 [2024-11-17 09:36:55.374010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.606 [2024-11-17 09:36:55.374044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.606 qpair failed and we were unable to recover it. 
00:36:50.606 [2024-11-17 09:36:55.374187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.606 [2024-11-17 09:36:55.374226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.606 qpair failed and we were unable to recover it. 00:36:50.606 [2024-11-17 09:36:55.374411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.606 [2024-11-17 09:36:55.374460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.606 qpair failed and we were unable to recover it. 00:36:50.606 [2024-11-17 09:36:55.374643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.606 [2024-11-17 09:36:55.374692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.606 qpair failed and we were unable to recover it. 00:36:50.606 [2024-11-17 09:36:55.374847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.606 [2024-11-17 09:36:55.374883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.606 qpair failed and we were unable to recover it. 00:36:50.606 [2024-11-17 09:36:55.375011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.606 [2024-11-17 09:36:55.375064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.606 qpair failed and we were unable to recover it. 00:36:50.606 [2024-11-17 09:36:55.375199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.606 [2024-11-17 09:36:55.375234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.606 qpair failed and we were unable to recover it. 00:36:50.606 [2024-11-17 09:36:55.375374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.606 [2024-11-17 09:36:55.375409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.606 qpair failed and we were unable to recover it. 00:36:50.606 [2024-11-17 09:36:55.375602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.606 [2024-11-17 09:36:55.375659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.606 qpair failed and we were unable to recover it. 00:36:50.606 [2024-11-17 09:36:55.375818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.606 [2024-11-17 09:36:55.375853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.606 qpair failed and we were unable to recover it. 00:36:50.606 [2024-11-17 09:36:55.375962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.606 [2024-11-17 09:36:55.375997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.606 qpair failed and we were unable to recover it. 
00:36:50.606 [2024-11-17 09:36:55.376154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.606 [2024-11-17 09:36:55.376189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.606 qpair failed and we were unable to recover it. 00:36:50.606 [2024-11-17 09:36:55.376293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.606 [2024-11-17 09:36:55.376328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.606 qpair failed and we were unable to recover it. 00:36:50.606 [2024-11-17 09:36:55.376465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.606 [2024-11-17 09:36:55.376508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.606 qpair failed and we were unable to recover it. 00:36:50.606 [2024-11-17 09:36:55.376681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.606 [2024-11-17 09:36:55.376721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.606 qpair failed and we were unable to recover it. 00:36:50.606 [2024-11-17 09:36:55.376932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.606 [2024-11-17 09:36:55.377019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.606 qpair failed and we were unable to recover it. 00:36:50.606 [2024-11-17 09:36:55.377166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.606 [2024-11-17 09:36:55.377203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.606 qpair failed and we were unable to recover it. 00:36:50.606 [2024-11-17 09:36:55.377341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.606 [2024-11-17 09:36:55.377387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.606 qpair failed and we were unable to recover it. 00:36:50.606 [2024-11-17 09:36:55.377522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.606 [2024-11-17 09:36:55.377557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.606 qpair failed and we were unable to recover it. 00:36:50.606 [2024-11-17 09:36:55.377716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.606 [2024-11-17 09:36:55.377770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.606 qpair failed and we were unable to recover it. 00:36:50.606 [2024-11-17 09:36:55.377925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.606 [2024-11-17 09:36:55.377976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.606 qpair failed and we were unable to recover it. 
00:36:50.606 [2024-11-17 09:36:55.378118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.606 [2024-11-17 09:36:55.378158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.606 qpair failed and we were unable to recover it. 00:36:50.606 [2024-11-17 09:36:55.378295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.606 [2024-11-17 09:36:55.378330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.606 qpair failed and we were unable to recover it. 00:36:50.606 [2024-11-17 09:36:55.378497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.606 [2024-11-17 09:36:55.378549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.606 qpair failed and we were unable to recover it. 00:36:50.606 [2024-11-17 09:36:55.378708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.606 [2024-11-17 09:36:55.378761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.606 qpair failed and we were unable to recover it. 00:36:50.606 [2024-11-17 09:36:55.379022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.606 [2024-11-17 09:36:55.379088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.606 qpair failed and we were unable to recover it. 00:36:50.606 [2024-11-17 09:36:55.379244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.606 [2024-11-17 09:36:55.379282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.606 qpair failed and we were unable to recover it. 00:36:50.606 [2024-11-17 09:36:55.379446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.606 [2024-11-17 09:36:55.379480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.606 qpair failed and we were unable to recover it. 00:36:50.606 [2024-11-17 09:36:55.379591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.606 [2024-11-17 09:36:55.379625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.606 qpair failed and we were unable to recover it. 00:36:50.606 [2024-11-17 09:36:55.379784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.606 [2024-11-17 09:36:55.379818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.606 qpair failed and we were unable to recover it. 00:36:50.606 [2024-11-17 09:36:55.379969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.606 [2024-11-17 09:36:55.380024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.606 qpair failed and we were unable to recover it. 
00:36:50.606 [2024-11-17 09:36:55.380133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.606 [2024-11-17 09:36:55.380167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.606 qpair failed and we were unable to recover it. 00:36:50.606 [2024-11-17 09:36:55.380312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.606 [2024-11-17 09:36:55.380347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.606 qpair failed and we were unable to recover it. 00:36:50.606 [2024-11-17 09:36:55.380494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.606 [2024-11-17 09:36:55.380529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.606 qpair failed and we were unable to recover it. 00:36:50.606 [2024-11-17 09:36:55.380655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.606 [2024-11-17 09:36:55.380703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.606 qpair failed and we were unable to recover it. 00:36:50.606 [2024-11-17 09:36:55.380824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.606 [2024-11-17 09:36:55.380880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.606 qpair failed and we were unable to recover it. 00:36:50.606 [2024-11-17 09:36:55.381057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.606 [2024-11-17 09:36:55.381093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.606 qpair failed and we were unable to recover it. 00:36:50.606 [2024-11-17 09:36:55.381287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.606 [2024-11-17 09:36:55.381322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.607 qpair failed and we were unable to recover it. 00:36:50.607 [2024-11-17 09:36:55.381490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.607 [2024-11-17 09:36:55.381540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.607 qpair failed and we were unable to recover it. 00:36:50.607 [2024-11-17 09:36:55.381733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.607 [2024-11-17 09:36:55.381774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.607 qpair failed and we were unable to recover it. 00:36:50.607 [2024-11-17 09:36:55.381933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.607 [2024-11-17 09:36:55.382005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.607 qpair failed and we were unable to recover it. 
00:36:50.607 [2024-11-17 09:36:55.382123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.607 [2024-11-17 09:36:55.382158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.607 qpair failed and we were unable to recover it. 00:36:50.607 [2024-11-17 09:36:55.382291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.607 [2024-11-17 09:36:55.382325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.607 qpair failed and we were unable to recover it. 00:36:50.607 [2024-11-17 09:36:55.382491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.607 [2024-11-17 09:36:55.382544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.607 qpair failed and we were unable to recover it. 00:36:50.607 [2024-11-17 09:36:55.382704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.607 [2024-11-17 09:36:55.382763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.607 qpair failed and we were unable to recover it. 00:36:50.607 [2024-11-17 09:36:55.382992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.607 [2024-11-17 09:36:55.383063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.607 qpair failed and we were unable to recover it. 00:36:50.607 [2024-11-17 09:36:55.383284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.607 [2024-11-17 09:36:55.383325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.607 qpair failed and we were unable to recover it. 00:36:50.607 [2024-11-17 09:36:55.383517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.607 [2024-11-17 09:36:55.383557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.607 qpair failed and we were unable to recover it. 00:36:50.607 [2024-11-17 09:36:55.383694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.607 [2024-11-17 09:36:55.383730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.607 qpair failed and we were unable to recover it. 00:36:50.607 [2024-11-17 09:36:55.383992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.607 [2024-11-17 09:36:55.384056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.607 qpair failed and we were unable to recover it. 00:36:50.607 [2024-11-17 09:36:55.384165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.607 [2024-11-17 09:36:55.384199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.607 qpair failed and we were unable to recover it. 
00:36:50.607 [2024-11-17 09:36:55.384330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.607 [2024-11-17 09:36:55.384398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.607 qpair failed and we were unable to recover it. 00:36:50.607 [2024-11-17 09:36:55.384555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.607 [2024-11-17 09:36:55.384608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.607 qpair failed and we were unable to recover it. 00:36:50.607 [2024-11-17 09:36:55.384794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.607 [2024-11-17 09:36:55.384845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.607 qpair failed and we were unable to recover it. 00:36:50.607 [2024-11-17 09:36:55.385090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.607 [2024-11-17 09:36:55.385146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.607 qpair failed and we were unable to recover it. 00:36:50.607 [2024-11-17 09:36:55.385361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.607 [2024-11-17 09:36:55.385404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.607 qpair failed and we were unable to recover it. 00:36:50.607 [2024-11-17 09:36:55.385530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.607 [2024-11-17 09:36:55.385564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.607 qpair failed and we were unable to recover it. 00:36:50.607 [2024-11-17 09:36:55.385684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.607 [2024-11-17 09:36:55.385719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.607 qpair failed and we were unable to recover it. 00:36:50.607 [2024-11-17 09:36:55.385831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.607 [2024-11-17 09:36:55.385866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.607 qpair failed and we were unable to recover it. 00:36:50.607 [2024-11-17 09:36:55.385978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.607 [2024-11-17 09:36:55.386014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.607 qpair failed and we were unable to recover it. 00:36:50.607 [2024-11-17 09:36:55.386183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.607 [2024-11-17 09:36:55.386221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.607 qpair failed and we were unable to recover it. 
00:36:50.607 [2024-11-17 09:36:55.386346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.607 [2024-11-17 09:36:55.386409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.607 qpair failed and we were unable to recover it. 00:36:50.607 [2024-11-17 09:36:55.386542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.607 [2024-11-17 09:36:55.386591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.607 qpair failed and we were unable to recover it. 00:36:50.607 [2024-11-17 09:36:55.386758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.607 [2024-11-17 09:36:55.386793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.607 qpair failed and we were unable to recover it. 00:36:50.607 [2024-11-17 09:36:55.386931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.607 [2024-11-17 09:36:55.386966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.607 qpair failed and we were unable to recover it. 00:36:50.607 [2024-11-17 09:36:55.387086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.607 [2024-11-17 09:36:55.387121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.607 qpair failed and we were unable to recover it. 00:36:50.607 [2024-11-17 09:36:55.387288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.607 [2024-11-17 09:36:55.387323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.607 qpair failed and we were unable to recover it. 00:36:50.607 [2024-11-17 09:36:55.387444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.607 [2024-11-17 09:36:55.387491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.607 qpair failed and we were unable to recover it. 00:36:50.607 [2024-11-17 09:36:55.387602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.607 [2024-11-17 09:36:55.387636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.607 qpair failed and we were unable to recover it. 00:36:50.607 [2024-11-17 09:36:55.387796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.607 [2024-11-17 09:36:55.387830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.607 qpair failed and we were unable to recover it. 00:36:50.607 [2024-11-17 09:36:55.387975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.607 [2024-11-17 09:36:55.388010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.607 qpair failed and we were unable to recover it. 
00:36:50.607 [2024-11-17 09:36:55.388157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.607 [2024-11-17 09:36:55.388192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.607 qpair failed and we were unable to recover it. 00:36:50.607 [2024-11-17 09:36:55.388354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.607 [2024-11-17 09:36:55.388396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.607 qpair failed and we were unable to recover it. 00:36:50.607 [2024-11-17 09:36:55.388519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.607 [2024-11-17 09:36:55.388568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.607 qpair failed and we were unable to recover it. 00:36:50.607 [2024-11-17 09:36:55.388734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.607 [2024-11-17 09:36:55.388783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.607 qpair failed and we were unable to recover it. 00:36:50.607 [2024-11-17 09:36:55.388938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.607 [2024-11-17 09:36:55.388976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.607 qpair failed and we were unable to recover it. 00:36:50.607 [2024-11-17 09:36:55.389102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.607 [2024-11-17 09:36:55.389149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.607 qpair failed and we were unable to recover it. 00:36:50.607 [2024-11-17 09:36:55.389287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.608 [2024-11-17 09:36:55.389321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.608 qpair failed and we were unable to recover it. 00:36:50.608 [2024-11-17 09:36:55.389472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.608 [2024-11-17 09:36:55.389521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.608 qpair failed and we were unable to recover it. 00:36:50.608 [2024-11-17 09:36:55.389650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.608 [2024-11-17 09:36:55.389686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.608 qpair failed and we were unable to recover it. 00:36:50.608 [2024-11-17 09:36:55.389824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.608 [2024-11-17 09:36:55.389859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.608 qpair failed and we were unable to recover it. 
00:36:50.608 [2024-11-17 09:36:55.390029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.608 [2024-11-17 09:36:55.390064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.608 qpair failed and we were unable to recover it. 00:36:50.608 [2024-11-17 09:36:55.390172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.608 [2024-11-17 09:36:55.390206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.608 qpair failed and we were unable to recover it. 00:36:50.608 [2024-11-17 09:36:55.390315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.608 [2024-11-17 09:36:55.390360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.608 qpair failed and we were unable to recover it. 00:36:50.608 [2024-11-17 09:36:55.390480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.608 [2024-11-17 09:36:55.390515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.608 qpair failed and we were unable to recover it. 00:36:50.608 [2024-11-17 09:36:55.390656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.608 [2024-11-17 09:36:55.390706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.608 qpair failed and we were unable to recover it. 00:36:50.608 [2024-11-17 09:36:55.390841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.608 [2024-11-17 09:36:55.390889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.608 qpair failed and we were unable to recover it. 00:36:50.608 [2024-11-17 09:36:55.391063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.608 [2024-11-17 09:36:55.391100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.608 qpair failed and we were unable to recover it. 00:36:50.608 [2024-11-17 09:36:55.391234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.608 [2024-11-17 09:36:55.391269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.608 qpair failed and we were unable to recover it. 00:36:50.608 [2024-11-17 09:36:55.391396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.608 [2024-11-17 09:36:55.391435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.608 qpair failed and we were unable to recover it. 00:36:50.608 [2024-11-17 09:36:55.391559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.608 [2024-11-17 09:36:55.391597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.608 qpair failed and we were unable to recover it. 
00:36:50.608 [2024-11-17 09:36:55.391717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.608 [2024-11-17 09:36:55.391752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.608 qpair failed and we were unable to recover it. 00:36:50.608 [2024-11-17 09:36:55.391914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.608 [2024-11-17 09:36:55.391948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.608 qpair failed and we were unable to recover it. 00:36:50.608 [2024-11-17 09:36:55.392087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.608 [2024-11-17 09:36:55.392121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.608 qpair failed and we were unable to recover it. 00:36:50.608 [2024-11-17 09:36:55.392223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.608 [2024-11-17 09:36:55.392257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.608 qpair failed and we were unable to recover it. 00:36:50.608 [2024-11-17 09:36:55.392411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.608 [2024-11-17 09:36:55.392447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.608 qpair failed and we were unable to recover it. 00:36:50.608 [2024-11-17 09:36:55.392590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.608 [2024-11-17 09:36:55.392627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.608 qpair failed and we were unable to recover it. 00:36:50.608 [2024-11-17 09:36:55.392783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.608 [2024-11-17 09:36:55.392818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.608 qpair failed and we were unable to recover it. 00:36:50.608 [2024-11-17 09:36:55.392959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.608 [2024-11-17 09:36:55.392993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.608 qpair failed and we were unable to recover it. 00:36:50.608 [2024-11-17 09:36:55.393123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.608 [2024-11-17 09:36:55.393157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.608 qpair failed and we were unable to recover it. 00:36:50.608 [2024-11-17 09:36:55.393313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.608 [2024-11-17 09:36:55.393383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.608 qpair failed and we were unable to recover it. 
00:36:50.608 [2024-11-17 09:36:55.393503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.608 [2024-11-17 09:36:55.393545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.608 qpair failed and we were unable to recover it. 00:36:50.608 [2024-11-17 09:36:55.393653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.608 [2024-11-17 09:36:55.393688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.608 qpair failed and we were unable to recover it. 00:36:50.608 [2024-11-17 09:36:55.393830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.608 [2024-11-17 09:36:55.393864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.608 qpair failed and we were unable to recover it. 00:36:50.608 [2024-11-17 09:36:55.394032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.608 [2024-11-17 09:36:55.394067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.608 qpair failed and we were unable to recover it. 00:36:50.608 [2024-11-17 09:36:55.394171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.608 [2024-11-17 09:36:55.394207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.608 qpair failed and we were unable to recover it. 00:36:50.608 [2024-11-17 09:36:55.394344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.608 [2024-11-17 09:36:55.394395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.608 qpair failed and we were unable to recover it. 00:36:50.608 [2024-11-17 09:36:55.394556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.608 [2024-11-17 09:36:55.394606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.608 qpair failed and we were unable to recover it. 00:36:50.608 [2024-11-17 09:36:55.394763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.608 [2024-11-17 09:36:55.394801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.608 qpair failed and we were unable to recover it. 00:36:50.608 [2024-11-17 09:36:55.394941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.608 [2024-11-17 09:36:55.394977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.608 qpair failed and we were unable to recover it. 00:36:50.608 [2024-11-17 09:36:55.395138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.608 [2024-11-17 09:36:55.395173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.608 qpair failed and we were unable to recover it. 
00:36:50.608 [2024-11-17 09:36:55.395311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.608 [2024-11-17 09:36:55.395357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.608 qpair failed and we were unable to recover it. 00:36:50.608 [2024-11-17 09:36:55.395485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.608 [2024-11-17 09:36:55.395521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.608 qpair failed and we were unable to recover it. 00:36:50.608 [2024-11-17 09:36:55.395656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.608 [2024-11-17 09:36:55.395705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.608 qpair failed and we were unable to recover it. 00:36:50.609 [2024-11-17 09:36:55.395815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.609 [2024-11-17 09:36:55.395851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.609 qpair failed and we were unable to recover it. 00:36:50.609 [2024-11-17 09:36:55.395968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.609 [2024-11-17 09:36:55.396003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.609 qpair failed and we were unable to recover it. 00:36:50.609 [2024-11-17 09:36:55.396134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.609 [2024-11-17 09:36:55.396168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.609 qpair failed and we were unable to recover it. 00:36:50.609 [2024-11-17 09:36:55.396327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.609 [2024-11-17 09:36:55.396378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.609 qpair failed and we were unable to recover it. 00:36:50.609 [2024-11-17 09:36:55.396492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.609 [2024-11-17 09:36:55.396527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.609 qpair failed and we were unable to recover it. 00:36:50.609 [2024-11-17 09:36:55.396655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.609 [2024-11-17 09:36:55.396692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.609 qpair failed and we were unable to recover it. 00:36:50.609 [2024-11-17 09:36:55.396860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.609 [2024-11-17 09:36:55.396895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.609 qpair failed and we were unable to recover it. 
00:36:50.609 [2024-11-17 09:36:55.397018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.609 [2024-11-17 09:36:55.397068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.609 qpair failed and we were unable to recover it. 00:36:50.609 [2024-11-17 09:36:55.397219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.609 [2024-11-17 09:36:55.397254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.609 qpair failed and we were unable to recover it. 00:36:50.609 [2024-11-17 09:36:55.397397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.609 [2024-11-17 09:36:55.397432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.609 qpair failed and we were unable to recover it. 00:36:50.609 [2024-11-17 09:36:55.397542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.609 [2024-11-17 09:36:55.397577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.609 qpair failed and we were unable to recover it. 00:36:50.609 [2024-11-17 09:36:55.397754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.609 [2024-11-17 09:36:55.397788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.609 qpair failed and we were unable to recover it. 00:36:50.609 [2024-11-17 09:36:55.397924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.609 [2024-11-17 09:36:55.397958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.609 qpair failed and we were unable to recover it. 00:36:50.609 [2024-11-17 09:36:55.398064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.609 [2024-11-17 09:36:55.398100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.609 qpair failed and we were unable to recover it. 00:36:50.609 [2024-11-17 09:36:55.398243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.609 [2024-11-17 09:36:55.398282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.609 qpair failed and we were unable to recover it. 00:36:50.609 [2024-11-17 09:36:55.398439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.609 [2024-11-17 09:36:55.398475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.609 qpair failed and we were unable to recover it. 00:36:50.609 [2024-11-17 09:36:55.398576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.609 [2024-11-17 09:36:55.398611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.609 qpair failed and we were unable to recover it. 
00:36:50.609 [2024-11-17 09:36:55.398784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.609 [2024-11-17 09:36:55.398818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.609 qpair failed and we were unable to recover it. 00:36:50.609 [2024-11-17 09:36:55.398944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.609 [2024-11-17 09:36:55.398994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.609 qpair failed and we were unable to recover it. 00:36:50.609 [2024-11-17 09:36:55.399137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.609 [2024-11-17 09:36:55.399174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.609 qpair failed and we were unable to recover it. 00:36:50.609 [2024-11-17 09:36:55.399293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.609 [2024-11-17 09:36:55.399329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.609 qpair failed and we were unable to recover it. 00:36:50.609 [2024-11-17 09:36:55.399491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.609 [2024-11-17 09:36:55.399527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.609 qpair failed and we were unable to recover it. 00:36:50.609 [2024-11-17 09:36:55.399665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.609 [2024-11-17 09:36:55.399701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.609 qpair failed and we were unable to recover it. 00:36:50.609 [2024-11-17 09:36:55.399816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.609 [2024-11-17 09:36:55.399852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.609 qpair failed and we were unable to recover it. 00:36:50.609 [2024-11-17 09:36:55.399981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.609 [2024-11-17 09:36:55.400017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.609 qpair failed and we were unable to recover it. 00:36:50.609 [2024-11-17 09:36:55.400181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.609 [2024-11-17 09:36:55.400216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.609 qpair failed and we were unable to recover it. 00:36:50.609 [2024-11-17 09:36:55.400345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.609 [2024-11-17 09:36:55.400397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.609 qpair failed and we were unable to recover it. 
00:36:50.609 [2024-11-17 09:36:55.400534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.609 [2024-11-17 09:36:55.400574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.609 qpair failed and we were unable to recover it. 00:36:50.609 [2024-11-17 09:36:55.400688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.609 [2024-11-17 09:36:55.400722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.609 qpair failed and we were unable to recover it. 00:36:50.609 [2024-11-17 09:36:55.400850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.609 [2024-11-17 09:36:55.400884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.609 qpair failed and we were unable to recover it. 00:36:50.609 [2024-11-17 09:36:55.401043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.609 [2024-11-17 09:36:55.401079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.609 qpair failed and we were unable to recover it. 00:36:50.609 [2024-11-17 09:36:55.401192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.609 [2024-11-17 09:36:55.401227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.609 qpair failed and we were unable to recover it. 00:36:50.609 [2024-11-17 09:36:55.401344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.609 [2024-11-17 09:36:55.401405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.609 qpair failed and we were unable to recover it. 00:36:50.609 [2024-11-17 09:36:55.401586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.609 [2024-11-17 09:36:55.401622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.609 qpair failed and we were unable to recover it. 00:36:50.609 [2024-11-17 09:36:55.401753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.609 [2024-11-17 09:36:55.401788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.609 qpair failed and we were unable to recover it. 00:36:50.609 [2024-11-17 09:36:55.401897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.609 [2024-11-17 09:36:55.401933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.609 qpair failed and we were unable to recover it. 00:36:50.609 [2024-11-17 09:36:55.402068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.609 [2024-11-17 09:36:55.402102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.609 qpair failed and we were unable to recover it. 
00:36:50.609 [2024-11-17 09:36:55.402235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.609 [2024-11-17 09:36:55.402270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.609 qpair failed and we were unable to recover it. 00:36:50.609 [2024-11-17 09:36:55.402412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.609 [2024-11-17 09:36:55.402447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.609 qpair failed and we were unable to recover it. 00:36:50.609 [2024-11-17 09:36:55.402578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.609 [2024-11-17 09:36:55.402613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.609 qpair failed and we were unable to recover it. 00:36:50.609 [2024-11-17 09:36:55.402781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.609 [2024-11-17 09:36:55.402816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.610 qpair failed and we were unable to recover it. 00:36:50.610 [2024-11-17 09:36:55.402961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.610 [2024-11-17 09:36:55.402995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.610 qpair failed and we were unable to recover it. 00:36:50.610 [2024-11-17 09:36:55.403131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.610 [2024-11-17 09:36:55.403165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.610 qpair failed and we were unable to recover it. 00:36:50.610 [2024-11-17 09:36:55.403318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.610 [2024-11-17 09:36:55.403376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.610 qpair failed and we were unable to recover it. 00:36:50.610 [2024-11-17 09:36:55.403496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.610 [2024-11-17 09:36:55.403533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.610 qpair failed and we were unable to recover it. 00:36:50.610 [2024-11-17 09:36:55.403671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.610 [2024-11-17 09:36:55.403706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.610 qpair failed and we were unable to recover it. 00:36:50.610 [2024-11-17 09:36:55.403848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.610 [2024-11-17 09:36:55.403884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.610 qpair failed and we were unable to recover it. 
00:36:50.610 [2024-11-17 09:36:55.404063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.610 [2024-11-17 09:36:55.404113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.610 qpair failed and we were unable to recover it. 00:36:50.610 [2024-11-17 09:36:55.404242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.610 [2024-11-17 09:36:55.404280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.610 qpair failed and we were unable to recover it. 00:36:50.610 [2024-11-17 09:36:55.404424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.610 [2024-11-17 09:36:55.404460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.610 qpair failed and we were unable to recover it. 00:36:50.610 [2024-11-17 09:36:55.404595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.610 [2024-11-17 09:36:55.404629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.610 qpair failed and we were unable to recover it. 00:36:50.610 [2024-11-17 09:36:55.404790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.610 [2024-11-17 09:36:55.404824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.610 qpair failed and we were unable to recover it. 00:36:50.610 [2024-11-17 09:36:55.404927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.610 [2024-11-17 09:36:55.404962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.610 qpair failed and we were unable to recover it. 00:36:50.610 [2024-11-17 09:36:55.405101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.610 [2024-11-17 09:36:55.405137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.610 qpair failed and we were unable to recover it. 00:36:50.610 [2024-11-17 09:36:55.405273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.610 [2024-11-17 09:36:55.405323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.610 qpair failed and we were unable to recover it. 00:36:50.610 [2024-11-17 09:36:55.405476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.610 [2024-11-17 09:36:55.405514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.610 qpair failed and we were unable to recover it. 00:36:50.610 [2024-11-17 09:36:55.405631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.610 [2024-11-17 09:36:55.405666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.610 qpair failed and we were unable to recover it. 
00:36:50.610 [2024-11-17 09:36:55.405806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.610 [2024-11-17 09:36:55.405841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.610 qpair failed and we were unable to recover it. 00:36:50.610 [2024-11-17 09:36:55.405969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.610 [2024-11-17 09:36:55.406004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.610 qpair failed and we were unable to recover it. 00:36:50.610 [2024-11-17 09:36:55.406151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.610 [2024-11-17 09:36:55.406187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.610 qpair failed and we were unable to recover it. 00:36:50.610 [2024-11-17 09:36:55.406342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.610 [2024-11-17 09:36:55.406401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.610 qpair failed and we were unable to recover it. 00:36:50.610 [2024-11-17 09:36:55.406563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.610 [2024-11-17 09:36:55.406612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.610 qpair failed and we were unable to recover it. 00:36:50.610 [2024-11-17 09:36:55.406760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.610 [2024-11-17 09:36:55.406795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.610 qpair failed and we were unable to recover it. 00:36:50.610 [2024-11-17 09:36:55.406905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.610 [2024-11-17 09:36:55.406939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.610 qpair failed and we were unable to recover it. 00:36:50.610 [2024-11-17 09:36:55.407053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.610 [2024-11-17 09:36:55.407088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.610 qpair failed and we were unable to recover it. 00:36:50.610 [2024-11-17 09:36:55.407230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.610 [2024-11-17 09:36:55.407266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.610 qpair failed and we were unable to recover it. 00:36:50.610 [2024-11-17 09:36:55.407405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.610 [2024-11-17 09:36:55.407442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.610 qpair failed and we were unable to recover it. 
00:36:50.610 [2024-11-17 09:36:55.407548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.610 [2024-11-17 09:36:55.407589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.610 qpair failed and we were unable to recover it. 00:36:50.610 [2024-11-17 09:36:55.407729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.610 [2024-11-17 09:36:55.407764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.610 qpair failed and we were unable to recover it. 00:36:50.610 [2024-11-17 09:36:55.407896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.610 [2024-11-17 09:36:55.407945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.610 qpair failed and we were unable to recover it. 00:36:50.610 [2024-11-17 09:36:55.408101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.610 [2024-11-17 09:36:55.408139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.610 qpair failed and we were unable to recover it. 00:36:50.610 [2024-11-17 09:36:55.408248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.610 [2024-11-17 09:36:55.408283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.610 qpair failed and we were unable to recover it. 00:36:50.610 [2024-11-17 09:36:55.408418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.610 [2024-11-17 09:36:55.408454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.610 qpair failed and we were unable to recover it. 00:36:50.610 [2024-11-17 09:36:55.408592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.610 [2024-11-17 09:36:55.408642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.610 qpair failed and we were unable to recover it. 00:36:50.610 [2024-11-17 09:36:55.408799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.610 [2024-11-17 09:36:55.408835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.610 qpair failed and we were unable to recover it. 00:36:50.610 [2024-11-17 09:36:55.408982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.610 [2024-11-17 09:36:55.409019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.610 qpair failed and we were unable to recover it. 00:36:50.610 [2024-11-17 09:36:55.409156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.610 [2024-11-17 09:36:55.409191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.610 qpair failed and we were unable to recover it. 
00:36:50.610 [2024-11-17 09:36:55.409349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.610 [2024-11-17 09:36:55.409394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.610 qpair failed and we were unable to recover it. 00:36:50.610 [2024-11-17 09:36:55.409505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.610 [2024-11-17 09:36:55.409541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.610 qpair failed and we were unable to recover it. 00:36:50.610 [2024-11-17 09:36:55.409650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.610 [2024-11-17 09:36:55.409686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.610 qpair failed and we were unable to recover it. 00:36:50.610 [2024-11-17 09:36:55.409821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.610 [2024-11-17 09:36:55.409856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.610 qpair failed and we were unable to recover it. 00:36:50.610 [2024-11-17 09:36:55.409999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.611 [2024-11-17 09:36:55.410033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.611 qpair failed and we were unable to recover it. 00:36:50.611 [2024-11-17 09:36:55.410164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.611 [2024-11-17 09:36:55.410198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.611 qpair failed and we were unable to recover it. 00:36:50.611 [2024-11-17 09:36:55.410309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.611 [2024-11-17 09:36:55.410347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.611 qpair failed and we were unable to recover it. 00:36:50.611 [2024-11-17 09:36:55.410518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.611 [2024-11-17 09:36:55.410568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.611 qpair failed and we were unable to recover it. 00:36:50.611 [2024-11-17 09:36:55.410675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.611 [2024-11-17 09:36:55.410712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.611 qpair failed and we were unable to recover it. 00:36:50.611 [2024-11-17 09:36:55.410819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.611 [2024-11-17 09:36:55.410855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.611 qpair failed and we were unable to recover it. 
00:36:50.611 [2024-11-17 09:36:55.410985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.611 [2024-11-17 09:36:55.411020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.611 qpair failed and we were unable to recover it. 00:36:50.611 [2024-11-17 09:36:55.411171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.611 [2024-11-17 09:36:55.411206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.611 qpair failed and we were unable to recover it. 00:36:50.611 [2024-11-17 09:36:55.411341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.611 [2024-11-17 09:36:55.411384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.611 qpair failed and we were unable to recover it. 00:36:50.611 [2024-11-17 09:36:55.411523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.611 [2024-11-17 09:36:55.411559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.611 qpair failed and we were unable to recover it. 00:36:50.611 [2024-11-17 09:36:55.411699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.611 [2024-11-17 09:36:55.411737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.611 qpair failed and we were unable to recover it. 00:36:50.611 [2024-11-17 09:36:55.411898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.611 [2024-11-17 09:36:55.411933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.611 qpair failed and we were unable to recover it. 00:36:50.611 [2024-11-17 09:36:55.412040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.611 [2024-11-17 09:36:55.412074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.611 qpair failed and we were unable to recover it. 00:36:50.611 [2024-11-17 09:36:55.412184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.611 [2024-11-17 09:36:55.412220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.611 qpair failed and we were unable to recover it. 00:36:50.611 [2024-11-17 09:36:55.412393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.611 [2024-11-17 09:36:55.412428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.611 qpair failed and we were unable to recover it. 00:36:50.611 [2024-11-17 09:36:55.412583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.611 [2024-11-17 09:36:55.412632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.611 qpair failed and we were unable to recover it. 
00:36:50.611 [2024-11-17 09:36:55.412754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.611 [2024-11-17 09:36:55.412789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.611 qpair failed and we were unable to recover it. 00:36:50.611 [2024-11-17 09:36:55.412931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.611 [2024-11-17 09:36:55.412965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.611 qpair failed and we were unable to recover it. 00:36:50.611 [2024-11-17 09:36:55.413120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.611 [2024-11-17 09:36:55.413155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.611 qpair failed and we were unable to recover it. 00:36:50.611 [2024-11-17 09:36:55.413313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.611 [2024-11-17 09:36:55.413348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.611 qpair failed and we were unable to recover it. 00:36:50.611 [2024-11-17 09:36:55.413484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.611 [2024-11-17 09:36:55.413519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.611 qpair failed and we were unable to recover it. 00:36:50.611 [2024-11-17 09:36:55.413660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.611 [2024-11-17 09:36:55.413695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.611 qpair failed and we were unable to recover it. 00:36:50.611 [2024-11-17 09:36:55.413810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.611 [2024-11-17 09:36:55.413849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.611 qpair failed and we were unable to recover it. 00:36:50.611 [2024-11-17 09:36:55.414030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.611 [2024-11-17 09:36:55.414080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.611 qpair failed and we were unable to recover it. 00:36:50.611 [2024-11-17 09:36:55.414185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.611 [2024-11-17 09:36:55.414221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.611 qpair failed and we were unable to recover it. 00:36:50.611 [2024-11-17 09:36:55.414327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.611 [2024-11-17 09:36:55.414362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.611 qpair failed and we were unable to recover it. 
00:36:50.611 [2024-11-17 09:36:55.414535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.611 [2024-11-17 09:36:55.414575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.611 qpair failed and we were unable to recover it. 00:36:50.611 [2024-11-17 09:36:55.414688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.611 [2024-11-17 09:36:55.414722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.611 qpair failed and we were unable to recover it. 00:36:50.611 [2024-11-17 09:36:55.414855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.611 [2024-11-17 09:36:55.414890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.611 qpair failed and we were unable to recover it. 00:36:50.611 [2024-11-17 09:36:55.414994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.611 [2024-11-17 09:36:55.415029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.611 qpair failed and we were unable to recover it. 00:36:50.611 [2024-11-17 09:36:55.415167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.611 [2024-11-17 09:36:55.415201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.611 qpair failed and we were unable to recover it. 00:36:50.611 [2024-11-17 09:36:55.415314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.611 [2024-11-17 09:36:55.415350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.611 qpair failed and we were unable to recover it. 00:36:50.611 [2024-11-17 09:36:55.415503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.611 [2024-11-17 09:36:55.415538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.611 qpair failed and we were unable to recover it. 00:36:50.611 [2024-11-17 09:36:55.415647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.611 [2024-11-17 09:36:55.415682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.611 qpair failed and we were unable to recover it. 00:36:50.611 [2024-11-17 09:36:55.415845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.611 [2024-11-17 09:36:55.415880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.611 qpair failed and we were unable to recover it. 00:36:50.611 [2024-11-17 09:36:55.416011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.611 [2024-11-17 09:36:55.416046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.611 qpair failed and we were unable to recover it. 
00:36:50.611 [2024-11-17 09:36:55.416158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.611 [2024-11-17 09:36:55.416197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.611 qpair failed and we were unable to recover it. 00:36:50.611 [2024-11-17 09:36:55.416318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.611 [2024-11-17 09:36:55.416354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.611 qpair failed and we were unable to recover it. 00:36:50.611 [2024-11-17 09:36:55.416481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.611 [2024-11-17 09:36:55.416517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.611 qpair failed and we were unable to recover it. 00:36:50.611 [2024-11-17 09:36:55.416660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.611 [2024-11-17 09:36:55.416696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.611 qpair failed and we were unable to recover it. 00:36:50.611 [2024-11-17 09:36:55.416828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.611 [2024-11-17 09:36:55.416863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.611 qpair failed and we were unable to recover it. 00:36:50.611 [2024-11-17 09:36:55.417002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.612 [2024-11-17 09:36:55.417038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.612 qpair failed and we were unable to recover it. 00:36:50.612 [2024-11-17 09:36:55.417170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.612 [2024-11-17 09:36:55.417206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.612 qpair failed and we were unable to recover it. 00:36:50.612 [2024-11-17 09:36:55.417382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.612 [2024-11-17 09:36:55.417419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.612 qpair failed and we were unable to recover it. 00:36:50.612 [2024-11-17 09:36:55.417560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.612 [2024-11-17 09:36:55.417596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.612 qpair failed and we were unable to recover it. 00:36:50.612 [2024-11-17 09:36:55.417761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.612 [2024-11-17 09:36:55.417795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.612 qpair failed and we were unable to recover it. 
00:36:50.612 [2024-11-17 09:36:55.417949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.612 [2024-11-17 09:36:55.417985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.612 qpair failed and we were unable to recover it. 00:36:50.612 [2024-11-17 09:36:55.418137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.612 [2024-11-17 09:36:55.418186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.612 qpair failed and we were unable to recover it. 00:36:50.612 [2024-11-17 09:36:55.418359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.612 [2024-11-17 09:36:55.418407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.612 qpair failed and we were unable to recover it. 00:36:50.612 [2024-11-17 09:36:55.418523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.612 [2024-11-17 09:36:55.418558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.612 qpair failed and we were unable to recover it. 00:36:50.612 [2024-11-17 09:36:55.418675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.612 [2024-11-17 09:36:55.418712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.612 qpair failed and we were unable to recover it. 00:36:50.612 [2024-11-17 09:36:55.418864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.612 [2024-11-17 09:36:55.418900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.612 qpair failed and we were unable to recover it. 00:36:50.612 [2024-11-17 09:36:55.419040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.612 [2024-11-17 09:36:55.419086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.612 qpair failed and we were unable to recover it. 00:36:50.612 [2024-11-17 09:36:55.419227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.612 [2024-11-17 09:36:55.419274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.612 qpair failed and we were unable to recover it. 00:36:50.612 [2024-11-17 09:36:55.419415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.612 [2024-11-17 09:36:55.419451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.612 qpair failed and we were unable to recover it. 00:36:50.612 [2024-11-17 09:36:55.419603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.612 [2024-11-17 09:36:55.419653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.612 qpair failed and we were unable to recover it. 
00:36:50.612 [2024-11-17 09:36:55.419763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.612 [2024-11-17 09:36:55.419800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.612 qpair failed and we were unable to recover it. 00:36:50.612 [2024-11-17 09:36:55.419946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.612 [2024-11-17 09:36:55.419981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.612 qpair failed and we were unable to recover it. 00:36:50.612 [2024-11-17 09:36:55.420090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.612 [2024-11-17 09:36:55.420124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.612 qpair failed and we were unable to recover it. 00:36:50.612 [2024-11-17 09:36:55.420274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.612 [2024-11-17 09:36:55.420324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.612 qpair failed and we were unable to recover it. 00:36:50.612 [2024-11-17 09:36:55.420488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.612 [2024-11-17 09:36:55.420538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.612 qpair failed and we were unable to recover it. 00:36:50.612 [2024-11-17 09:36:55.420665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.612 [2024-11-17 09:36:55.420704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.612 qpair failed and we were unable to recover it. 00:36:50.612 [2024-11-17 09:36:55.420813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.612 [2024-11-17 09:36:55.420848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.612 qpair failed and we were unable to recover it. 00:36:50.612 [2024-11-17 09:36:55.420981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.612 [2024-11-17 09:36:55.421016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.612 qpair failed and we were unable to recover it. 00:36:50.612 [2024-11-17 09:36:55.421145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.612 [2024-11-17 09:36:55.421179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.612 qpair failed and we were unable to recover it. 00:36:50.612 [2024-11-17 09:36:55.421345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.612 [2024-11-17 09:36:55.421409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.612 qpair failed and we were unable to recover it. 
00:36:50.612 [2024-11-17 09:36:55.421546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.612 [2024-11-17 09:36:55.421595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.612 qpair failed and we were unable to recover it. 00:36:50.612 [2024-11-17 09:36:55.421757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.612 [2024-11-17 09:36:55.421795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.612 qpair failed and we were unable to recover it. 00:36:50.612 [2024-11-17 09:36:55.421934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.612 [2024-11-17 09:36:55.421971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.612 qpair failed and we were unable to recover it. 00:36:50.612 [2024-11-17 09:36:55.422123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.612 [2024-11-17 09:36:55.422158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.612 qpair failed and we were unable to recover it. 00:36:50.612 [2024-11-17 09:36:55.422275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.612 [2024-11-17 09:36:55.422314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.612 qpair failed and we were unable to recover it. 00:36:50.612 [2024-11-17 09:36:55.422475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.612 [2024-11-17 09:36:55.422512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.612 qpair failed and we were unable to recover it. 00:36:50.612 [2024-11-17 09:36:55.422622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.612 [2024-11-17 09:36:55.422668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.612 qpair failed and we were unable to recover it. 00:36:50.612 [2024-11-17 09:36:55.422774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.612 [2024-11-17 09:36:55.422809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.612 qpair failed and we were unable to recover it. 00:36:50.612 [2024-11-17 09:36:55.422948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.612 [2024-11-17 09:36:55.422983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.612 qpair failed and we were unable to recover it. 00:36:50.612 [2024-11-17 09:36:55.423115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.612 [2024-11-17 09:36:55.423150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.612 qpair failed and we were unable to recover it. 
00:36:50.612 [2024-11-17 09:36:55.423326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.612 [2024-11-17 09:36:55.423392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.612 qpair failed and we were unable to recover it. 00:36:50.612 [2024-11-17 09:36:55.423515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.612 [2024-11-17 09:36:55.423552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.612 qpair failed and we were unable to recover it. 00:36:50.612 [2024-11-17 09:36:55.423713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.613 [2024-11-17 09:36:55.423763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.613 qpair failed and we were unable to recover it. 00:36:50.613 [2024-11-17 09:36:55.423905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.613 [2024-11-17 09:36:55.423941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.613 qpair failed and we were unable to recover it. 00:36:50.613 [2024-11-17 09:36:55.424084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.613 [2024-11-17 09:36:55.424119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.613 qpair failed and we were unable to recover it. 00:36:50.613 [2024-11-17 09:36:55.424252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.613 [2024-11-17 09:36:55.424286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.613 qpair failed and we were unable to recover it. 00:36:50.613 [2024-11-17 09:36:55.424461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.613 [2024-11-17 09:36:55.424496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.613 qpair failed and we were unable to recover it. 00:36:50.613 [2024-11-17 09:36:55.424637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.613 [2024-11-17 09:36:55.424686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.613 qpair failed and we were unable to recover it. 00:36:50.613 [2024-11-17 09:36:55.424821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.613 [2024-11-17 09:36:55.424857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.613 qpair failed and we were unable to recover it. 00:36:50.613 [2024-11-17 09:36:55.424962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.613 [2024-11-17 09:36:55.424997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.613 qpair failed and we were unable to recover it. 
00:36:50.616 [2024-11-17 09:36:55.450231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.616 [2024-11-17 09:36:55.450267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:36:50.616 qpair failed and we were unable to recover it.
00:36:50.616 [2024-11-17 09:36:55.450378] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:36:50.616 [2024-11-17 09:36:55.450419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.616 [2024-11-17 09:36:55.450454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:36:50.616 qpair failed and we were unable to recover it.
00:36:50.616 [2024-11-17 09:36:55.450561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.616 [2024-11-17 09:36:55.450597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:36:50.616 qpair failed and we were unable to recover it.
00:36:50.616 [2024-11-17 09:36:55.450744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.616 [2024-11-17 09:36:55.450779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:36:50.616 qpair failed and we were unable to recover it.
00:36:50.616 [2024-11-17 09:36:55.450921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.616 [2024-11-17 09:36:55.450955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:36:50.616 qpair failed and we were unable to recover it.
00:36:50.616 [2024-11-17 09:36:55.451086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.616 [2024-11-17 09:36:55.451120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:36:50.616 qpair failed and we were unable to recover it.
00:36:50.616 [2024-11-17 09:36:55.451251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.616 [2024-11-17 09:36:55.451285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:36:50.616 qpair failed and we were unable to recover it.
00:36:50.616 [2024-11-17 09:36:55.451465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.616 [2024-11-17 09:36:55.451515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:36:50.616 qpair failed and we were unable to recover it.
00:36:50.616 [2024-11-17 09:36:55.451633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.616 [2024-11-17 09:36:55.451680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:36:50.616 qpair failed and we were unable to recover it.
00:36:50.618 [2024-11-17 09:36:55.460696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.618 [2024-11-17 09:36:55.460730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.618 qpair failed and we were unable to recover it. 00:36:50.618 [2024-11-17 09:36:55.460869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.618 [2024-11-17 09:36:55.460903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.618 qpair failed and we were unable to recover it. 00:36:50.618 [2024-11-17 09:36:55.461039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.618 [2024-11-17 09:36:55.461077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.618 qpair failed and we were unable to recover it. 00:36:50.618 [2024-11-17 09:36:55.461215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.618 [2024-11-17 09:36:55.461250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.618 qpair failed and we were unable to recover it. 00:36:50.618 [2024-11-17 09:36:55.461430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.618 [2024-11-17 09:36:55.461479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.618 qpair failed and we were unable to recover it. 00:36:50.618 [2024-11-17 09:36:55.461620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.618 [2024-11-17 09:36:55.461661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.618 qpair failed and we were unable to recover it. 00:36:50.618 [2024-11-17 09:36:55.461767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.618 [2024-11-17 09:36:55.461802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.618 qpair failed and we were unable to recover it. 00:36:50.618 [2024-11-17 09:36:55.461912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.618 [2024-11-17 09:36:55.461946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.618 qpair failed and we were unable to recover it. 00:36:50.618 [2024-11-17 09:36:55.462111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.618 [2024-11-17 09:36:55.462147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.618 qpair failed and we were unable to recover it. 00:36:50.618 [2024-11-17 09:36:55.462282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.618 [2024-11-17 09:36:55.462317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.618 qpair failed and we were unable to recover it. 
00:36:50.618 [2024-11-17 09:36:55.462454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.618 [2024-11-17 09:36:55.462504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.618 qpair failed and we were unable to recover it. 00:36:50.618 [2024-11-17 09:36:55.462647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.618 [2024-11-17 09:36:55.462690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.618 qpair failed and we were unable to recover it. 00:36:50.618 [2024-11-17 09:36:55.462820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.618 [2024-11-17 09:36:55.462857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.618 qpair failed and we were unable to recover it. 00:36:50.618 [2024-11-17 09:36:55.462991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.618 [2024-11-17 09:36:55.463026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.618 qpair failed and we were unable to recover it. 00:36:50.618 [2024-11-17 09:36:55.463186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.618 [2024-11-17 09:36:55.463221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.618 qpair failed and we were unable to recover it. 00:36:50.618 [2024-11-17 09:36:55.463321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.618 [2024-11-17 09:36:55.463361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.618 qpair failed and we were unable to recover it. 00:36:50.618 [2024-11-17 09:36:55.463482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.618 [2024-11-17 09:36:55.463517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.618 qpair failed and we were unable to recover it. 00:36:50.618 [2024-11-17 09:36:55.463660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.618 [2024-11-17 09:36:55.463694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.618 qpair failed and we were unable to recover it. 00:36:50.618 [2024-11-17 09:36:55.463862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.618 [2024-11-17 09:36:55.463896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.618 qpair failed and we were unable to recover it. 00:36:50.618 [2024-11-17 09:36:55.464073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.618 [2024-11-17 09:36:55.464122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.618 qpair failed and we were unable to recover it. 
00:36:50.618 [2024-11-17 09:36:55.464264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.618 [2024-11-17 09:36:55.464301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.618 qpair failed and we were unable to recover it. 00:36:50.618 [2024-11-17 09:36:55.464449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.618 [2024-11-17 09:36:55.464505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.618 qpair failed and we were unable to recover it. 00:36:50.618 [2024-11-17 09:36:55.464625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.618 [2024-11-17 09:36:55.464668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.618 qpair failed and we were unable to recover it. 00:36:50.618 [2024-11-17 09:36:55.464813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.618 [2024-11-17 09:36:55.464848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.618 qpair failed and we were unable to recover it. 00:36:50.618 [2024-11-17 09:36:55.464963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.618 [2024-11-17 09:36:55.465000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.618 qpair failed and we were unable to recover it. 00:36:50.618 [2024-11-17 09:36:55.465140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.618 [2024-11-17 09:36:55.465176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.618 qpair failed and we were unable to recover it. 00:36:50.618 [2024-11-17 09:36:55.465311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.618 [2024-11-17 09:36:55.465364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.618 qpair failed and we were unable to recover it. 00:36:50.618 [2024-11-17 09:36:55.465497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.618 [2024-11-17 09:36:55.465534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.618 qpair failed and we were unable to recover it. 00:36:50.618 [2024-11-17 09:36:55.465686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.618 [2024-11-17 09:36:55.465723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.618 qpair failed and we were unable to recover it. 00:36:50.618 [2024-11-17 09:36:55.465863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.618 [2024-11-17 09:36:55.465898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.618 qpair failed and we were unable to recover it. 
00:36:50.618 [2024-11-17 09:36:55.466035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.618 [2024-11-17 09:36:55.466070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.618 qpair failed and we were unable to recover it. 00:36:50.618 [2024-11-17 09:36:55.466209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.618 [2024-11-17 09:36:55.466243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.618 qpair failed and we were unable to recover it. 00:36:50.618 [2024-11-17 09:36:55.466424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.619 [2024-11-17 09:36:55.466459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.619 qpair failed and we were unable to recover it. 00:36:50.619 [2024-11-17 09:36:55.466578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.619 [2024-11-17 09:36:55.466617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.619 qpair failed and we were unable to recover it. 00:36:50.619 [2024-11-17 09:36:55.466746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.619 [2024-11-17 09:36:55.466781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.619 qpair failed and we were unable to recover it. 00:36:50.619 [2024-11-17 09:36:55.466904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.619 [2024-11-17 09:36:55.466939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.619 qpair failed and we were unable to recover it. 00:36:50.619 [2024-11-17 09:36:55.467077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.619 [2024-11-17 09:36:55.467112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.619 qpair failed and we were unable to recover it. 00:36:50.619 [2024-11-17 09:36:55.467250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.619 [2024-11-17 09:36:55.467284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.619 qpair failed and we were unable to recover it. 00:36:50.619 [2024-11-17 09:36:55.467396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.619 [2024-11-17 09:36:55.467431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.619 qpair failed and we were unable to recover it. 00:36:50.619 [2024-11-17 09:36:55.467545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.619 [2024-11-17 09:36:55.467579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.619 qpair failed and we were unable to recover it. 
00:36:50.619 [2024-11-17 09:36:55.467766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.619 [2024-11-17 09:36:55.467824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.619 qpair failed and we were unable to recover it. 00:36:50.619 [2024-11-17 09:36:55.467972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.619 [2024-11-17 09:36:55.468009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.619 qpair failed and we were unable to recover it. 00:36:50.619 [2024-11-17 09:36:55.468159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.619 [2024-11-17 09:36:55.468194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.619 qpair failed and we were unable to recover it. 00:36:50.619 [2024-11-17 09:36:55.468290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.619 [2024-11-17 09:36:55.468325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.619 qpair failed and we were unable to recover it. 00:36:50.619 [2024-11-17 09:36:55.468474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.619 [2024-11-17 09:36:55.468524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.619 qpair failed and we were unable to recover it. 00:36:50.619 [2024-11-17 09:36:55.468670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.619 [2024-11-17 09:36:55.468721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.619 qpair failed and we were unable to recover it. 00:36:50.619 [2024-11-17 09:36:55.468869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.619 [2024-11-17 09:36:55.468906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.619 qpair failed and we were unable to recover it. 00:36:50.619 [2024-11-17 09:36:55.469018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.619 [2024-11-17 09:36:55.469053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.619 qpair failed and we were unable to recover it. 00:36:50.619 [2024-11-17 09:36:55.469196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.619 [2024-11-17 09:36:55.469234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.619 qpair failed and we were unable to recover it. 00:36:50.619 [2024-11-17 09:36:55.469389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.619 [2024-11-17 09:36:55.469426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.619 qpair failed and we were unable to recover it. 
00:36:50.619 [2024-11-17 09:36:55.469539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.619 [2024-11-17 09:36:55.469573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.619 qpair failed and we were unable to recover it. 00:36:50.619 [2024-11-17 09:36:55.469708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.619 [2024-11-17 09:36:55.469742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.619 qpair failed and we were unable to recover it. 00:36:50.619 [2024-11-17 09:36:55.469858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.619 [2024-11-17 09:36:55.469893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.619 qpair failed and we were unable to recover it. 00:36:50.619 [2024-11-17 09:36:55.470043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.619 [2024-11-17 09:36:55.470079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.619 qpair failed and we were unable to recover it. 00:36:50.619 [2024-11-17 09:36:55.470224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.619 [2024-11-17 09:36:55.470260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.619 qpair failed and we were unable to recover it. 00:36:50.619 [2024-11-17 09:36:55.470399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.619 [2024-11-17 09:36:55.470435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.619 qpair failed and we were unable to recover it. 00:36:50.619 [2024-11-17 09:36:55.470575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.619 [2024-11-17 09:36:55.470610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.619 qpair failed and we were unable to recover it. 00:36:50.619 [2024-11-17 09:36:55.470735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.619 [2024-11-17 09:36:55.470771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.619 qpair failed and we were unable to recover it. 00:36:50.619 [2024-11-17 09:36:55.470884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.619 [2024-11-17 09:36:55.470921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.619 qpair failed and we were unable to recover it. 00:36:50.619 [2024-11-17 09:36:55.471043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.619 [2024-11-17 09:36:55.471079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.619 qpair failed and we were unable to recover it. 
00:36:50.619 [2024-11-17 09:36:55.471243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.619 [2024-11-17 09:36:55.471277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.619 qpair failed and we were unable to recover it. 00:36:50.619 [2024-11-17 09:36:55.471419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.619 [2024-11-17 09:36:55.471475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.619 qpair failed and we were unable to recover it. 00:36:50.619 [2024-11-17 09:36:55.471622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.619 [2024-11-17 09:36:55.471669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.619 qpair failed and we were unable to recover it. 00:36:50.619 [2024-11-17 09:36:55.471811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.619 [2024-11-17 09:36:55.471846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.619 qpair failed and we were unable to recover it. 00:36:50.619 [2024-11-17 09:36:55.472006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.619 [2024-11-17 09:36:55.472042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.619 qpair failed and we were unable to recover it. 00:36:50.619 [2024-11-17 09:36:55.472175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.619 [2024-11-17 09:36:55.472209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.619 qpair failed and we were unable to recover it. 00:36:50.619 [2024-11-17 09:36:55.472335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.619 [2024-11-17 09:36:55.472395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.619 qpair failed and we were unable to recover it. 00:36:50.619 [2024-11-17 09:36:55.472562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.619 [2024-11-17 09:36:55.472598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.619 qpair failed and we were unable to recover it. 00:36:50.619 [2024-11-17 09:36:55.472765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.619 [2024-11-17 09:36:55.472800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.619 qpair failed and we were unable to recover it. 00:36:50.619 [2024-11-17 09:36:55.472913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.619 [2024-11-17 09:36:55.472948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.619 qpair failed and we were unable to recover it. 
00:36:50.619 [2024-11-17 09:36:55.473062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.619 [2024-11-17 09:36:55.473097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.619 qpair failed and we were unable to recover it. 00:36:50.619 [2024-11-17 09:36:55.473231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.619 [2024-11-17 09:36:55.473266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.619 qpair failed and we were unable to recover it. 00:36:50.619 [2024-11-17 09:36:55.473382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.619 [2024-11-17 09:36:55.473429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.619 qpair failed and we were unable to recover it. 00:36:50.619 [2024-11-17 09:36:55.473551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.620 [2024-11-17 09:36:55.473590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.620 qpair failed and we were unable to recover it. 00:36:50.620 [2024-11-17 09:36:55.473797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.620 [2024-11-17 09:36:55.473847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.620 qpair failed and we were unable to recover it. 00:36:50.620 [2024-11-17 09:36:55.474001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.620 [2024-11-17 09:36:55.474039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.620 qpair failed and we were unable to recover it. 00:36:50.620 [2024-11-17 09:36:55.474203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.620 [2024-11-17 09:36:55.474239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.620 qpair failed and we were unable to recover it. 00:36:50.620 [2024-11-17 09:36:55.474385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.620 [2024-11-17 09:36:55.474429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.620 qpair failed and we were unable to recover it. 00:36:50.620 [2024-11-17 09:36:55.474542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.620 [2024-11-17 09:36:55.474577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.620 qpair failed and we were unable to recover it. 00:36:50.620 [2024-11-17 09:36:55.474744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.620 [2024-11-17 09:36:55.474778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.620 qpair failed and we were unable to recover it. 
00:36:50.620 [2024-11-17 09:36:55.474910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.620 [2024-11-17 09:36:55.474944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.620 qpair failed and we were unable to recover it. 00:36:50.620 [2024-11-17 09:36:55.475090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.620 [2024-11-17 09:36:55.475125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.620 qpair failed and we were unable to recover it. 00:36:50.620 [2024-11-17 09:36:55.475306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.620 [2024-11-17 09:36:55.475343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.620 qpair failed and we were unable to recover it. 00:36:50.620 [2024-11-17 09:36:55.475492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.620 [2024-11-17 09:36:55.475542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.620 qpair failed and we were unable to recover it. 00:36:50.620 [2024-11-17 09:36:55.475720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.620 [2024-11-17 09:36:55.475769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.620 qpair failed and we were unable to recover it. 00:36:50.620 [2024-11-17 09:36:55.475937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.620 [2024-11-17 09:36:55.475974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.620 qpair failed and we were unable to recover it. 00:36:50.620 [2024-11-17 09:36:55.476142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.620 [2024-11-17 09:36:55.476178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.620 qpair failed and we were unable to recover it. 00:36:50.620 [2024-11-17 09:36:55.476294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.620 [2024-11-17 09:36:55.476328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.620 qpair failed and we were unable to recover it. 00:36:50.620 [2024-11-17 09:36:55.476491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.620 [2024-11-17 09:36:55.476540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.620 qpair failed and we were unable to recover it. 00:36:50.620 [2024-11-17 09:36:55.476685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.620 [2024-11-17 09:36:55.476743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.620 qpair failed and we were unable to recover it. 
00:36:50.620 [2024-11-17 09:36:55.476912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.620 [2024-11-17 09:36:55.476946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.620 qpair failed and we were unable to recover it. 00:36:50.620 [2024-11-17 09:36:55.477061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.620 [2024-11-17 09:36:55.477096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.620 qpair failed and we were unable to recover it. 00:36:50.620 [2024-11-17 09:36:55.477203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.620 [2024-11-17 09:36:55.477237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.620 qpair failed and we were unable to recover it. 00:36:50.620 [2024-11-17 09:36:55.477437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.620 [2024-11-17 09:36:55.477487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.620 qpair failed and we were unable to recover it. 00:36:50.620 [2024-11-17 09:36:55.477603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.620 [2024-11-17 09:36:55.477640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.620 qpair failed and we were unable to recover it. 00:36:50.620 [2024-11-17 09:36:55.477788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.620 [2024-11-17 09:36:55.477823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.620 qpair failed and we were unable to recover it. 00:36:50.620 [2024-11-17 09:36:55.477959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.620 [2024-11-17 09:36:55.477993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.620 qpair failed and we were unable to recover it. 00:36:50.620 [2024-11-17 09:36:55.478094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.620 [2024-11-17 09:36:55.478131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.620 qpair failed and we were unable to recover it. 00:36:50.620 [2024-11-17 09:36:55.478293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.620 [2024-11-17 09:36:55.478328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.620 qpair failed and we were unable to recover it. 00:36:50.620 [2024-11-17 09:36:55.478485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.620 [2024-11-17 09:36:55.478520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.620 qpair failed and we were unable to recover it. 
00:36:50.620 [2024-11-17 09:36:55.478625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.620 [2024-11-17 09:36:55.478667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.620 qpair failed and we were unable to recover it. 00:36:50.620 [2024-11-17 09:36:55.478822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.620 [2024-11-17 09:36:55.478877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.620 qpair failed and we were unable to recover it. 00:36:50.620 [2024-11-17 09:36:55.479015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.620 [2024-11-17 09:36:55.479051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.620 qpair failed and we were unable to recover it. 00:36:50.620 [2024-11-17 09:36:55.479217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.620 [2024-11-17 09:36:55.479254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.620 qpair failed and we were unable to recover it. 00:36:50.620 [2024-11-17 09:36:55.479397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.620 [2024-11-17 09:36:55.479444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.620 qpair failed and we were unable to recover it. 00:36:50.620 [2024-11-17 09:36:55.479577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.620 [2024-11-17 09:36:55.479611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.620 qpair failed and we were unable to recover it. 00:36:50.620 [2024-11-17 09:36:55.479728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.620 [2024-11-17 09:36:55.479762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.620 qpair failed and we were unable to recover it. 00:36:50.620 [2024-11-17 09:36:55.479894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.620 [2024-11-17 09:36:55.479929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.620 qpair failed and we were unable to recover it. 00:36:50.620 [2024-11-17 09:36:55.480032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.620 [2024-11-17 09:36:55.480067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.620 qpair failed and we were unable to recover it. 00:36:50.620 [2024-11-17 09:36:55.480212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.620 [2024-11-17 09:36:55.480248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.620 qpair failed and we were unable to recover it. 
00:36:50.620 [2024-11-17 09:36:55.480361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.620 [2024-11-17 09:36:55.480414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.620 qpair failed and we were unable to recover it. 00:36:50.620 [2024-11-17 09:36:55.480581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.620 [2024-11-17 09:36:55.480615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.620 qpair failed and we were unable to recover it. 00:36:50.620 [2024-11-17 09:36:55.480738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.620 [2024-11-17 09:36:55.480772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.620 qpair failed and we were unable to recover it. 00:36:50.620 [2024-11-17 09:36:55.480904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.620 [2024-11-17 09:36:55.480938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.620 qpair failed and we were unable to recover it. 00:36:50.620 [2024-11-17 09:36:55.481039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.621 [2024-11-17 09:36:55.481073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.621 qpair failed and we were unable to recover it. 00:36:50.621 [2024-11-17 09:36:55.481186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.621 [2024-11-17 09:36:55.481223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.621 qpair failed and we were unable to recover it. 00:36:50.621 [2024-11-17 09:36:55.481390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.621 [2024-11-17 09:36:55.481440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.621 qpair failed and we were unable to recover it. 00:36:50.621 [2024-11-17 09:36:55.481574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.621 [2024-11-17 09:36:55.481624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.621 qpair failed and we were unable to recover it. 00:36:50.621 [2024-11-17 09:36:55.481737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.621 [2024-11-17 09:36:55.481773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.621 qpair failed and we were unable to recover it. 00:36:50.621 [2024-11-17 09:36:55.481902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.621 [2024-11-17 09:36:55.481936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.621 qpair failed and we were unable to recover it. 
00:36:50.621 [2024-11-17 09:36:55.482101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.621 [2024-11-17 09:36:55.482135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.621 qpair failed and we were unable to recover it. 00:36:50.621 [2024-11-17 09:36:55.482264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.621 [2024-11-17 09:36:55.482299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.621 qpair failed and we were unable to recover it. 00:36:50.621 [2024-11-17 09:36:55.482458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.621 [2024-11-17 09:36:55.482509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.621 qpair failed and we were unable to recover it. 00:36:50.621 [2024-11-17 09:36:55.482665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.621 [2024-11-17 09:36:55.482714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.621 qpair failed and we were unable to recover it. 00:36:50.621 [2024-11-17 09:36:55.482824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.621 [2024-11-17 09:36:55.482860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.621 qpair failed and we were unable to recover it. 00:36:50.621 [2024-11-17 09:36:55.482999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.621 [2024-11-17 09:36:55.483034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.621 qpair failed and we were unable to recover it. 00:36:50.621 [2024-11-17 09:36:55.483173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.621 [2024-11-17 09:36:55.483207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.621 qpair failed and we were unable to recover it. 00:36:50.621 [2024-11-17 09:36:55.483366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.621 [2024-11-17 09:36:55.483431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.621 qpair failed and we were unable to recover it. 00:36:50.621 [2024-11-17 09:36:55.483546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.621 [2024-11-17 09:36:55.483582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.621 qpair failed and we were unable to recover it. 00:36:50.621 [2024-11-17 09:36:55.483711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.621 [2024-11-17 09:36:55.483745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.621 qpair failed and we were unable to recover it. 
00:36:50.621 [2024-11-17 09:36:55.483909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.621 [2024-11-17 09:36:55.483944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.621 qpair failed and we were unable to recover it. 00:36:50.621 [2024-11-17 09:36:55.484085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.621 [2024-11-17 09:36:55.484119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.621 qpair failed and we were unable to recover it. 00:36:50.621 [2024-11-17 09:36:55.484298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.621 [2024-11-17 09:36:55.484347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.621 qpair failed and we were unable to recover it. 00:36:50.621 [2024-11-17 09:36:55.484501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.621 [2024-11-17 09:36:55.484538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.621 qpair failed and we were unable to recover it. 00:36:50.621 [2024-11-17 09:36:55.484652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.621 [2024-11-17 09:36:55.484686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.621 qpair failed and we were unable to recover it. 00:36:50.621 [2024-11-17 09:36:55.484787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.621 [2024-11-17 09:36:55.484821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.621 qpair failed and we were unable to recover it. 00:36:50.621 [2024-11-17 09:36:55.484950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.621 [2024-11-17 09:36:55.484984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.621 qpair failed and we were unable to recover it. 00:36:50.621 [2024-11-17 09:36:55.485122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.621 [2024-11-17 09:36:55.485156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.621 qpair failed and we were unable to recover it. 00:36:50.621 [2024-11-17 09:36:55.485274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.621 [2024-11-17 09:36:55.485310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.621 qpair failed and we were unable to recover it. 00:36:50.621 [2024-11-17 09:36:55.485431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.621 [2024-11-17 09:36:55.485471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.621 qpair failed and we were unable to recover it. 
00:36:50.621 [2024-11-17 09:36:55.485616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.621 [2024-11-17 09:36:55.485662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.621 qpair failed and we were unable to recover it. 00:36:50.621 [2024-11-17 09:36:55.485781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.621 [2024-11-17 09:36:55.485822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.621 qpair failed and we were unable to recover it. 00:36:50.621 [2024-11-17 09:36:55.485961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.621 [2024-11-17 09:36:55.485997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.621 qpair failed and we were unable to recover it. 00:36:50.621 [2024-11-17 09:36:55.486127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.621 [2024-11-17 09:36:55.486162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.621 qpair failed and we were unable to recover it. 00:36:50.621 [2024-11-17 09:36:55.486288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.621 [2024-11-17 09:36:55.486324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.621 qpair failed and we were unable to recover it. 00:36:50.621 [2024-11-17 09:36:55.486473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.621 [2024-11-17 09:36:55.486522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.621 qpair failed and we were unable to recover it. 00:36:50.621 [2024-11-17 09:36:55.486669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.621 [2024-11-17 09:36:55.486706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.621 qpair failed and we were unable to recover it. 00:36:50.621 [2024-11-17 09:36:55.486846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.621 [2024-11-17 09:36:55.486881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.621 qpair failed and we were unable to recover it. 00:36:50.621 [2024-11-17 09:36:55.487046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.621 [2024-11-17 09:36:55.487081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.621 qpair failed and we were unable to recover it. 00:36:50.621 [2024-11-17 09:36:55.487203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.621 [2024-11-17 09:36:55.487252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.621 qpair failed and we were unable to recover it. 
00:36:50.621 [2024-11-17 09:36:55.487438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.622 [2024-11-17 09:36:55.487474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.622 qpair failed and we were unable to recover it. 00:36:50.622 [2024-11-17 09:36:55.487615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.622 [2024-11-17 09:36:55.487650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.622 qpair failed and we were unable to recover it. 00:36:50.622 [2024-11-17 09:36:55.487773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.622 [2024-11-17 09:36:55.487807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.622 qpair failed and we were unable to recover it. 00:36:50.622 [2024-11-17 09:36:55.487942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.622 [2024-11-17 09:36:55.487977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.622 qpair failed and we were unable to recover it. 00:36:50.622 [2024-11-17 09:36:55.488126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.622 [2024-11-17 09:36:55.488160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.622 qpair failed and we were unable to recover it. 00:36:50.622 [2024-11-17 09:36:55.488283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.622 [2024-11-17 09:36:55.488317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.622 qpair failed and we were unable to recover it. 00:36:50.622 [2024-11-17 09:36:55.488453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.622 [2024-11-17 09:36:55.488493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.622 qpair failed and we were unable to recover it. 00:36:50.622 [2024-11-17 09:36:55.488611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.622 [2024-11-17 09:36:55.488648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.622 qpair failed and we were unable to recover it. 00:36:50.622 [2024-11-17 09:36:55.488798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.622 [2024-11-17 09:36:55.488833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.622 qpair failed and we were unable to recover it. 00:36:50.622 [2024-11-17 09:36:55.488945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.622 [2024-11-17 09:36:55.488981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.622 qpair failed and we were unable to recover it. 
00:36:50.622 [2024-11-17 09:36:55.489101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.622 [2024-11-17 09:36:55.489135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.622 qpair failed and we were unable to recover it. 00:36:50.622 [2024-11-17 09:36:55.489281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.622 [2024-11-17 09:36:55.489330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.622 qpair failed and we were unable to recover it. 00:36:50.622 [2024-11-17 09:36:55.489459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.622 [2024-11-17 09:36:55.489495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.622 qpair failed and we were unable to recover it. 00:36:50.622 [2024-11-17 09:36:55.489633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.622 [2024-11-17 09:36:55.489678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.622 qpair failed and we were unable to recover it. 00:36:50.622 [2024-11-17 09:36:55.489814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.622 [2024-11-17 09:36:55.489849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.622 qpair failed and we were unable to recover it. 00:36:50.622 [2024-11-17 09:36:55.489949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.622 [2024-11-17 09:36:55.489983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.622 qpair failed and we were unable to recover it. 00:36:50.622 [2024-11-17 09:36:55.490090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.622 [2024-11-17 09:36:55.490124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.622 qpair failed and we were unable to recover it. 00:36:50.622 [2024-11-17 09:36:55.490263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.622 [2024-11-17 09:36:55.490297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.622 qpair failed and we were unable to recover it. 00:36:50.622 [2024-11-17 09:36:55.490421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.622 [2024-11-17 09:36:55.490461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.622 qpair failed and we were unable to recover it. 00:36:50.622 [2024-11-17 09:36:55.490579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.622 [2024-11-17 09:36:55.490615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.622 qpair failed and we were unable to recover it. 
00:36:50.622 [2024-11-17 09:36:55.490789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.622 [2024-11-17 09:36:55.490824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.622 qpair failed and we were unable to recover it. 00:36:50.622 [2024-11-17 09:36:55.490948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.622 [2024-11-17 09:36:55.490983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.622 qpair failed and we were unable to recover it. 00:36:50.622 [2024-11-17 09:36:55.491118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.622 [2024-11-17 09:36:55.491152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.622 qpair failed and we were unable to recover it. 00:36:50.622 [2024-11-17 09:36:55.491288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.622 [2024-11-17 09:36:55.491322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.622 qpair failed and we were unable to recover it. 00:36:50.622 [2024-11-17 09:36:55.491443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.622 [2024-11-17 09:36:55.491480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.622 qpair failed and we were unable to recover it. 00:36:50.622 [2024-11-17 09:36:55.491646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.622 [2024-11-17 09:36:55.491699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.622 qpair failed and we were unable to recover it. 00:36:50.622 [2024-11-17 09:36:55.491884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.622 [2024-11-17 09:36:55.491932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.622 qpair failed and we were unable to recover it. 00:36:50.622 [2024-11-17 09:36:55.492073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.622 [2024-11-17 09:36:55.492108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.622 qpair failed and we were unable to recover it. 00:36:50.622 [2024-11-17 09:36:55.492215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.622 [2024-11-17 09:36:55.492249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.622 qpair failed and we were unable to recover it. 00:36:50.622 [2024-11-17 09:36:55.492394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.622 [2024-11-17 09:36:55.492429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.622 qpair failed and we were unable to recover it. 
00:36:50.622 [2024-11-17 09:36:55.492558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.622 [2024-11-17 09:36:55.492592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.622 qpair failed and we were unable to recover it. 00:36:50.622 [2024-11-17 09:36:55.492700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.622 [2024-11-17 09:36:55.492743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.622 qpair failed and we were unable to recover it. 00:36:50.622 [2024-11-17 09:36:55.492904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.622 [2024-11-17 09:36:55.492938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.622 qpair failed and we were unable to recover it. 00:36:50.622 [2024-11-17 09:36:55.493103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.622 [2024-11-17 09:36:55.493140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.622 qpair failed and we were unable to recover it. 00:36:50.622 [2024-11-17 09:36:55.493266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.622 [2024-11-17 09:36:55.493303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.622 qpair failed and we were unable to recover it. 00:36:50.622 [2024-11-17 09:36:55.493492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.622 [2024-11-17 09:36:55.493542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.622 qpair failed and we were unable to recover it. 00:36:50.622 [2024-11-17 09:36:55.493727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.622 [2024-11-17 09:36:55.493766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.622 qpair failed and we were unable to recover it. 00:36:50.622 [2024-11-17 09:36:55.493933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.622 [2024-11-17 09:36:55.493970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.622 qpair failed and we were unable to recover it. 00:36:50.622 [2024-11-17 09:36:55.494083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.622 [2024-11-17 09:36:55.494119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.622 qpair failed and we were unable to recover it. 00:36:50.622 [2024-11-17 09:36:55.494249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.622 [2024-11-17 09:36:55.494286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.622 qpair failed and we were unable to recover it. 
00:36:50.622 [2024-11-17 09:36:55.494471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.622 [2024-11-17 09:36:55.494521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.622 qpair failed and we were unable to recover it. 00:36:50.623 [2024-11-17 09:36:55.494639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.623 [2024-11-17 09:36:55.494681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.623 qpair failed and we were unable to recover it. 00:36:50.623 [2024-11-17 09:36:55.494865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.623 [2024-11-17 09:36:55.494902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.623 qpair failed and we were unable to recover it. 00:36:50.623 [2024-11-17 09:36:55.495043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.623 [2024-11-17 09:36:55.495080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.623 qpair failed and we were unable to recover it. 00:36:50.623 [2024-11-17 09:36:55.495248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.623 [2024-11-17 09:36:55.495284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.623 qpair failed and we were unable to recover it. 00:36:50.623 [2024-11-17 09:36:55.495409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.623 [2024-11-17 09:36:55.495447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.623 qpair failed and we were unable to recover it. 00:36:50.623 [2024-11-17 09:36:55.495551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.623 [2024-11-17 09:36:55.495586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.623 qpair failed and we were unable to recover it. 00:36:50.623 [2024-11-17 09:36:55.495736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.623 [2024-11-17 09:36:55.495771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.623 qpair failed and we were unable to recover it. 00:36:50.623 [2024-11-17 09:36:55.495899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.623 [2024-11-17 09:36:55.495934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.623 qpair failed and we were unable to recover it. 00:36:50.623 [2024-11-17 09:36:55.496101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.623 [2024-11-17 09:36:55.496138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.623 qpair failed and we were unable to recover it. 
00:36:50.623 [2024-11-17 09:36:55.496271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.623 [2024-11-17 09:36:55.496307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.623 qpair failed and we were unable to recover it. 00:36:50.623 [2024-11-17 09:36:55.496461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.623 [2024-11-17 09:36:55.496498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.623 qpair failed and we were unable to recover it. 00:36:50.623 [2024-11-17 09:36:55.496647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.623 [2024-11-17 09:36:55.496687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.623 qpair failed and we were unable to recover it. 00:36:50.623 [2024-11-17 09:36:55.496826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.623 [2024-11-17 09:36:55.496862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.623 qpair failed and we were unable to recover it. 00:36:50.623 [2024-11-17 09:36:55.496962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.623 [2024-11-17 09:36:55.496998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.623 qpair failed and we were unable to recover it. 00:36:50.623 [2024-11-17 09:36:55.497165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.623 [2024-11-17 09:36:55.497201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.623 qpair failed and we were unable to recover it. 00:36:50.623 [2024-11-17 09:36:55.497330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.623 [2024-11-17 09:36:55.497390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.623 qpair failed and we were unable to recover it. 00:36:50.623 [2024-11-17 09:36:55.497537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.623 [2024-11-17 09:36:55.497585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.623 qpair failed and we were unable to recover it. 00:36:50.623 [2024-11-17 09:36:55.497748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.623 [2024-11-17 09:36:55.497786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.623 qpair failed and we were unable to recover it. 00:36:50.623 [2024-11-17 09:36:55.497891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.623 [2024-11-17 09:36:55.497927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.623 qpair failed and we were unable to recover it. 
00:36:50.623 [2024-11-17 09:36:55.498066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.623 [2024-11-17 09:36:55.498101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.623 qpair failed and we were unable to recover it. 00:36:50.623 [2024-11-17 09:36:55.498206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.623 [2024-11-17 09:36:55.498243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.623 qpair failed and we were unable to recover it. 00:36:50.623 [2024-11-17 09:36:55.498427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.623 [2024-11-17 09:36:55.498463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.623 qpair failed and we were unable to recover it. 00:36:50.623 [2024-11-17 09:36:55.498588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.623 [2024-11-17 09:36:55.498626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.623 qpair failed and we were unable to recover it. 00:36:50.623 [2024-11-17 09:36:55.498762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.623 [2024-11-17 09:36:55.498798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.623 qpair failed and we were unable to recover it. 00:36:50.623 [2024-11-17 09:36:55.498935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.623 [2024-11-17 09:36:55.498970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.623 qpair failed and we were unable to recover it. 00:36:50.623 [2024-11-17 09:36:55.499134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.623 [2024-11-17 09:36:55.499169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.623 qpair failed and we were unable to recover it. 00:36:50.623 [2024-11-17 09:36:55.499313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.623 [2024-11-17 09:36:55.499349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.623 qpair failed and we were unable to recover it. 00:36:50.623 [2024-11-17 09:36:55.499512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.623 [2024-11-17 09:36:55.499561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.623 qpair failed and we were unable to recover it. 00:36:50.623 [2024-11-17 09:36:55.499699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.623 [2024-11-17 09:36:55.499744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.623 qpair failed and we were unable to recover it. 
00:36:50.623 [2024-11-17 09:36:55.499886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.623 [2024-11-17 09:36:55.499921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.623 qpair failed and we were unable to recover it. 00:36:50.623 [2024-11-17 09:36:55.500083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.623 [2024-11-17 09:36:55.500118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.623 qpair failed and we were unable to recover it. 00:36:50.623 [2024-11-17 09:36:55.500289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.623 [2024-11-17 09:36:55.500325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.623 qpair failed and we were unable to recover it. 00:36:50.623 [2024-11-17 09:36:55.500484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.623 [2024-11-17 09:36:55.500518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.623 qpair failed and we were unable to recover it. 00:36:50.623 [2024-11-17 09:36:55.500670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.623 [2024-11-17 09:36:55.500705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.623 qpair failed and we were unable to recover it. 00:36:50.623 [2024-11-17 09:36:55.500819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.623 [2024-11-17 09:36:55.500860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.623 qpair failed and we were unable to recover it. 00:36:50.623 [2024-11-17 09:36:55.501001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.623 [2024-11-17 09:36:55.501036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.623 qpair failed and we were unable to recover it. 00:36:50.623 [2024-11-17 09:36:55.501143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.623 [2024-11-17 09:36:55.501178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.623 qpair failed and we were unable to recover it. 00:36:50.623 [2024-11-17 09:36:55.501336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.623 [2024-11-17 09:36:55.501397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.623 qpair failed and we were unable to recover it. 00:36:50.623 [2024-11-17 09:36:55.501544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.623 [2024-11-17 09:36:55.501593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.623 qpair failed and we were unable to recover it. 
00:36:50.623 [2024-11-17 09:36:55.501772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.623 [2024-11-17 09:36:55.501809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.623 qpair failed and we were unable to recover it. 00:36:50.623 [2024-11-17 09:36:55.501970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.623 [2024-11-17 09:36:55.502006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.624 qpair failed and we were unable to recover it. 00:36:50.624 [2024-11-17 09:36:55.502138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.624 [2024-11-17 09:36:55.502181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.624 qpair failed and we were unable to recover it. 00:36:50.624 [2024-11-17 09:36:55.502344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.624 [2024-11-17 09:36:55.502395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.624 qpair failed and we were unable to recover it. 00:36:50.624 [2024-11-17 09:36:55.502515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.624 [2024-11-17 09:36:55.502551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.624 qpair failed and we were unable to recover it. 00:36:50.624 [2024-11-17 09:36:55.502709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.624 [2024-11-17 09:36:55.502760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.624 qpair failed and we were unable to recover it. 00:36:50.624 [2024-11-17 09:36:55.502878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.624 [2024-11-17 09:36:55.502916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.624 qpair failed and we were unable to recover it. 00:36:50.624 [2024-11-17 09:36:55.503084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.624 [2024-11-17 09:36:55.503120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.624 qpair failed and we were unable to recover it. 00:36:50.624 [2024-11-17 09:36:55.503228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.624 [2024-11-17 09:36:55.503264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.624 qpair failed and we were unable to recover it. 00:36:50.624 [2024-11-17 09:36:55.503407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.624 [2024-11-17 09:36:55.503445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.624 qpair failed and we were unable to recover it. 
00:36:50.624 [2024-11-17 09:36:55.503599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.624 [2024-11-17 09:36:55.503647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.624 qpair failed and we were unable to recover it. 00:36:50.624 [2024-11-17 09:36:55.503832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.624 [2024-11-17 09:36:55.503868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.624 qpair failed and we were unable to recover it. 00:36:50.624 [2024-11-17 09:36:55.503981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.624 [2024-11-17 09:36:55.504016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.624 qpair failed and we were unable to recover it. 00:36:50.624 [2024-11-17 09:36:55.504147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.624 [2024-11-17 09:36:55.504182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.624 qpair failed and we were unable to recover it. 00:36:50.624 [2024-11-17 09:36:55.504277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.624 [2024-11-17 09:36:55.504312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.624 qpair failed and we were unable to recover it. 00:36:50.624 [2024-11-17 09:36:55.504472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.624 [2024-11-17 09:36:55.504523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.624 qpair failed and we were unable to recover it. 00:36:50.624 [2024-11-17 09:36:55.504629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.624 [2024-11-17 09:36:55.504677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.624 qpair failed and we were unable to recover it. 00:36:50.624 [2024-11-17 09:36:55.504816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.624 [2024-11-17 09:36:55.504852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.624 qpair failed and we were unable to recover it. 00:36:50.624 [2024-11-17 09:36:55.504995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.624 [2024-11-17 09:36:55.505037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.624 qpair failed and we were unable to recover it. 00:36:50.624 [2024-11-17 09:36:55.505210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.624 [2024-11-17 09:36:55.505245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.624 qpair failed and we were unable to recover it. 
00:36:50.624 [2024-11-17 09:36:55.505347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.624 [2024-11-17 09:36:55.505389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.624 qpair failed and we were unable to recover it. 00:36:50.624 [2024-11-17 09:36:55.505539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.624 [2024-11-17 09:36:55.505575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.624 qpair failed and we were unable to recover it. 00:36:50.624 [2024-11-17 09:36:55.505734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.624 [2024-11-17 09:36:55.505773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.624 qpair failed and we were unable to recover it. 00:36:50.624 [2024-11-17 09:36:55.505884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.624 [2024-11-17 09:36:55.505931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.624 qpair failed and we were unable to recover it. 00:36:50.624 [2024-11-17 09:36:55.506073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.624 [2024-11-17 09:36:55.506108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.624 qpair failed and we were unable to recover it. 00:36:50.624 [2024-11-17 09:36:55.506222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.624 [2024-11-17 09:36:55.506257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.624 qpair failed and we were unable to recover it. 00:36:50.624 [2024-11-17 09:36:55.506395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.624 [2024-11-17 09:36:55.506430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.624 qpair failed and we were unable to recover it. 00:36:50.624 [2024-11-17 09:36:55.506555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.624 [2024-11-17 09:36:55.506604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.624 qpair failed and we were unable to recover it. 00:36:50.624 [2024-11-17 09:36:55.506753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.624 [2024-11-17 09:36:55.506789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.624 qpair failed and we were unable to recover it. 00:36:50.624 [2024-11-17 09:36:55.506937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.624 [2024-11-17 09:36:55.506973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.624 qpair failed and we were unable to recover it. 
00:36:50.624 [2024-11-17 09:36:55.507116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.624 [2024-11-17 09:36:55.507151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.624 qpair failed and we were unable to recover it. 00:36:50.624 [2024-11-17 09:36:55.507283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.624 [2024-11-17 09:36:55.507317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.624 qpair failed and we were unable to recover it. 00:36:50.624 [2024-11-17 09:36:55.507452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.624 [2024-11-17 09:36:55.507486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.624 qpair failed and we were unable to recover it. 00:36:50.624 [2024-11-17 09:36:55.507649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.624 [2024-11-17 09:36:55.507691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.624 qpair failed and we were unable to recover it. 00:36:50.624 [2024-11-17 09:36:55.507861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.624 [2024-11-17 09:36:55.507896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.624 qpair failed and we were unable to recover it. 00:36:50.624 [2024-11-17 09:36:55.508034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.624 [2024-11-17 09:36:55.508069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.624 qpair failed and we were unable to recover it. 00:36:50.624 [2024-11-17 09:36:55.508208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.624 [2024-11-17 09:36:55.508243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.624 qpair failed and we were unable to recover it. 00:36:50.624 [2024-11-17 09:36:55.508381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.624 [2024-11-17 09:36:55.508427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.624 qpair failed and we were unable to recover it. 00:36:50.624 [2024-11-17 09:36:55.508568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.624 [2024-11-17 09:36:55.508617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.624 qpair failed and we were unable to recover it. 00:36:50.624 [2024-11-17 09:36:55.508735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.624 [2024-11-17 09:36:55.508772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.624 qpair failed and we were unable to recover it. 
00:36:50.624 [2024-11-17 09:36:55.508889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.624 [2024-11-17 09:36:55.508925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.624 qpair failed and we were unable to recover it. 00:36:50.624 [2024-11-17 09:36:55.509042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.624 [2024-11-17 09:36:55.509079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.624 qpair failed and we were unable to recover it. 00:36:50.624 [2024-11-17 09:36:55.509236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.625 [2024-11-17 09:36:55.509286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.625 qpair failed and we were unable to recover it. 00:36:50.625 [2024-11-17 09:36:55.509424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.625 [2024-11-17 09:36:55.509473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.625 qpair failed and we were unable to recover it. 00:36:50.625 [2024-11-17 09:36:55.509602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.625 [2024-11-17 09:36:55.509639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.625 qpair failed and we were unable to recover it. 00:36:50.625 [2024-11-17 09:36:55.509794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.625 [2024-11-17 09:36:55.509830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.625 qpair failed and we were unable to recover it. 00:36:50.625 [2024-11-17 09:36:55.509973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.625 [2024-11-17 09:36:55.510007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.625 qpair failed and we were unable to recover it. 00:36:50.625 [2024-11-17 09:36:55.510117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.625 [2024-11-17 09:36:55.510153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.625 qpair failed and we were unable to recover it. 00:36:50.625 [2024-11-17 09:36:55.510264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.625 [2024-11-17 09:36:55.510300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.625 qpair failed and we were unable to recover it. 00:36:50.625 [2024-11-17 09:36:55.510458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.625 [2024-11-17 09:36:55.510495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.625 qpair failed and we were unable to recover it. 
00:36:50.625 [2024-11-17 09:36:55.510655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.625 [2024-11-17 09:36:55.510701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.625 qpair failed and we were unable to recover it. 00:36:50.625 [2024-11-17 09:36:55.510801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.625 [2024-11-17 09:36:55.510837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.625 qpair failed and we were unable to recover it. 00:36:50.625 [2024-11-17 09:36:55.510970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.625 [2024-11-17 09:36:55.511006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.625 qpair failed and we were unable to recover it. 00:36:50.625 [2024-11-17 09:36:55.511141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.625 [2024-11-17 09:36:55.511176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.625 qpair failed and we were unable to recover it. 00:36:50.625 [2024-11-17 09:36:55.511285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.625 [2024-11-17 09:36:55.511320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.625 qpair failed and we were unable to recover it. 00:36:50.625 [2024-11-17 09:36:55.511487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.625 [2024-11-17 09:36:55.511537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.625 qpair failed and we were unable to recover it. 00:36:50.625 [2024-11-17 09:36:55.511697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.625 [2024-11-17 09:36:55.511739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.625 qpair failed and we were unable to recover it. 00:36:50.625 [2024-11-17 09:36:55.511885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.625 [2024-11-17 09:36:55.511922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.625 qpair failed and we were unable to recover it. 00:36:50.625 [2024-11-17 09:36:55.512036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.625 [2024-11-17 09:36:55.512077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.625 qpair failed and we were unable to recover it. 00:36:50.625 [2024-11-17 09:36:55.512201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.625 [2024-11-17 09:36:55.512237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.625 qpair failed and we were unable to recover it. 
00:36:50.625 [2024-11-17 09:36:55.512380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.625 [2024-11-17 09:36:55.512426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.625 qpair failed and we were unable to recover it. 00:36:50.625 [2024-11-17 09:36:55.512530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.625 [2024-11-17 09:36:55.512564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.625 qpair failed and we were unable to recover it. 00:36:50.625 [2024-11-17 09:36:55.512692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.625 [2024-11-17 09:36:55.512727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.625 qpair failed and we were unable to recover it. 00:36:50.625 [2024-11-17 09:36:55.512887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.625 [2024-11-17 09:36:55.512923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.625 qpair failed and we were unable to recover it. 00:36:50.625 [2024-11-17 09:36:55.513049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.625 [2024-11-17 09:36:55.513084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.625 qpair failed and we were unable to recover it. 00:36:50.625 [2024-11-17 09:36:55.513201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.625 [2024-11-17 09:36:55.513237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.625 qpair failed and we were unable to recover it. 00:36:50.625 [2024-11-17 09:36:55.513380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.625 [2024-11-17 09:36:55.513427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.625 qpair failed and we were unable to recover it. 00:36:50.625 [2024-11-17 09:36:55.513564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.625 [2024-11-17 09:36:55.513598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.625 qpair failed and we were unable to recover it. 00:36:50.625 [2024-11-17 09:36:55.513706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.625 [2024-11-17 09:36:55.513741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.625 qpair failed and we were unable to recover it. 00:36:50.625 [2024-11-17 09:36:55.513872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.625 [2024-11-17 09:36:55.513907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.625 qpair failed and we were unable to recover it. 
00:36:50.625 [2024-11-17 09:36:55.514047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.625 [2024-11-17 09:36:55.514082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.625 qpair failed and we were unable to recover it. 00:36:50.625 [2024-11-17 09:36:55.514196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.625 [2024-11-17 09:36:55.514232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.625 qpair failed and we were unable to recover it. 00:36:50.625 [2024-11-17 09:36:55.514380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.625 [2024-11-17 09:36:55.514425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.625 qpair failed and we were unable to recover it. 00:36:50.625 [2024-11-17 09:36:55.514570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.625 [2024-11-17 09:36:55.514605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.625 qpair failed and we were unable to recover it. 00:36:50.625 [2024-11-17 09:36:55.514744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.625 [2024-11-17 09:36:55.514780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.625 qpair failed and we were unable to recover it. 00:36:50.625 [2024-11-17 09:36:55.514888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.625 [2024-11-17 09:36:55.514924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.625 qpair failed and we were unable to recover it. 00:36:50.625 [2024-11-17 09:36:55.515029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.625 [2024-11-17 09:36:55.515064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.625 qpair failed and we were unable to recover it. 00:36:50.625 [2024-11-17 09:36:55.515197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.625 [2024-11-17 09:36:55.515232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.625 qpair failed and we were unable to recover it. 00:36:50.625 [2024-11-17 09:36:55.515347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.626 [2024-11-17 09:36:55.515391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.626 qpair failed and we were unable to recover it. 00:36:50.626 [2024-11-17 09:36:55.515505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.626 [2024-11-17 09:36:55.515539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.626 qpair failed and we were unable to recover it. 
00:36:50.626 [2024-11-17 09:36:55.515649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.626 [2024-11-17 09:36:55.515689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.626 qpair failed and we were unable to recover it. 00:36:50.626 [2024-11-17 09:36:55.515794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.626 [2024-11-17 09:36:55.515830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.626 qpair failed and we were unable to recover it. 00:36:50.626 [2024-11-17 09:36:55.515970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.626 [2024-11-17 09:36:55.516005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.626 qpair failed and we were unable to recover it. 00:36:50.626 [2024-11-17 09:36:55.516141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.626 [2024-11-17 09:36:55.516176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.626 qpair failed and we were unable to recover it. 00:36:50.626 [2024-11-17 09:36:55.516283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.626 [2024-11-17 09:36:55.516318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.626 qpair failed and we were unable to recover it. 00:36:50.626 [2024-11-17 09:36:55.516471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.626 [2024-11-17 09:36:55.516505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.626 qpair failed and we were unable to recover it. 00:36:50.626 [2024-11-17 09:36:55.516649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.626 [2024-11-17 09:36:55.516694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.626 qpair failed and we were unable to recover it. 00:36:50.626 [2024-11-17 09:36:55.516794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.626 [2024-11-17 09:36:55.516829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.626 qpair failed and we were unable to recover it. 00:36:50.626 [2024-11-17 09:36:55.516988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.626 [2024-11-17 09:36:55.517023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.626 qpair failed and we were unable to recover it. 00:36:50.626 [2024-11-17 09:36:55.517159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.626 [2024-11-17 09:36:55.517194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.626 qpair failed and we were unable to recover it. 
00:36:50.626 [2024-11-17 09:36:55.517337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.626 [2024-11-17 09:36:55.517380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.626 qpair failed and we were unable to recover it. 00:36:50.626 [2024-11-17 09:36:55.517524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.626 [2024-11-17 09:36:55.517559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.626 qpair failed and we were unable to recover it. 00:36:50.626 [2024-11-17 09:36:55.517663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.626 [2024-11-17 09:36:55.517700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.626 qpair failed and we were unable to recover it. 00:36:50.626 [2024-11-17 09:36:55.517867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.626 [2024-11-17 09:36:55.517903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.626 qpair failed and we were unable to recover it. 00:36:50.626 [2024-11-17 09:36:55.518015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.626 [2024-11-17 09:36:55.518050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.626 qpair failed and we were unable to recover it. 00:36:50.626 [2024-11-17 09:36:55.518194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.626 [2024-11-17 09:36:55.518230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.626 qpair failed and we were unable to recover it. 00:36:50.626 [2024-11-17 09:36:55.518387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.626 [2024-11-17 09:36:55.518445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.626 qpair failed and we were unable to recover it. 00:36:50.626 [2024-11-17 09:36:55.518567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.626 [2024-11-17 09:36:55.518604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.626 qpair failed and we were unable to recover it. 00:36:50.626 [2024-11-17 09:36:55.518733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.626 [2024-11-17 09:36:55.518777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.626 qpair failed and we were unable to recover it. 00:36:50.626 [2024-11-17 09:36:55.518924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.626 [2024-11-17 09:36:55.518961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.626 qpair failed and we were unable to recover it. 
00:36:50.626 [2024-11-17 09:36:55.519073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.626 [2024-11-17 09:36:55.519109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.626 qpair failed and we were unable to recover it. 00:36:50.626 [2024-11-17 09:36:55.519216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.626 [2024-11-17 09:36:55.519252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.626 qpair failed and we were unable to recover it. 00:36:50.626 [2024-11-17 09:36:55.519365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.626 [2024-11-17 09:36:55.519416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.626 qpair failed and we were unable to recover it. 00:36:50.626 [2024-11-17 09:36:55.519549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.626 [2024-11-17 09:36:55.519598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.626 qpair failed and we were unable to recover it. 00:36:50.626 [2024-11-17 09:36:55.519774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.626 [2024-11-17 09:36:55.519812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.626 qpair failed and we were unable to recover it. 00:36:50.626 [2024-11-17 09:36:55.519934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.626 [2024-11-17 09:36:55.519984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.626 qpair failed and we were unable to recover it. 00:36:50.626 [2024-11-17 09:36:55.520160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.626 [2024-11-17 09:36:55.520198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.626 qpair failed and we were unable to recover it. 00:36:50.626 [2024-11-17 09:36:55.520339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.626 [2024-11-17 09:36:55.520390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.626 qpair failed and we were unable to recover it. 00:36:50.626 [2024-11-17 09:36:55.520535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.626 [2024-11-17 09:36:55.520570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.626 qpair failed and we were unable to recover it. 00:36:50.626 [2024-11-17 09:36:55.520671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.626 [2024-11-17 09:36:55.520710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.626 qpair failed and we were unable to recover it. 
00:36:50.626 [2024-11-17 09:36:55.520816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.626 [2024-11-17 09:36:55.520851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.626 qpair failed and we were unable to recover it. 00:36:50.626 [2024-11-17 09:36:55.520971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.626 [2024-11-17 09:36:55.521009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.626 qpair failed and we were unable to recover it. 00:36:50.626 [2024-11-17 09:36:55.521160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.626 [2024-11-17 09:36:55.521196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.626 qpair failed and we were unable to recover it. 00:36:50.626 [2024-11-17 09:36:55.521331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.626 [2024-11-17 09:36:55.521374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.626 qpair failed and we were unable to recover it. 00:36:50.626 [2024-11-17 09:36:55.521520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.626 [2024-11-17 09:36:55.521555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.626 qpair failed and we were unable to recover it. 00:36:50.626 [2024-11-17 09:36:55.521694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.626 [2024-11-17 09:36:55.521730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.626 qpair failed and we were unable to recover it. 00:36:50.626 [2024-11-17 09:36:55.521842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.626 [2024-11-17 09:36:55.521879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.626 qpair failed and we were unable to recover it. 00:36:50.626 [2024-11-17 09:36:55.521991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.626 [2024-11-17 09:36:55.522027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.626 qpair failed and we were unable to recover it. 00:36:50.626 [2024-11-17 09:36:55.522160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.626 [2024-11-17 09:36:55.522196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.626 qpair failed and we were unable to recover it. 00:36:50.626 [2024-11-17 09:36:55.522311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.627 [2024-11-17 09:36:55.522347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.627 qpair failed and we were unable to recover it. 
00:36:50.627 [2024-11-17 09:36:55.522504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.627 [2024-11-17 09:36:55.522539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.627 qpair failed and we were unable to recover it. 00:36:50.627 [2024-11-17 09:36:55.522681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.627 [2024-11-17 09:36:55.522720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.627 qpair failed and we were unable to recover it. 00:36:50.627 [2024-11-17 09:36:55.522842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.627 [2024-11-17 09:36:55.522879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.627 qpair failed and we were unable to recover it. 00:36:50.627 [2024-11-17 09:36:55.523048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.627 [2024-11-17 09:36:55.523084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.627 qpair failed and we were unable to recover it. 00:36:50.627 [2024-11-17 09:36:55.523222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.627 [2024-11-17 09:36:55.523258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.627 qpair failed and we were unable to recover it. 00:36:50.627 [2024-11-17 09:36:55.523399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.627 [2024-11-17 09:36:55.523435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.627 qpair failed and we were unable to recover it. 00:36:50.627 [2024-11-17 09:36:55.523569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.627 [2024-11-17 09:36:55.523604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.627 qpair failed and we were unable to recover it. 00:36:50.627 [2024-11-17 09:36:55.523758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.627 [2024-11-17 09:36:55.523796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.627 qpair failed and we were unable to recover it. 00:36:50.627 [2024-11-17 09:36:55.523905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.627 [2024-11-17 09:36:55.523940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.627 qpair failed and we were unable to recover it. 00:36:50.627 [2024-11-17 09:36:55.524103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.627 [2024-11-17 09:36:55.524141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.627 qpair failed and we were unable to recover it. 
00:36:50.627 [2024-11-17 09:36:55.524271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.627 [2024-11-17 09:36:55.524307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.627 qpair failed and we were unable to recover it. 00:36:50.627 [2024-11-17 09:36:55.524452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.627 [2024-11-17 09:36:55.524488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.627 qpair failed and we were unable to recover it. 00:36:50.627 [2024-11-17 09:36:55.524622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.627 [2024-11-17 09:36:55.524658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.627 qpair failed and we were unable to recover it. 00:36:50.627 [2024-11-17 09:36:55.524801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.627 [2024-11-17 09:36:55.524836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.627 qpair failed and we were unable to recover it. 00:36:50.627 [2024-11-17 09:36:55.524974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.627 [2024-11-17 09:36:55.525010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.627 qpair failed and we were unable to recover it. 00:36:50.627 [2024-11-17 09:36:55.525178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.627 [2024-11-17 09:36:55.525216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.627 qpair failed and we were unable to recover it. 00:36:50.627 [2024-11-17 09:36:55.525348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.627 [2024-11-17 09:36:55.525393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.627 qpair failed and we were unable to recover it. 00:36:50.627 [2024-11-17 09:36:55.525551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.627 [2024-11-17 09:36:55.525601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.627 qpair failed and we were unable to recover it. 00:36:50.627 [2024-11-17 09:36:55.525752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.627 [2024-11-17 09:36:55.525796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.627 qpair failed and we were unable to recover it. 00:36:50.627 [2024-11-17 09:36:55.525938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.627 [2024-11-17 09:36:55.525974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.627 qpair failed and we were unable to recover it. 
00:36:50.627 [2024-11-17 09:36:55.526106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.627 [2024-11-17 09:36:55.526141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.627 qpair failed and we were unable to recover it. 00:36:50.627 [2024-11-17 09:36:55.526304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.627 [2024-11-17 09:36:55.526339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.627 qpair failed and we were unable to recover it. 00:36:50.627 [2024-11-17 09:36:55.526501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.627 [2024-11-17 09:36:55.526536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.627 qpair failed and we were unable to recover it. 00:36:50.627 [2024-11-17 09:36:55.526642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.627 [2024-11-17 09:36:55.526688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.627 qpair failed and we were unable to recover it. 00:36:50.627 [2024-11-17 09:36:55.526831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.627 [2024-11-17 09:36:55.526867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.627 qpair failed and we were unable to recover it. 00:36:50.627 [2024-11-17 09:36:55.527038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.627 [2024-11-17 09:36:55.527075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.627 qpair failed and we were unable to recover it. 00:36:50.627 [2024-11-17 09:36:55.527189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.627 [2024-11-17 09:36:55.527237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.627 qpair failed and we were unable to recover it. 00:36:50.627 [2024-11-17 09:36:55.527350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.627 [2024-11-17 09:36:55.527394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.627 qpair failed and we were unable to recover it. 00:36:50.627 [2024-11-17 09:36:55.527552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.627 [2024-11-17 09:36:55.527587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.627 qpair failed and we were unable to recover it. 00:36:50.627 [2024-11-17 09:36:55.527731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.627 [2024-11-17 09:36:55.527767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.627 qpair failed and we were unable to recover it. 
00:36:50.627 [2024-11-17 09:36:55.527877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.627 [2024-11-17 09:36:55.527913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.627 qpair failed and we were unable to recover it. 00:36:50.627 [2024-11-17 09:36:55.528058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.627 [2024-11-17 09:36:55.528095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.627 qpair failed and we were unable to recover it. 00:36:50.627 [2024-11-17 09:36:55.528240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.627 [2024-11-17 09:36:55.528276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.627 qpair failed and we were unable to recover it. 00:36:50.627 [2024-11-17 09:36:55.528389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.627 [2024-11-17 09:36:55.528430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.627 qpair failed and we were unable to recover it. 00:36:50.627 [2024-11-17 09:36:55.528536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.627 [2024-11-17 09:36:55.528570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.627 qpair failed and we were unable to recover it. 00:36:50.627 [2024-11-17 09:36:55.528712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.627 [2024-11-17 09:36:55.528747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.627 qpair failed and we were unable to recover it. 00:36:50.627 [2024-11-17 09:36:55.528852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.627 [2024-11-17 09:36:55.528886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.627 qpair failed and we were unable to recover it. 00:36:50.627 [2024-11-17 09:36:55.529027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.627 [2024-11-17 09:36:55.529062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.627 qpair failed and we were unable to recover it. 00:36:50.627 [2024-11-17 09:36:55.529198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.627 [2024-11-17 09:36:55.529233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.627 qpair failed and we were unable to recover it. 00:36:50.627 [2024-11-17 09:36:55.529377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.628 [2024-11-17 09:36:55.529416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.628 qpair failed and we were unable to recover it. 
00:36:50.628 [2024-11-17 09:36:55.529541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.628 [2024-11-17 09:36:55.529590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.628 qpair failed and we were unable to recover it. 00:36:50.628 [2024-11-17 09:36:55.529713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.628 [2024-11-17 09:36:55.529750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.628 qpair failed and we were unable to recover it. 00:36:50.628 [2024-11-17 09:36:55.529886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.628 [2024-11-17 09:36:55.529921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.628 qpair failed and we were unable to recover it. 00:36:50.628 [2024-11-17 09:36:55.530060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.628 [2024-11-17 09:36:55.530096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.628 qpair failed and we were unable to recover it. 00:36:50.628 [2024-11-17 09:36:55.530230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.628 [2024-11-17 09:36:55.530265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.628 qpair failed and we were unable to recover it. 00:36:50.628 [2024-11-17 09:36:55.530394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.628 [2024-11-17 09:36:55.530456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.628 qpair failed and we were unable to recover it. 00:36:50.628 [2024-11-17 09:36:55.530613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.628 [2024-11-17 09:36:55.530650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.628 qpair failed and we were unable to recover it. 00:36:50.628 [2024-11-17 09:36:55.530785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.628 [2024-11-17 09:36:55.530821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.628 qpair failed and we were unable to recover it. 00:36:50.628 [2024-11-17 09:36:55.530958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.628 [2024-11-17 09:36:55.530994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.628 qpair failed and we were unable to recover it. 00:36:50.628 [2024-11-17 09:36:55.531162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.628 [2024-11-17 09:36:55.531196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.628 qpair failed and we were unable to recover it. 
00:36:50.628 [2024-11-17 09:36:55.531308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.628 [2024-11-17 09:36:55.531345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.628 qpair failed and we were unable to recover it. 00:36:50.628 [2024-11-17 09:36:55.531503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.628 [2024-11-17 09:36:55.531540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.628 qpair failed and we were unable to recover it. 00:36:50.628 [2024-11-17 09:36:55.531707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.628 [2024-11-17 09:36:55.531743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.628 qpair failed and we were unable to recover it. 00:36:50.628 [2024-11-17 09:36:55.531850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.628 [2024-11-17 09:36:55.531886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.628 qpair failed and we were unable to recover it. 00:36:50.628 [2024-11-17 09:36:55.532020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.628 [2024-11-17 09:36:55.532055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.628 qpair failed and we were unable to recover it. 00:36:50.628 [2024-11-17 09:36:55.532192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.628 [2024-11-17 09:36:55.532227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.628 qpair failed and we were unable to recover it. 00:36:50.628 [2024-11-17 09:36:55.532377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.628 [2024-11-17 09:36:55.532413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.628 qpair failed and we were unable to recover it. 00:36:50.628 [2024-11-17 09:36:55.532531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.628 [2024-11-17 09:36:55.532568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.628 qpair failed and we were unable to recover it. 00:36:50.628 [2024-11-17 09:36:55.532679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.628 [2024-11-17 09:36:55.532720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.628 qpair failed and we were unable to recover it. 00:36:50.628 [2024-11-17 09:36:55.532833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.628 [2024-11-17 09:36:55.532869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.628 qpair failed and we were unable to recover it. 
00:36:50.628 [2024-11-17 09:36:55.532975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.628 [2024-11-17 09:36:55.533010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.628 qpair failed and we were unable to recover it. 00:36:50.628 [2024-11-17 09:36:55.533170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.628 [2024-11-17 09:36:55.533206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.628 qpair failed and we were unable to recover it. 00:36:50.628 [2024-11-17 09:36:55.533316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.628 [2024-11-17 09:36:55.533351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.628 qpair failed and we were unable to recover it. 00:36:50.628 [2024-11-17 09:36:55.533523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.628 [2024-11-17 09:36:55.533560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.628 qpair failed and we were unable to recover it. 00:36:50.628 [2024-11-17 09:36:55.533699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.628 [2024-11-17 09:36:55.533735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.628 qpair failed and we were unable to recover it. 00:36:50.628 [2024-11-17 09:36:55.533902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.628 [2024-11-17 09:36:55.533938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.628 qpair failed and we were unable to recover it. 00:36:50.628 [2024-11-17 09:36:55.534076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.628 [2024-11-17 09:36:55.534112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.628 qpair failed and we were unable to recover it. 00:36:50.628 [2024-11-17 09:36:55.534274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.628 [2024-11-17 09:36:55.534310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.628 qpair failed and we were unable to recover it. 00:36:50.628 [2024-11-17 09:36:55.534483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.628 [2024-11-17 09:36:55.534518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.628 qpair failed and we were unable to recover it. 00:36:50.628 [2024-11-17 09:36:55.534623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.628 [2024-11-17 09:36:55.534659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.628 qpair failed and we were unable to recover it. 
00:36:50.628 [2024-11-17 09:36:55.534799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.628 [2024-11-17 09:36:55.534835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.628 qpair failed and we were unable to recover it. 00:36:50.628 [2024-11-17 09:36:55.534938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.628 [2024-11-17 09:36:55.534973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.628 qpair failed and we were unable to recover it. 00:36:50.628 [2024-11-17 09:36:55.535122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.628 [2024-11-17 09:36:55.535158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.628 qpair failed and we were unable to recover it. 00:36:50.628 [2024-11-17 09:36:55.535259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.628 [2024-11-17 09:36:55.535296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.628 qpair failed and we were unable to recover it. 00:36:50.628 [2024-11-17 09:36:55.535411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.628 [2024-11-17 09:36:55.535449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.628 qpair failed and we were unable to recover it. 00:36:50.628 [2024-11-17 09:36:55.535584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.628 [2024-11-17 09:36:55.535618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.628 qpair failed and we were unable to recover it. 00:36:50.628 [2024-11-17 09:36:55.535746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.628 [2024-11-17 09:36:55.535782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.628 qpair failed and we were unable to recover it. 00:36:50.628 [2024-11-17 09:36:55.535950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.628 [2024-11-17 09:36:55.535987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.628 qpair failed and we were unable to recover it. 00:36:50.628 [2024-11-17 09:36:55.536094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.628 [2024-11-17 09:36:55.536130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.628 qpair failed and we were unable to recover it. 00:36:50.628 [2024-11-17 09:36:55.536292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.628 [2024-11-17 09:36:55.536328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.628 qpair failed and we were unable to recover it. 
00:36:50.628 [2024-11-17 09:36:55.536486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.629 [2024-11-17 09:36:55.536521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.629 qpair failed and we were unable to recover it. 00:36:50.629 [2024-11-17 09:36:55.536631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.629 [2024-11-17 09:36:55.536665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.629 qpair failed and we were unable to recover it. 00:36:50.629 [2024-11-17 09:36:55.536812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.629 [2024-11-17 09:36:55.536847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.629 qpair failed and we were unable to recover it. 00:36:50.629 [2024-11-17 09:36:55.536948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.629 [2024-11-17 09:36:55.536983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.629 qpair failed and we were unable to recover it. 00:36:50.629 [2024-11-17 09:36:55.537159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.629 [2024-11-17 09:36:55.537194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.629 qpair failed and we were unable to recover it. 00:36:50.629 [2024-11-17 09:36:55.537363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.629 [2024-11-17 09:36:55.537429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.629 qpair failed and we were unable to recover it. 00:36:50.629 [2024-11-17 09:36:55.537565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.629 [2024-11-17 09:36:55.537599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.629 qpair failed and we were unable to recover it. 00:36:50.629 [2024-11-17 09:36:55.537738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.629 [2024-11-17 09:36:55.537773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.629 qpair failed and we were unable to recover it. 00:36:50.629 [2024-11-17 09:36:55.537883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.629 [2024-11-17 09:36:55.537920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.629 qpair failed and we were unable to recover it. 00:36:50.629 [2024-11-17 09:36:55.538061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.629 [2024-11-17 09:36:55.538096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.629 qpair failed and we were unable to recover it. 
00:36:50.629 [2024-11-17 09:36:55.538238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.629 [2024-11-17 09:36:55.538273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.629 qpair failed and we were unable to recover it. 00:36:50.629 [2024-11-17 09:36:55.538426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.629 [2024-11-17 09:36:55.538476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.629 qpair failed and we were unable to recover it. 00:36:50.629 [2024-11-17 09:36:55.538600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.629 [2024-11-17 09:36:55.538649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.629 qpair failed and we were unable to recover it. 00:36:50.629 [2024-11-17 09:36:55.538811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.629 [2024-11-17 09:36:55.538862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.629 qpair failed and we were unable to recover it. 00:36:50.629 [2024-11-17 09:36:55.539003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.629 [2024-11-17 09:36:55.539042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.629 qpair failed and we were unable to recover it. 00:36:50.629 [2024-11-17 09:36:55.539185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.629 [2024-11-17 09:36:55.539222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.629 qpair failed and we were unable to recover it. 00:36:50.629 [2024-11-17 09:36:55.539355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.629 [2024-11-17 09:36:55.539401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.629 qpair failed and we were unable to recover it. 00:36:50.629 [2024-11-17 09:36:55.539539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.629 [2024-11-17 09:36:55.539574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.629 qpair failed and we were unable to recover it. 00:36:50.629 [2024-11-17 09:36:55.539697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.629 [2024-11-17 09:36:55.539739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.629 qpair failed and we were unable to recover it. 00:36:50.629 [2024-11-17 09:36:55.539914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.629 [2024-11-17 09:36:55.539950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.629 qpair failed and we were unable to recover it. 
00:36:50.629 [2024-11-17 09:36:55.540066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.629 [2024-11-17 09:36:55.540102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.629 qpair failed and we were unable to recover it. 00:36:50.629 [2024-11-17 09:36:55.540229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.629 [2024-11-17 09:36:55.540265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.629 qpair failed and we were unable to recover it. 00:36:50.629 [2024-11-17 09:36:55.540437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.629 [2024-11-17 09:36:55.540473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.629 qpair failed and we were unable to recover it. 00:36:50.629 [2024-11-17 09:36:55.540582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.629 [2024-11-17 09:36:55.540616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.629 qpair failed and we were unable to recover it. 00:36:50.629 [2024-11-17 09:36:55.540733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.629 [2024-11-17 09:36:55.540770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.629 qpair failed and we were unable to recover it. 00:36:50.629 [2024-11-17 09:36:55.540910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.629 [2024-11-17 09:36:55.540945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.629 qpair failed and we were unable to recover it. 00:36:50.629 [2024-11-17 09:36:55.541080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.629 [2024-11-17 09:36:55.541115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.629 qpair failed and we were unable to recover it. 00:36:50.629 [2024-11-17 09:36:55.541228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.629 [2024-11-17 09:36:55.541263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.629 qpair failed and we were unable to recover it. 00:36:50.629 [2024-11-17 09:36:55.541402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.629 [2024-11-17 09:36:55.541438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.629 qpair failed and we were unable to recover it. 00:36:50.629 [2024-11-17 09:36:55.541585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.629 [2024-11-17 09:36:55.541635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.629 qpair failed and we were unable to recover it. 
00:36:50.629 [2024-11-17 09:36:55.541789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.629 [2024-11-17 09:36:55.541828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.629 qpair failed and we were unable to recover it. 00:36:50.629 [2024-11-17 09:36:55.541991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.629 [2024-11-17 09:36:55.542027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.629 qpair failed and we were unable to recover it. 00:36:50.629 [2024-11-17 09:36:55.542147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.629 [2024-11-17 09:36:55.542184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.629 qpair failed and we were unable to recover it. 00:36:50.629 [2024-11-17 09:36:55.542350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.629 [2024-11-17 09:36:55.542395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.629 qpair failed and we were unable to recover it. 00:36:50.629 [2024-11-17 09:36:55.542509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.629 [2024-11-17 09:36:55.542544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.629 qpair failed and we were unable to recover it. 00:36:50.629 [2024-11-17 09:36:55.542702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.629 [2024-11-17 09:36:55.542753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.629 qpair failed and we were unable to recover it. 00:36:50.629 [2024-11-17 09:36:55.542899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.629 [2024-11-17 09:36:55.542937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.629 qpair failed and we were unable to recover it. 00:36:50.630 [2024-11-17 09:36:55.543077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.630 [2024-11-17 09:36:55.543114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.630 qpair failed and we were unable to recover it. 00:36:50.630 [2024-11-17 09:36:55.543254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.630 [2024-11-17 09:36:55.543290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.630 qpair failed and we were unable to recover it. 00:36:50.630 [2024-11-17 09:36:55.543418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.630 [2024-11-17 09:36:55.543453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.630 qpair failed and we were unable to recover it. 
00:36:50.630 [2024-11-17 09:36:55.543591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.630 [2024-11-17 09:36:55.543626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.630 qpair failed and we were unable to recover it. 00:36:50.630 [2024-11-17 09:36:55.543738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.630 [2024-11-17 09:36:55.543774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.630 qpair failed and we were unable to recover it. 00:36:50.630 [2024-11-17 09:36:55.543938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.630 [2024-11-17 09:36:55.543973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.630 qpair failed and we were unable to recover it. 00:36:50.630 [2024-11-17 09:36:55.544078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.630 [2024-11-17 09:36:55.544113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.630 qpair failed and we were unable to recover it. 00:36:50.630 [2024-11-17 09:36:55.544293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.630 [2024-11-17 09:36:55.544343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.630 qpair failed and we were unable to recover it. 00:36:50.630 [2024-11-17 09:36:55.544521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.630 [2024-11-17 09:36:55.544571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.630 qpair failed and we were unable to recover it. 00:36:50.630 [2024-11-17 09:36:55.544696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.630 [2024-11-17 09:36:55.544735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.630 qpair failed and we were unable to recover it. 00:36:50.630 [2024-11-17 09:36:55.544886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.630 [2024-11-17 09:36:55.544922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.630 qpair failed and we were unable to recover it. 00:36:50.630 [2024-11-17 09:36:55.545036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.630 [2024-11-17 09:36:55.545071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.630 qpair failed and we were unable to recover it. 00:36:50.630 [2024-11-17 09:36:55.545181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.630 [2024-11-17 09:36:55.545218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.630 qpair failed and we were unable to recover it. 
00:36:50.630 [2024-11-17 09:36:55.545332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.630 [2024-11-17 09:36:55.545376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.630 qpair failed and we were unable to recover it. 00:36:50.630 [2024-11-17 09:36:55.545519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.630 [2024-11-17 09:36:55.545554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.630 qpair failed and we were unable to recover it. 00:36:50.630 [2024-11-17 09:36:55.545658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.630 [2024-11-17 09:36:55.545698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.630 qpair failed and we were unable to recover it. 00:36:50.630 [2024-11-17 09:36:55.545819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.630 [2024-11-17 09:36:55.545856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.630 qpair failed and we were unable to recover it. 00:36:50.630 [2024-11-17 09:36:55.545956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.630 [2024-11-17 09:36:55.546004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.630 qpair failed and we were unable to recover it. 00:36:50.630 [2024-11-17 09:36:55.546167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.630 [2024-11-17 09:36:55.546203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.630 qpair failed and we were unable to recover it. 00:36:50.630 [2024-11-17 09:36:55.546313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.630 [2024-11-17 09:36:55.546351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.630 qpair failed and we were unable to recover it. 00:36:50.630 [2024-11-17 09:36:55.546510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.630 [2024-11-17 09:36:55.546544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.630 qpair failed and we were unable to recover it. 00:36:50.630 [2024-11-17 09:36:55.546663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.630 [2024-11-17 09:36:55.546704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.630 qpair failed and we were unable to recover it. 00:36:50.630 [2024-11-17 09:36:55.546840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.630 [2024-11-17 09:36:55.546876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.630 qpair failed and we were unable to recover it. 
00:36:50.630 [2024-11-17 09:36:55.546994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.630 [2024-11-17 09:36:55.547029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.630 qpair failed and we were unable to recover it. 00:36:50.630 [2024-11-17 09:36:55.547196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.630 [2024-11-17 09:36:55.547232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.630 qpair failed and we were unable to recover it. 00:36:50.630 [2024-11-17 09:36:55.547377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.630 [2024-11-17 09:36:55.547415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.630 qpair failed and we were unable to recover it. 00:36:50.630 [2024-11-17 09:36:55.547549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.630 [2024-11-17 09:36:55.547583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.630 qpair failed and we were unable to recover it. 00:36:50.630 [2024-11-17 09:36:55.547717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.630 [2024-11-17 09:36:55.547753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.630 qpair failed and we were unable to recover it. 00:36:50.630 [2024-11-17 09:36:55.547900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.630 [2024-11-17 09:36:55.547951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.630 qpair failed and we were unable to recover it. 00:36:50.630 [2024-11-17 09:36:55.548092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.630 [2024-11-17 09:36:55.548129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.630 qpair failed and we were unable to recover it. 00:36:50.630 [2024-11-17 09:36:55.548240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.630 [2024-11-17 09:36:55.548277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.630 qpair failed and we were unable to recover it. 00:36:50.630 [2024-11-17 09:36:55.548413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.630 [2024-11-17 09:36:55.548449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.630 qpair failed and we were unable to recover it. 00:36:50.630 [2024-11-17 09:36:55.548561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.630 [2024-11-17 09:36:55.548595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.630 qpair failed and we were unable to recover it. 
00:36:50.630 [2024-11-17 09:36:55.548732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.630 [2024-11-17 09:36:55.548767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.630 qpair failed and we were unable to recover it. 00:36:50.630 [2024-11-17 09:36:55.548902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.630 [2024-11-17 09:36:55.548938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.630 qpair failed and we were unable to recover it. 00:36:50.630 [2024-11-17 09:36:55.549068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.630 [2024-11-17 09:36:55.549104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.630 qpair failed and we were unable to recover it. 00:36:50.630 [2024-11-17 09:36:55.549281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.630 [2024-11-17 09:36:55.549317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.630 qpair failed and we were unable to recover it. 00:36:50.630 [2024-11-17 09:36:55.549436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.630 [2024-11-17 09:36:55.549471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.630 qpair failed and we were unable to recover it. 00:36:50.630 [2024-11-17 09:36:55.549599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.630 [2024-11-17 09:36:55.549633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.630 qpair failed and we were unable to recover it. 00:36:50.630 [2024-11-17 09:36:55.549774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.630 [2024-11-17 09:36:55.549809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.630 qpair failed and we were unable to recover it. 00:36:50.630 [2024-11-17 09:36:55.549970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.631 [2024-11-17 09:36:55.550006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.631 qpair failed and we were unable to recover it. 00:36:50.631 [2024-11-17 09:36:55.550112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.631 [2024-11-17 09:36:55.550147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.631 qpair failed and we were unable to recover it. 00:36:50.631 [2024-11-17 09:36:55.550258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.631 [2024-11-17 09:36:55.550293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.631 qpair failed and we were unable to recover it. 
00:36:50.631 [2024-11-17 09:36:55.550455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.631 [2024-11-17 09:36:55.550504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.631 qpair failed and we were unable to recover it. 00:36:50.631 [2024-11-17 09:36:55.550644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.631 [2024-11-17 09:36:55.550686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.631 qpair failed and we were unable to recover it. 00:36:50.631 [2024-11-17 09:36:55.550823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.631 [2024-11-17 09:36:55.550858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.631 qpair failed and we were unable to recover it. 00:36:50.631 [2024-11-17 09:36:55.550963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.631 [2024-11-17 09:36:55.550998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.631 qpair failed and we were unable to recover it. 00:36:50.631 [2024-11-17 09:36:55.551136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.631 [2024-11-17 09:36:55.551171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.631 qpair failed and we were unable to recover it. 00:36:50.631 [2024-11-17 09:36:55.551334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.631 [2024-11-17 09:36:55.551377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.631 qpair failed and we were unable to recover it. 00:36:50.631 [2024-11-17 09:36:55.551522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.631 [2024-11-17 09:36:55.551557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.631 qpair failed and we were unable to recover it. 00:36:50.631 [2024-11-17 09:36:55.551717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.631 [2024-11-17 09:36:55.551768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.631 qpair failed and we were unable to recover it. 00:36:50.631 [2024-11-17 09:36:55.551892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.631 [2024-11-17 09:36:55.551929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.631 qpair failed and we were unable to recover it. 00:36:50.631 [2024-11-17 09:36:55.552068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.631 [2024-11-17 09:36:55.552104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.631 qpair failed and we were unable to recover it. 
00:36:50.631 [2024-11-17 09:36:55.552263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.631 [2024-11-17 09:36:55.552299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.631 qpair failed and we were unable to recover it. 00:36:50.631 [2024-11-17 09:36:55.552422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.631 [2024-11-17 09:36:55.552457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.631 qpair failed and we were unable to recover it. 00:36:50.631 [2024-11-17 09:36:55.552623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.631 [2024-11-17 09:36:55.552657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.631 qpair failed and we were unable to recover it. 00:36:50.631 [2024-11-17 09:36:55.552792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.631 [2024-11-17 09:36:55.552827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.631 qpair failed and we were unable to recover it. 00:36:50.631 [2024-11-17 09:36:55.552940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.631 [2024-11-17 09:36:55.552976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.631 qpair failed and we were unable to recover it. 00:36:50.631 [2024-11-17 09:36:55.553076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.631 [2024-11-17 09:36:55.553111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.631 qpair failed and we were unable to recover it. 00:36:50.631 [2024-11-17 09:36:55.553217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.631 [2024-11-17 09:36:55.553252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.631 qpair failed and we were unable to recover it. 00:36:50.631 [2024-11-17 09:36:55.553389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.631 [2024-11-17 09:36:55.553428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.631 qpair failed and we were unable to recover it. 00:36:50.631 [2024-11-17 09:36:55.553569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.631 [2024-11-17 09:36:55.553609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.631 qpair failed and we were unable to recover it. 00:36:50.631 [2024-11-17 09:36:55.553748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.631 [2024-11-17 09:36:55.553784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.631 qpair failed and we were unable to recover it. 
00:36:50.631 [2024-11-17 09:36:55.553920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.631 [2024-11-17 09:36:55.553955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.631 qpair failed and we were unable to recover it. 00:36:50.631 [2024-11-17 09:36:55.554097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.631 [2024-11-17 09:36:55.554133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.631 qpair failed and we were unable to recover it. 00:36:50.631 [2024-11-17 09:36:55.554265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.631 [2024-11-17 09:36:55.554300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.631 qpair failed and we were unable to recover it. 00:36:50.631 [2024-11-17 09:36:55.554460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.631 [2024-11-17 09:36:55.554497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.631 qpair failed and we were unable to recover it. 00:36:50.631 [2024-11-17 09:36:55.554624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.631 [2024-11-17 09:36:55.554659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.631 qpair failed and we were unable to recover it. 00:36:50.631 [2024-11-17 09:36:55.554762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.631 [2024-11-17 09:36:55.554798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.631 qpair failed and we were unable to recover it. 00:36:50.631 [2024-11-17 09:36:55.554905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.631 [2024-11-17 09:36:55.554940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.631 qpair failed and we were unable to recover it. 00:36:50.631 [2024-11-17 09:36:55.555051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.631 [2024-11-17 09:36:55.555087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.631 qpair failed and we were unable to recover it. 00:36:50.631 [2024-11-17 09:36:55.555200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.631 [2024-11-17 09:36:55.555236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.631 qpair failed and we were unable to recover it. 00:36:50.631 [2024-11-17 09:36:55.555365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.631 [2024-11-17 09:36:55.555408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.631 qpair failed and we were unable to recover it. 
00:36:50.631 [2024-11-17 09:36:55.555510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.631 [2024-11-17 09:36:55.555545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.631 qpair failed and we were unable to recover it. 00:36:50.631 [2024-11-17 09:36:55.555685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.631 [2024-11-17 09:36:55.555721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.631 qpair failed and we were unable to recover it. 00:36:50.631 [2024-11-17 09:36:55.555830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.631 [2024-11-17 09:36:55.555865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.631 qpair failed and we were unable to recover it. 00:36:50.631 [2024-11-17 09:36:55.555988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.631 [2024-11-17 09:36:55.556024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.631 qpair failed and we were unable to recover it. 00:36:50.631 [2024-11-17 09:36:55.556141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.631 [2024-11-17 09:36:55.556177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.631 qpair failed and we were unable to recover it. 00:36:50.631 [2024-11-17 09:36:55.556316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.631 [2024-11-17 09:36:55.556351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.631 qpair failed and we were unable to recover it. 00:36:50.631 [2024-11-17 09:36:55.556540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.631 [2024-11-17 09:36:55.556575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.631 qpair failed and we were unable to recover it. 00:36:50.631 [2024-11-17 09:36:55.556719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.631 [2024-11-17 09:36:55.556755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.631 qpair failed and we were unable to recover it. 00:36:50.631 [2024-11-17 09:36:55.556869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.632 [2024-11-17 09:36:55.556905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.632 qpair failed and we were unable to recover it. 00:36:50.632 [2024-11-17 09:36:55.557045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.632 [2024-11-17 09:36:55.557081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.632 qpair failed and we were unable to recover it. 
00:36:50.632 [2024-11-17 09:36:55.557242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.632 [2024-11-17 09:36:55.557277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.632 qpair failed and we were unable to recover it. 00:36:50.632 [2024-11-17 09:36:55.557414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.632 [2024-11-17 09:36:55.557449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.632 qpair failed and we were unable to recover it. 00:36:50.632 [2024-11-17 09:36:55.557565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.632 [2024-11-17 09:36:55.557599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.632 qpair failed and we were unable to recover it. 00:36:50.632 [2024-11-17 09:36:55.557738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.632 [2024-11-17 09:36:55.557774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.632 qpair failed and we were unable to recover it. 00:36:50.632 [2024-11-17 09:36:55.557934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.632 [2024-11-17 09:36:55.557970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.632 qpair failed and we were unable to recover it. 00:36:50.632 [2024-11-17 09:36:55.558113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.632 [2024-11-17 09:36:55.558149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.632 qpair failed and we were unable to recover it. 00:36:50.632 [2024-11-17 09:36:55.558262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.632 [2024-11-17 09:36:55.558297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.632 qpair failed and we were unable to recover it. 00:36:50.632 [2024-11-17 09:36:55.558462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.632 [2024-11-17 09:36:55.558513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.632 qpair failed and we were unable to recover it. 00:36:50.632 [2024-11-17 09:36:55.558641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.632 [2024-11-17 09:36:55.558679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.632 qpair failed and we were unable to recover it. 00:36:50.632 [2024-11-17 09:36:55.558821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.632 [2024-11-17 09:36:55.558858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.632 qpair failed and we were unable to recover it. 
00:36:50.632 [2024-11-17 09:36:55.559023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.632 [2024-11-17 09:36:55.559060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.632 qpair failed and we were unable to recover it. 00:36:50.632 [2024-11-17 09:36:55.559197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.632 [2024-11-17 09:36:55.559232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.632 qpair failed and we were unable to recover it. 00:36:50.632 [2024-11-17 09:36:55.559405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.632 [2024-11-17 09:36:55.559442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.632 qpair failed and we were unable to recover it. 00:36:50.632 [2024-11-17 09:36:55.559615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.632 [2024-11-17 09:36:55.559650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.632 qpair failed and we were unable to recover it. 00:36:50.632 [2024-11-17 09:36:55.559768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.632 [2024-11-17 09:36:55.559803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.632 qpair failed and we were unable to recover it. 00:36:50.632 [2024-11-17 09:36:55.559967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.632 [2024-11-17 09:36:55.560004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.632 qpair failed and we were unable to recover it. 00:36:50.632 [2024-11-17 09:36:55.560143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.632 [2024-11-17 09:36:55.560178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.632 qpair failed and we were unable to recover it. 00:36:50.632 [2024-11-17 09:36:55.560316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.632 [2024-11-17 09:36:55.560351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.632 qpair failed and we were unable to recover it. 00:36:50.632 [2024-11-17 09:36:55.560493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.632 [2024-11-17 09:36:55.560534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.632 qpair failed and we were unable to recover it. 00:36:50.632 [2024-11-17 09:36:55.560669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.632 [2024-11-17 09:36:55.560704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.632 qpair failed and we were unable to recover it. 
00:36:50.632 [2024-11-17 09:36:55.560817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.632 [2024-11-17 09:36:55.560852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.632 qpair failed and we were unable to recover it. 00:36:50.632 [2024-11-17 09:36:55.561030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.632 [2024-11-17 09:36:55.561065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.632 qpair failed and we were unable to recover it. 00:36:50.632 [2024-11-17 09:36:55.561207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.632 [2024-11-17 09:36:55.561242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.632 qpair failed and we were unable to recover it. 00:36:50.632 [2024-11-17 09:36:55.561359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.632 [2024-11-17 09:36:55.561405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.632 qpair failed and we were unable to recover it. 00:36:50.632 [2024-11-17 09:36:55.561522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.632 [2024-11-17 09:36:55.561558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.632 qpair failed and we were unable to recover it. 00:36:50.632 [2024-11-17 09:36:55.561666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.632 [2024-11-17 09:36:55.561702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.632 qpair failed and we were unable to recover it. 00:36:50.632 [2024-11-17 09:36:55.561863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.632 [2024-11-17 09:36:55.561898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.632 qpair failed and we were unable to recover it. 00:36:50.632 [2024-11-17 09:36:55.562002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.632 [2024-11-17 09:36:55.562038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.632 qpair failed and we were unable to recover it. 00:36:50.632 [2024-11-17 09:36:55.562174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.632 [2024-11-17 09:36:55.562210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.632 qpair failed and we were unable to recover it. 00:36:50.632 [2024-11-17 09:36:55.562362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.632 [2024-11-17 09:36:55.562418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.632 qpair failed and we were unable to recover it. 
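The same connect() failure repeats across several distinct qpair objects (tqpair=0x6150001ffe80, 0x6150001f2f00, 0x615000210000 and 0x61500021ff00), so every queue pair aimed at 10.0.0.2:4420 is being refused, not just one. A small illustrative C filter, a hypothetical helper that assumes one log entry per line on stdin, tallies how many connection errors each tqpair address accumulates:

/* Illustrative log-triage helper (not part of the test suite): read log
 * text on stdin, one entry per line, and count how many connection-error
 * entries mention each tqpair address. */
#include <stdio.h>
#include <string.h>

#define MAX_QPAIRS 64

int main(void)
{
    char line[4096];
    char addrs[MAX_QPAIRS][32];
    int counts[MAX_QPAIRS] = {0};
    int n = 0;

    while (fgets(line, sizeof(line), stdin)) {
        char *p = strstr(line, "tqpair=");
        if (!p)
            continue;
        char addr[32];
        if (sscanf(p, "tqpair=%31[^ ]", addr) != 1)
            continue;

        int i;
        for (i = 0; i < n; i++)
            if (strcmp(addrs[i], addr) == 0)
                break;
        if (i == n && n < MAX_QPAIRS) {
            snprintf(addrs[n], sizeof(addrs[n]), "%s", addr);
            n++;
        }
        if (i < MAX_QPAIRS)
            counts[i]++;
    }

    for (int i = 0; i < n; i++)
        printf("%s: %d connection errors\n", addrs[i], counts[i]);

    return 0;
}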
00:36:50.632 [2024-11-17 09:36:55.562566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.632 [2024-11-17 09:36:55.562606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.632 qpair failed and we were unable to recover it. 00:36:50.632 [2024-11-17 09:36:55.562745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.632 [2024-11-17 09:36:55.562782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.632 qpair failed and we were unable to recover it. 00:36:50.632 [2024-11-17 09:36:55.562924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.632 [2024-11-17 09:36:55.562961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.632 qpair failed and we were unable to recover it. 00:36:50.632 [2024-11-17 09:36:55.563096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.632 [2024-11-17 09:36:55.563132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.632 qpair failed and we were unable to recover it. 00:36:50.632 [2024-11-17 09:36:55.563270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.632 [2024-11-17 09:36:55.563306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.632 qpair failed and we were unable to recover it. 00:36:50.632 [2024-11-17 09:36:55.563439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.632 [2024-11-17 09:36:55.563491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.632 qpair failed and we were unable to recover it. 00:36:50.632 [2024-11-17 09:36:55.563632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.632 [2024-11-17 09:36:55.563670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.632 qpair failed and we were unable to recover it. 00:36:50.632 [2024-11-17 09:36:55.563814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.632 [2024-11-17 09:36:55.563849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.632 qpair failed and we were unable to recover it. 00:36:50.632 [2024-11-17 09:36:55.563994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.632 [2024-11-17 09:36:55.564030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.632 qpair failed and we were unable to recover it. 00:36:50.633 [2024-11-17 09:36:55.564158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.633 [2024-11-17 09:36:55.564195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.633 qpair failed and we were unable to recover it. 
00:36:50.633 [2024-11-17 09:36:55.564330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.633 [2024-11-17 09:36:55.564375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.633 qpair failed and we were unable to recover it. 00:36:50.633 [2024-11-17 09:36:55.564482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.633 [2024-11-17 09:36:55.564525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.633 qpair failed and we were unable to recover it. 00:36:50.633 [2024-11-17 09:36:55.564667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.633 [2024-11-17 09:36:55.564703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.633 qpair failed and we were unable to recover it. 00:36:50.633 [2024-11-17 09:36:55.564810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.633 [2024-11-17 09:36:55.564846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.633 qpair failed and we were unable to recover it. 00:36:50.633 [2024-11-17 09:36:55.564957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.633 [2024-11-17 09:36:55.564993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.633 qpair failed and we were unable to recover it. 00:36:50.633 [2024-11-17 09:36:55.565177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.633 [2024-11-17 09:36:55.565226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.633 qpair failed and we were unable to recover it. 00:36:50.633 [2024-11-17 09:36:55.565348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.633 [2024-11-17 09:36:55.565395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.633 qpair failed and we were unable to recover it. 00:36:50.633 [2024-11-17 09:36:55.565560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.633 [2024-11-17 09:36:55.565614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.633 qpair failed and we were unable to recover it. 00:36:50.633 [2024-11-17 09:36:55.565766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.633 [2024-11-17 09:36:55.565805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.633 qpair failed and we were unable to recover it. 00:36:50.633 [2024-11-17 09:36:55.565945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.633 [2024-11-17 09:36:55.565982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.633 qpair failed and we were unable to recover it. 
00:36:50.633 [2024-11-17 09:36:55.566087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.633 [2024-11-17 09:36:55.566123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.633 qpair failed and we were unable to recover it. 00:36:50.633 [2024-11-17 09:36:55.566281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.633 [2024-11-17 09:36:55.566317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.633 qpair failed and we were unable to recover it. 00:36:50.633 [2024-11-17 09:36:55.566437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.633 [2024-11-17 09:36:55.566475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.633 qpair failed and we were unable to recover it. 00:36:50.633 [2024-11-17 09:36:55.566590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.633 [2024-11-17 09:36:55.566626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.633 qpair failed and we were unable to recover it. 00:36:50.633 [2024-11-17 09:36:55.566759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.633 [2024-11-17 09:36:55.566794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.633 qpair failed and we were unable to recover it. 00:36:50.633 [2024-11-17 09:36:55.566913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.633 [2024-11-17 09:36:55.566951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.633 qpair failed and we were unable to recover it. 00:36:50.633 [2024-11-17 09:36:55.567138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.633 [2024-11-17 09:36:55.567176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.633 qpair failed and we were unable to recover it. 00:36:50.633 [2024-11-17 09:36:55.567308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.633 [2024-11-17 09:36:55.567356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.633 qpair failed and we were unable to recover it. 00:36:50.633 [2024-11-17 09:36:55.567513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.633 [2024-11-17 09:36:55.567555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.633 qpair failed and we were unable to recover it. 00:36:50.633 [2024-11-17 09:36:55.567688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.633 [2024-11-17 09:36:55.567723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.633 qpair failed and we were unable to recover it. 
00:36:50.633 [2024-11-17 09:36:55.567857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.633 [2024-11-17 09:36:55.567892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.633 qpair failed and we were unable to recover it. 00:36:50.633 [2024-11-17 09:36:55.568032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.633 [2024-11-17 09:36:55.568068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.633 qpair failed and we were unable to recover it. 00:36:50.633 [2024-11-17 09:36:55.568206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.633 [2024-11-17 09:36:55.568242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.633 qpair failed and we were unable to recover it. 00:36:50.633 [2024-11-17 09:36:55.568385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.633 [2024-11-17 09:36:55.568424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.633 qpair failed and we were unable to recover it. 00:36:50.633 [2024-11-17 09:36:55.568558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.633 [2024-11-17 09:36:55.568594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.633 qpair failed and we were unable to recover it. 00:36:50.633 [2024-11-17 09:36:55.568705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.633 [2024-11-17 09:36:55.568740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.633 qpair failed and we were unable to recover it. 00:36:50.633 [2024-11-17 09:36:55.568853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.633 [2024-11-17 09:36:55.568889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.633 qpair failed and we were unable to recover it. 00:36:50.633 [2024-11-17 09:36:55.569060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.633 [2024-11-17 09:36:55.569096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.633 qpair failed and we were unable to recover it. 00:36:50.633 [2024-11-17 09:36:55.569201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.633 [2024-11-17 09:36:55.569236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.633 qpair failed and we were unable to recover it. 00:36:50.633 [2024-11-17 09:36:55.569376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.633 [2024-11-17 09:36:55.569413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.633 qpair failed and we were unable to recover it. 
00:36:50.633 [2024-11-17 09:36:55.569549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.633 [2024-11-17 09:36:55.569585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.633 qpair failed and we were unable to recover it. 00:36:50.633 [2024-11-17 09:36:55.569749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.633 [2024-11-17 09:36:55.569784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.633 qpair failed and we were unable to recover it. 00:36:50.633 [2024-11-17 09:36:55.569965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.633 [2024-11-17 09:36:55.570001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.633 qpair failed and we were unable to recover it. 00:36:50.633 [2024-11-17 09:36:55.570140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.633 [2024-11-17 09:36:55.570177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.633 qpair failed and we were unable to recover it. 00:36:50.633 [2024-11-17 09:36:55.570339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.633 [2024-11-17 09:36:55.570381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:36:50.633 qpair failed and we were unable to recover it. 00:36:50.634 [2024-11-17 09:36:55.570504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.634 [2024-11-17 09:36:55.570542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.634 qpair failed and we were unable to recover it. 00:36:50.634 [2024-11-17 09:36:55.570696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.634 [2024-11-17 09:36:55.570746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.634 qpair failed and we were unable to recover it. 00:36:50.634 [2024-11-17 09:36:55.570866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.634 [2024-11-17 09:36:55.570904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.634 qpair failed and we were unable to recover it. 00:36:50.634 [2024-11-17 09:36:55.571029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.634 [2024-11-17 09:36:55.571067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.634 qpair failed and we were unable to recover it. 00:36:50.634 [2024-11-17 09:36:55.571201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.634 [2024-11-17 09:36:55.571238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.634 qpair failed and we were unable to recover it. 
00:36:50.634 [2024-11-17 09:36:55.571380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.634 [2024-11-17 09:36:55.571417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.634 qpair failed and we were unable to recover it. 00:36:50.634 [2024-11-17 09:36:55.571559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.634 [2024-11-17 09:36:55.571595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.634 qpair failed and we were unable to recover it. 00:36:50.634 [2024-11-17 09:36:55.571709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.634 [2024-11-17 09:36:55.571745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.634 qpair failed and we were unable to recover it. 00:36:50.634 [2024-11-17 09:36:55.571882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.634 [2024-11-17 09:36:55.571918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.634 qpair failed and we were unable to recover it. 00:36:50.634 [2024-11-17 09:36:55.572057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.634 [2024-11-17 09:36:55.572095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.634 qpair failed and we were unable to recover it. 00:36:50.634 [2024-11-17 09:36:55.572253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.634 [2024-11-17 09:36:55.572289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.634 qpair failed and we were unable to recover it. 00:36:50.634 [2024-11-17 09:36:55.572407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.634 [2024-11-17 09:36:55.572444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.634 qpair failed and we were unable to recover it. 00:36:50.634 [2024-11-17 09:36:55.572587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.634 [2024-11-17 09:36:55.572622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.634 qpair failed and we were unable to recover it. 00:36:50.634 [2024-11-17 09:36:55.572736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.634 [2024-11-17 09:36:55.572772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.634 qpair failed and we were unable to recover it. 00:36:50.634 [2024-11-17 09:36:55.572876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.634 [2024-11-17 09:36:55.572911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.634 qpair failed and we were unable to recover it. 
00:36:50.634 [2024-11-17 09:36:55.573052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.634 [2024-11-17 09:36:55.573088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.634 qpair failed and we were unable to recover it. 00:36:50.634 [2024-11-17 09:36:55.573225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.634 [2024-11-17 09:36:55.573261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.634 qpair failed and we were unable to recover it. 00:36:50.634 [2024-11-17 09:36:55.573389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.634 [2024-11-17 09:36:55.573440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.634 qpair failed and we were unable to recover it. 00:36:50.634 [2024-11-17 09:36:55.573603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.634 [2024-11-17 09:36:55.573654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.634 qpair failed and we were unable to recover it. 00:36:50.634 [2024-11-17 09:36:55.573776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.634 [2024-11-17 09:36:55.573813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.634 qpair failed and we were unable to recover it. 00:36:50.634 [2024-11-17 09:36:55.573927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.634 [2024-11-17 09:36:55.573963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.634 qpair failed and we were unable to recover it. 00:36:50.634 [2024-11-17 09:36:55.574075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.634 [2024-11-17 09:36:55.574111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.634 qpair failed and we were unable to recover it. 00:36:50.634 [2024-11-17 09:36:55.574244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.634 [2024-11-17 09:36:55.574279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.634 qpair failed and we were unable to recover it. 00:36:50.634 [2024-11-17 09:36:55.574446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.634 [2024-11-17 09:36:55.574488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.634 qpair failed and we were unable to recover it. 00:36:50.634 [2024-11-17 09:36:55.574618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.634 [2024-11-17 09:36:55.574655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.634 qpair failed and we were unable to recover it. 
00:36:50.634 [2024-11-17 09:36:55.574770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.634 [2024-11-17 09:36:55.574807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.634 qpair failed and we were unable to recover it. 00:36:50.634 [2024-11-17 09:36:55.574972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.634 [2024-11-17 09:36:55.575008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.634 qpair failed and we were unable to recover it. 00:36:50.634 [2024-11-17 09:36:55.575119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.634 [2024-11-17 09:36:55.575155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.634 qpair failed and we were unable to recover it. 00:36:50.634 [2024-11-17 09:36:55.575298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.634 [2024-11-17 09:36:55.575335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.634 qpair failed and we were unable to recover it. 00:36:50.634 [2024-11-17 09:36:55.575458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.634 [2024-11-17 09:36:55.575495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.634 qpair failed and we were unable to recover it. 00:36:50.896 [2024-11-17 09:36:55.575619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.896 [2024-11-17 09:36:55.575658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.896 qpair failed and we were unable to recover it. 00:36:50.896 [2024-11-17 09:36:55.575771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.896 [2024-11-17 09:36:55.575807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.896 qpair failed and we were unable to recover it. 00:36:50.896 [2024-11-17 09:36:55.575918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.896 [2024-11-17 09:36:55.575954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.896 qpair failed and we were unable to recover it. 00:36:50.896 [2024-11-17 09:36:55.576067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.896 [2024-11-17 09:36:55.576103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.896 qpair failed and we were unable to recover it. 00:36:50.896 [2024-11-17 09:36:55.576221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.896 [2024-11-17 09:36:55.576257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.896 qpair failed and we were unable to recover it. 
00:36:50.896 [2024-11-17 09:36:55.576394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.896 [2024-11-17 09:36:55.576435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.896 qpair failed and we were unable to recover it. 00:36:50.896 [2024-11-17 09:36:55.576558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.896 [2024-11-17 09:36:55.576594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.896 qpair failed and we were unable to recover it. 00:36:50.896 [2024-11-17 09:36:55.576722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.896 [2024-11-17 09:36:55.576758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.896 qpair failed and we were unable to recover it. 00:36:50.896 [2024-11-17 09:36:55.576880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.896 [2024-11-17 09:36:55.576918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.896 qpair failed and we were unable to recover it. 00:36:50.896 [2024-11-17 09:36:55.577034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.897 [2024-11-17 09:36:55.577070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.897 qpair failed and we were unable to recover it. 00:36:50.897 [2024-11-17 09:36:55.577174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.897 [2024-11-17 09:36:55.577209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.897 qpair failed and we were unable to recover it. 00:36:50.897 [2024-11-17 09:36:55.577330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.897 [2024-11-17 09:36:55.577378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.897 qpair failed and we were unable to recover it. 00:36:50.897 [2024-11-17 09:36:55.577501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.897 [2024-11-17 09:36:55.577550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.897 qpair failed and we were unable to recover it. 00:36:50.897 [2024-11-17 09:36:55.577686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.897 [2024-11-17 09:36:55.577725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.897 qpair failed and we were unable to recover it. 00:36:50.897 [2024-11-17 09:36:55.577837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.897 [2024-11-17 09:36:55.577873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.897 qpair failed and we were unable to recover it. 
00:36:50.897 [2024-11-17 09:36:55.577985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.897 [2024-11-17 09:36:55.578022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.897 qpair failed and we were unable to recover it. 00:36:50.897 [2024-11-17 09:36:55.578141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.897 [2024-11-17 09:36:55.578177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.897 qpair failed and we were unable to recover it. 00:36:50.897 [2024-11-17 09:36:55.578286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.897 [2024-11-17 09:36:55.578320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.897 qpair failed and we were unable to recover it. 00:36:50.897 [2024-11-17 09:36:55.578435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.897 [2024-11-17 09:36:55.578471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.897 qpair failed and we were unable to recover it. 00:36:50.897 [2024-11-17 09:36:55.578586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.897 [2024-11-17 09:36:55.578625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.897 qpair failed and we were unable to recover it. 00:36:50.897 [2024-11-17 09:36:55.578758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.897 [2024-11-17 09:36:55.578795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.897 qpair failed and we were unable to recover it. 00:36:50.897 [2024-11-17 09:36:55.578904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.897 [2024-11-17 09:36:55.578948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.897 qpair failed and we were unable to recover it. 00:36:50.897 [2024-11-17 09:36:55.579066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.897 [2024-11-17 09:36:55.579101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.897 qpair failed and we were unable to recover it. 00:36:50.897 [2024-11-17 09:36:55.579234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.897 [2024-11-17 09:36:55.579270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.897 qpair failed and we were unable to recover it. 00:36:50.897 [2024-11-17 09:36:55.579423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.897 [2024-11-17 09:36:55.579460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.897 qpair failed and we were unable to recover it. 
00:36:50.897 [2024-11-17 09:36:55.579593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.897 [2024-11-17 09:36:55.579628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.897 qpair failed and we were unable to recover it. 00:36:50.897 [2024-11-17 09:36:55.579741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.897 [2024-11-17 09:36:55.579791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.897 qpair failed and we were unable to recover it. 00:36:50.897 [2024-11-17 09:36:55.579950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.897 [2024-11-17 09:36:55.580004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.897 qpair failed and we were unable to recover it. 00:36:50.897 [2024-11-17 09:36:55.580152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.897 [2024-11-17 09:36:55.580195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.897 qpair failed and we were unable to recover it. 00:36:50.897 [2024-11-17 09:36:55.580320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.897 [2024-11-17 09:36:55.580363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.897 qpair failed and we were unable to recover it. 00:36:50.897 [2024-11-17 09:36:55.580497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.897 [2024-11-17 09:36:55.580540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:36:50.897 qpair failed and we were unable to recover it. 00:36:50.897 [2024-11-17 09:36:55.580689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.897 [2024-11-17 09:36:55.580738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.897 qpair failed and we were unable to recover it. 00:36:50.897 [2024-11-17 09:36:55.580847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.897 [2024-11-17 09:36:55.580886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.897 qpair failed and we were unable to recover it. 00:36:50.897 [2024-11-17 09:36:55.581006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.897 [2024-11-17 09:36:55.581048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.897 qpair failed and we were unable to recover it. 00:36:50.897 [2024-11-17 09:36:55.581162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.897 [2024-11-17 09:36:55.581199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.897 qpair failed and we were unable to recover it. 
00:36:50.897 [2024-11-17 09:36:55.581314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.897 [2024-11-17 09:36:55.581350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.897 qpair failed and we were unable to recover it. 00:36:50.897 [2024-11-17 09:36:55.581522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.897 [2024-11-17 09:36:55.581573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.897 qpair failed and we were unable to recover it. 00:36:50.897 [2024-11-17 09:36:55.581701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.897 [2024-11-17 09:36:55.581739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.897 qpair failed and we were unable to recover it. 00:36:50.897 [2024-11-17 09:36:55.581906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.897 [2024-11-17 09:36:55.581942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.897 qpair failed and we were unable to recover it. 00:36:50.897 [2024-11-17 09:36:55.582062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.897 [2024-11-17 09:36:55.582098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.897 qpair failed and we were unable to recover it. 00:36:50.897 [2024-11-17 09:36:55.582239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.897 [2024-11-17 09:36:55.582274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.897 qpair failed and we were unable to recover it. 00:36:50.897 [2024-11-17 09:36:55.582408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.897 [2024-11-17 09:36:55.582444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.897 qpair failed and we were unable to recover it. 00:36:50.897 [2024-11-17 09:36:55.582551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.897 [2024-11-17 09:36:55.582586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.897 qpair failed and we were unable to recover it. 00:36:50.897 [2024-11-17 09:36:55.582694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.897 [2024-11-17 09:36:55.582729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.897 qpair failed and we were unable to recover it. 00:36:50.897 [2024-11-17 09:36:55.582899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.897 [2024-11-17 09:36:55.582935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.897 qpair failed and we were unable to recover it. 
00:36:50.897 [2024-11-17 09:36:55.583078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.897 [2024-11-17 09:36:55.583117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.897 qpair failed and we were unable to recover it. 00:36:50.898 [2024-11-17 09:36:55.583261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.898 [2024-11-17 09:36:55.583296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.898 qpair failed and we were unable to recover it. 00:36:50.898 [2024-11-17 09:36:55.583450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.898 [2024-11-17 09:36:55.583486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.898 qpair failed and we were unable to recover it. 00:36:50.898 [2024-11-17 09:36:55.583604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.898 [2024-11-17 09:36:55.583642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.898 qpair failed and we were unable to recover it. 00:36:50.898 [2024-11-17 09:36:55.583763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.898 [2024-11-17 09:36:55.583798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.898 qpair failed and we were unable to recover it. 00:36:50.898 [2024-11-17 09:36:55.583928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.898 [2024-11-17 09:36:55.583964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.898 qpair failed and we were unable to recover it. 00:36:50.898 [2024-11-17 09:36:55.584078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.898 [2024-11-17 09:36:55.584113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.898 qpair failed and we were unable to recover it. 00:36:50.898 [2024-11-17 09:36:55.584249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.898 [2024-11-17 09:36:55.584284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.898 qpair failed and we were unable to recover it. 00:36:50.898 [2024-11-17 09:36:55.584401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.898 [2024-11-17 09:36:55.584437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.898 qpair failed and we were unable to recover it. 00:36:50.898 [2024-11-17 09:36:55.584548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.898 [2024-11-17 09:36:55.584584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.898 qpair failed and we were unable to recover it. 
00:36:50.898 [2024-11-17 09:36:55.584720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.898 [2024-11-17 09:36:55.584755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.898 qpair failed and we were unable to recover it. 00:36:50.898 [2024-11-17 09:36:55.584865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.898 [2024-11-17 09:36:55.584903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.898 qpair failed and we were unable to recover it. 00:36:50.898 [2024-11-17 09:36:55.585042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.898 [2024-11-17 09:36:55.585078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.898 qpair failed and we were unable to recover it. 00:36:50.898 [2024-11-17 09:36:55.585191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.898 [2024-11-17 09:36:55.585226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.898 qpair failed and we were unable to recover it. 00:36:50.898 [2024-11-17 09:36:55.585341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.898 [2024-11-17 09:36:55.585385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.898 qpair failed and we were unable to recover it. 00:36:50.898 [2024-11-17 09:36:55.585540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.898 [2024-11-17 09:36:55.585578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.898 qpair failed and we were unable to recover it. 00:36:50.898 [2024-11-17 09:36:55.585691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.898 [2024-11-17 09:36:55.585727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.898 qpair failed and we were unable to recover it. 00:36:50.898 [2024-11-17 09:36:55.585845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.898 [2024-11-17 09:36:55.585880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.898 qpair failed and we were unable to recover it. 00:36:50.898 [2024-11-17 09:36:55.585984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.898 [2024-11-17 09:36:55.586020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.898 qpair failed and we were unable to recover it. 00:36:50.898 [2024-11-17 09:36:55.586197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.898 [2024-11-17 09:36:55.586234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.898 qpair failed and we were unable to recover it. 
00:36:50.898 [2024-11-17 09:36:55.586380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.898 [2024-11-17 09:36:55.586417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.898 qpair failed and we were unable to recover it. 00:36:50.898 [2024-11-17 09:36:55.586549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.898 [2024-11-17 09:36:55.586584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.898 qpair failed and we were unable to recover it. 00:36:50.898 [2024-11-17 09:36:55.586697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.898 [2024-11-17 09:36:55.586732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.898 qpair failed and we were unable to recover it. 00:36:50.898 [2024-11-17 09:36:55.586876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.898 [2024-11-17 09:36:55.586912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.898 qpair failed and we were unable to recover it. 00:36:50.898 [2024-11-17 09:36:55.587017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.898 [2024-11-17 09:36:55.587052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.898 qpair failed and we were unable to recover it. 00:36:50.898 [2024-11-17 09:36:55.587164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.898 [2024-11-17 09:36:55.587199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.898 qpair failed and we were unable to recover it. 00:36:50.898 [2024-11-17 09:36:55.587337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.898 [2024-11-17 09:36:55.587381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.898 qpair failed and we were unable to recover it. 00:36:50.898 [2024-11-17 09:36:55.587495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.898 [2024-11-17 09:36:55.587529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.898 qpair failed and we were unable to recover it. 00:36:50.898 [2024-11-17 09:36:55.587632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.898 [2024-11-17 09:36:55.587667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.898 qpair failed and we were unable to recover it. 00:36:50.898 [2024-11-17 09:36:55.587804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.898 [2024-11-17 09:36:55.587839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.898 qpair failed and we were unable to recover it. 
00:36:50.898 [2024-11-17 09:36:55.587985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.898 [2024-11-17 09:36:55.588021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.898 qpair failed and we were unable to recover it. 00:36:50.898 [2024-11-17 09:36:55.588168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.898 [2024-11-17 09:36:55.588203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.898 qpair failed and we were unable to recover it. 00:36:50.898 [2024-11-17 09:36:55.588237] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:50.898 [2024-11-17 09:36:55.588306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.898 [2024-11-17 09:36:55.588306] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:50.898 [2024-11-17 09:36:55.588335] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:50.898 [2024-11-17 09:36:55.588339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.898 qpair failed and we were unable to recover it. 00:36:50.898 [2024-11-17 09:36:55.588359] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:50.898 [2024-11-17 09:36:55.588388] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:50.898 [2024-11-17 09:36:55.588452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.898 [2024-11-17 09:36:55.588485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.898 qpair failed and we were unable to recover it. 00:36:50.898 [2024-11-17 09:36:55.588618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.898 [2024-11-17 09:36:55.588651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.898 qpair failed and we were unable to recover it. 00:36:50.898 [2024-11-17 09:36:55.588782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.898 [2024-11-17 09:36:55.588815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.898 qpair failed and we were unable to recover it. 00:36:50.898 [2024-11-17 09:36:55.588955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.898 [2024-11-17 09:36:55.588990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.899 qpair failed and we were unable to recover it. 00:36:50.899 [2024-11-17 09:36:55.589133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.899 [2024-11-17 09:36:55.589168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.899 qpair failed and we were unable to recover it. 
00:36:50.899 [2024-11-17 09:36:55.589283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.899 [2024-11-17 09:36:55.589318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.899 qpair failed and we were unable to recover it. 00:36:50.899 [2024-11-17 09:36:55.589423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.899 [2024-11-17 09:36:55.589460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.899 qpair failed and we were unable to recover it. 00:36:50.899 [2024-11-17 09:36:55.589603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.899 [2024-11-17 09:36:55.589638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.899 qpair failed and we were unable to recover it. 00:36:50.899 [2024-11-17 09:36:55.589798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.899 [2024-11-17 09:36:55.589833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.899 qpair failed and we were unable to recover it. 00:36:50.899 [2024-11-17 09:36:55.589969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.899 [2024-11-17 09:36:55.590005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.899 qpair failed and we were unable to recover it. 00:36:50.899 [2024-11-17 09:36:55.590144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.899 [2024-11-17 09:36:55.590179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.899 qpair failed and we were unable to recover it. 00:36:50.899 [2024-11-17 09:36:55.590315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.899 [2024-11-17 09:36:55.590350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.899 qpair failed and we were unable to recover it. 00:36:50.899 [2024-11-17 09:36:55.590541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.899 [2024-11-17 09:36:55.590578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.899 qpair failed and we were unable to recover it. 00:36:50.899 [2024-11-17 09:36:55.590713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.899 [2024-11-17 09:36:55.590748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.899 qpair failed and we were unable to recover it. 00:36:50.899 [2024-11-17 09:36:55.590880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.899 [2024-11-17 09:36:55.590915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.899 qpair failed and we were unable to recover it. 
00:36:50.899 [2024-11-17 09:36:55.591021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.899 [2024-11-17 09:36:55.591056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.899 qpair failed and we were unable to recover it. 00:36:50.899 [2024-11-17 09:36:55.591091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:36:50.899 [2024-11-17 09:36:55.591163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.899 [2024-11-17 09:36:55.591146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:36:50.899 [2024-11-17 09:36:55.591196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.899 qpair failed and we were unable to recover it. 00:36:50.899 [2024-11-17 09:36:55.591210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:36:50.899 [2024-11-17 09:36:55.591211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:36:50.899 [2024-11-17 09:36:55.591325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.899 [2024-11-17 09:36:55.591358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.899 qpair failed and we were unable to recover it. 00:36:50.899 [2024-11-17 09:36:55.591501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.899 [2024-11-17 09:36:55.591549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.899 qpair failed and we were unable to recover it. 00:36:50.899 [2024-11-17 09:36:55.591674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.899 [2024-11-17 09:36:55.591712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.899 qpair failed and we were unable to recover it. 00:36:50.899 [2024-11-17 09:36:55.591826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.899 [2024-11-17 09:36:55.591862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.899 qpair failed and we were unable to recover it. 00:36:50.899 [2024-11-17 09:36:55.591998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.899 [2024-11-17 09:36:55.592034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.899 qpair failed and we were unable to recover it. 00:36:50.899 [2024-11-17 09:36:55.592137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.899 [2024-11-17 09:36:55.592172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.899 qpair failed and we were unable to recover it. 00:36:50.899 [2024-11-17 09:36:55.592309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.899 [2024-11-17 09:36:55.592345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.899 qpair failed and we were unable to recover it. 
00:36:50.899 [2024-11-17 09:36:55.592486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.899 [2024-11-17 09:36:55.592522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.899 qpair failed and we were unable to recover it. 00:36:50.899 [2024-11-17 09:36:55.592626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.899 [2024-11-17 09:36:55.592661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.899 qpair failed and we were unable to recover it. 00:36:50.899 [2024-11-17 09:36:55.592777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.899 [2024-11-17 09:36:55.592812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.899 qpair failed and we were unable to recover it. 00:36:50.899 [2024-11-17 09:36:55.592944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.899 [2024-11-17 09:36:55.592979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.899 qpair failed and we were unable to recover it. 00:36:50.899 [2024-11-17 09:36:55.593087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.899 [2024-11-17 09:36:55.593123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.899 qpair failed and we were unable to recover it. 00:36:50.899 [2024-11-17 09:36:55.593238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.899 [2024-11-17 09:36:55.593275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.899 qpair failed and we were unable to recover it. 00:36:50.899 [2024-11-17 09:36:55.593422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.899 [2024-11-17 09:36:55.593459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.899 qpair failed and we were unable to recover it. 00:36:50.899 [2024-11-17 09:36:55.593570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.899 [2024-11-17 09:36:55.593605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.899 qpair failed and we were unable to recover it. 00:36:50.899 [2024-11-17 09:36:55.593711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.899 [2024-11-17 09:36:55.593752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.899 qpair failed and we were unable to recover it. 00:36:50.899 [2024-11-17 09:36:55.593873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.899 [2024-11-17 09:36:55.593909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.899 qpair failed and we were unable to recover it. 
00:36:50.899 [2024-11-17 09:36:55.594048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.899 [2024-11-17 09:36:55.594084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.899 qpair failed and we were unable to recover it. 00:36:50.899 [2024-11-17 09:36:55.594196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.899 [2024-11-17 09:36:55.594232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.899 qpair failed and we were unable to recover it. 00:36:50.899 [2024-11-17 09:36:55.594365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.899 [2024-11-17 09:36:55.594406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.899 qpair failed and we were unable to recover it. 00:36:50.899 [2024-11-17 09:36:55.594510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.899 [2024-11-17 09:36:55.594545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.899 qpair failed and we were unable to recover it. 00:36:50.899 [2024-11-17 09:36:55.594653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.899 [2024-11-17 09:36:55.594688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.899 qpair failed and we were unable to recover it. 00:36:50.899 [2024-11-17 09:36:55.594827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.900 [2024-11-17 09:36:55.594862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.900 qpair failed and we were unable to recover it. 00:36:50.900 [2024-11-17 09:36:55.594967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.900 [2024-11-17 09:36:55.595003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.900 qpair failed and we were unable to recover it. 00:36:50.900 [2024-11-17 09:36:55.595137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.900 [2024-11-17 09:36:55.595172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.900 qpair failed and we were unable to recover it. 00:36:50.900 [2024-11-17 09:36:55.595291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.900 [2024-11-17 09:36:55.595328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.900 qpair failed and we were unable to recover it. 00:36:50.900 [2024-11-17 09:36:55.595448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.900 [2024-11-17 09:36:55.595485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.900 qpair failed and we were unable to recover it. 
00:36:50.900 [2024-11-17 09:36:55.595637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.900 [2024-11-17 09:36:55.595673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.900 qpair failed and we were unable to recover it. 00:36:50.900 [2024-11-17 09:36:55.595841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.900 [2024-11-17 09:36:55.595876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.900 qpair failed and we were unable to recover it. 00:36:50.900 [2024-11-17 09:36:55.595994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.900 [2024-11-17 09:36:55.596031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.900 qpair failed and we were unable to recover it. 00:36:50.900 [2024-11-17 09:36:55.596183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.900 [2024-11-17 09:36:55.596218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.900 qpair failed and we were unable to recover it. 00:36:50.900 [2024-11-17 09:36:55.596338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.900 [2024-11-17 09:36:55.596381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.900 qpair failed and we were unable to recover it. 00:36:50.900 [2024-11-17 09:36:55.596492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.900 [2024-11-17 09:36:55.596527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.900 qpair failed and we were unable to recover it. 00:36:50.900 [2024-11-17 09:36:55.596664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.900 [2024-11-17 09:36:55.596700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.900 qpair failed and we were unable to recover it. 00:36:50.900 [2024-11-17 09:36:55.596838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.900 [2024-11-17 09:36:55.596873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.900 qpair failed and we were unable to recover it. 00:36:50.900 [2024-11-17 09:36:55.596982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.900 [2024-11-17 09:36:55.597018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.900 qpair failed and we were unable to recover it. 00:36:50.900 [2024-11-17 09:36:55.597126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.900 [2024-11-17 09:36:55.597161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.900 qpair failed and we were unable to recover it. 
00:36:50.900 [2024-11-17 09:36:55.597280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.900 [2024-11-17 09:36:55.597318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.900 qpair failed and we were unable to recover it. 00:36:50.900 [2024-11-17 09:36:55.597440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.900 [2024-11-17 09:36:55.597476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.900 qpair failed and we were unable to recover it. 00:36:50.900 [2024-11-17 09:36:55.597588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.900 [2024-11-17 09:36:55.597624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.900 qpair failed and we were unable to recover it. 00:36:50.900 [2024-11-17 09:36:55.597771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.900 [2024-11-17 09:36:55.597806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.900 qpair failed and we were unable to recover it. 00:36:50.900 [2024-11-17 09:36:55.597917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.900 [2024-11-17 09:36:55.597953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.900 qpair failed and we were unable to recover it. 00:36:50.900 [2024-11-17 09:36:55.598071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.900 [2024-11-17 09:36:55.598106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.900 qpair failed and we were unable to recover it. 00:36:50.900 [2024-11-17 09:36:55.598210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.900 [2024-11-17 09:36:55.598245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.900 qpair failed and we were unable to recover it. 00:36:50.900 [2024-11-17 09:36:55.598365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.900 [2024-11-17 09:36:55.598411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.900 qpair failed and we were unable to recover it. 00:36:50.900 [2024-11-17 09:36:55.598524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.900 [2024-11-17 09:36:55.598560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.900 qpair failed and we were unable to recover it. 00:36:50.900 [2024-11-17 09:36:55.598663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.900 [2024-11-17 09:36:55.598698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.900 qpair failed and we were unable to recover it. 
00:36:50.900 [2024-11-17 09:36:55.598806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.900 [2024-11-17 09:36:55.598842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.900 qpair failed and we were unable to recover it. 00:36:50.900 [2024-11-17 09:36:55.598961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.900 [2024-11-17 09:36:55.598997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.900 qpair failed and we were unable to recover it. 00:36:50.900 [2024-11-17 09:36:55.599131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.900 [2024-11-17 09:36:55.599166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.900 qpair failed and we were unable to recover it. 00:36:50.900 [2024-11-17 09:36:55.599274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.900 [2024-11-17 09:36:55.599309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.900 qpair failed and we were unable to recover it. 00:36:50.900 [2024-11-17 09:36:55.599440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.900 [2024-11-17 09:36:55.599476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.900 qpair failed and we were unable to recover it. 00:36:50.900 [2024-11-17 09:36:55.599585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.900 [2024-11-17 09:36:55.599620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.900 qpair failed and we were unable to recover it. 00:36:50.900 [2024-11-17 09:36:55.599754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.900 [2024-11-17 09:36:55.599789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.900 qpair failed and we were unable to recover it. 00:36:50.900 [2024-11-17 09:36:55.599896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.900 [2024-11-17 09:36:55.599931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.900 qpair failed and we were unable to recover it. 00:36:50.900 [2024-11-17 09:36:55.600067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.900 [2024-11-17 09:36:55.600107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.900 qpair failed and we were unable to recover it. 00:36:50.900 [2024-11-17 09:36:55.600217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.900 [2024-11-17 09:36:55.600252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.900 qpair failed and we were unable to recover it. 
00:36:50.900 [2024-11-17 09:36:55.600405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.900 [2024-11-17 09:36:55.600457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.900 qpair failed and we were unable to recover it. 00:36:50.900 [2024-11-17 09:36:55.600609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.900 [2024-11-17 09:36:55.600647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.900 qpair failed and we were unable to recover it. 00:36:50.900 [2024-11-17 09:36:55.600788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.901 [2024-11-17 09:36:55.600824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.901 qpair failed and we were unable to recover it. 00:36:50.901 [2024-11-17 09:36:55.600935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.901 [2024-11-17 09:36:55.600971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.901 qpair failed and we were unable to recover it. 00:36:50.901 [2024-11-17 09:36:55.601083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.901 [2024-11-17 09:36:55.601119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.901 qpair failed and we were unable to recover it. 00:36:50.901 [2024-11-17 09:36:55.601244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.901 [2024-11-17 09:36:55.601280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.901 qpair failed and we were unable to recover it. 00:36:50.901 [2024-11-17 09:36:55.601388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.901 [2024-11-17 09:36:55.601425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.901 qpair failed and we were unable to recover it. 00:36:50.901 [2024-11-17 09:36:55.601526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.901 [2024-11-17 09:36:55.601562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.901 qpair failed and we were unable to recover it. 00:36:50.901 [2024-11-17 09:36:55.601674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.901 [2024-11-17 09:36:55.601710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.901 qpair failed and we were unable to recover it. 00:36:50.901 [2024-11-17 09:36:55.601853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.901 [2024-11-17 09:36:55.601888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.901 qpair failed and we were unable to recover it. 
00:36:50.901 [2024-11-17 09:36:55.602001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.901 [2024-11-17 09:36:55.602036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.901 qpair failed and we were unable to recover it. 00:36:50.901 [2024-11-17 09:36:55.602161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.901 [2024-11-17 09:36:55.602197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.901 qpair failed and we were unable to recover it. 00:36:50.901 [2024-11-17 09:36:55.602318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.901 [2024-11-17 09:36:55.602355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.901 qpair failed and we were unable to recover it. 00:36:50.901 [2024-11-17 09:36:55.602493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.901 [2024-11-17 09:36:55.602529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.901 qpair failed and we were unable to recover it. 00:36:50.901 [2024-11-17 09:36:55.602657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.901 [2024-11-17 09:36:55.602692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.901 qpair failed and we were unable to recover it. 00:36:50.901 [2024-11-17 09:36:55.602808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.901 [2024-11-17 09:36:55.602843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.901 qpair failed and we were unable to recover it. 00:36:50.901 [2024-11-17 09:36:55.602949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.901 [2024-11-17 09:36:55.602985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.901 qpair failed and we were unable to recover it. 00:36:50.901 [2024-11-17 09:36:55.603089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.901 [2024-11-17 09:36:55.603125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.901 qpair failed and we were unable to recover it. 00:36:50.901 [2024-11-17 09:36:55.603251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.901 [2024-11-17 09:36:55.603286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.901 qpair failed and we were unable to recover it. 00:36:50.901 [2024-11-17 09:36:55.603449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.901 [2024-11-17 09:36:55.603485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.901 qpair failed and we were unable to recover it. 
00:36:50.901 [2024-11-17 09:36:55.603598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.901 [2024-11-17 09:36:55.603651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.901 qpair failed and we were unable to recover it. 00:36:50.901 [2024-11-17 09:36:55.603797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.901 [2024-11-17 09:36:55.603832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.901 qpair failed and we were unable to recover it. 00:36:50.901 [2024-11-17 09:36:55.603971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.901 [2024-11-17 09:36:55.604007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.901 qpair failed and we were unable to recover it. 00:36:50.901 [2024-11-17 09:36:55.604125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.901 [2024-11-17 09:36:55.604161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.901 qpair failed and we were unable to recover it. 00:36:50.901 [2024-11-17 09:36:55.604324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.901 [2024-11-17 09:36:55.604359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.901 qpair failed and we were unable to recover it. 00:36:50.901 [2024-11-17 09:36:55.604486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.901 [2024-11-17 09:36:55.604523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.901 qpair failed and we were unable to recover it. 00:36:50.901 [2024-11-17 09:36:55.604639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.901 [2024-11-17 09:36:55.604675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.901 qpair failed and we were unable to recover it. 00:36:50.901 [2024-11-17 09:36:55.604793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.901 [2024-11-17 09:36:55.604828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.901 qpair failed and we were unable to recover it. 00:36:50.901 [2024-11-17 09:36:55.604939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.901 [2024-11-17 09:36:55.604976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.901 qpair failed and we were unable to recover it. 00:36:50.901 [2024-11-17 09:36:55.605083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.901 [2024-11-17 09:36:55.605118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.901 qpair failed and we were unable to recover it. 
00:36:50.901 [2024-11-17 09:36:55.605231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.901 [2024-11-17 09:36:55.605267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.901 qpair failed and we were unable to recover it. 00:36:50.901 [2024-11-17 09:36:55.605409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.901 [2024-11-17 09:36:55.605445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.901 qpair failed and we were unable to recover it. 00:36:50.901 [2024-11-17 09:36:55.605554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.901 [2024-11-17 09:36:55.605589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.901 qpair failed and we were unable to recover it. 00:36:50.901 [2024-11-17 09:36:55.605723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.901 [2024-11-17 09:36:55.605758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.901 qpair failed and we were unable to recover it. 00:36:50.902 [2024-11-17 09:36:55.605870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.902 [2024-11-17 09:36:55.605905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.902 qpair failed and we were unable to recover it. 00:36:50.902 [2024-11-17 09:36:55.606040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.902 [2024-11-17 09:36:55.606076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.902 qpair failed and we were unable to recover it. 00:36:50.902 [2024-11-17 09:36:55.606219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.902 [2024-11-17 09:36:55.606255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.902 qpair failed and we were unable to recover it. 00:36:50.902 [2024-11-17 09:36:55.606398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.902 [2024-11-17 09:36:55.606435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.902 qpair failed and we were unable to recover it. 00:36:50.902 [2024-11-17 09:36:55.606546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.902 [2024-11-17 09:36:55.606587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.902 qpair failed and we were unable to recover it. 00:36:50.902 [2024-11-17 09:36:55.606696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.902 [2024-11-17 09:36:55.606732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.902 qpair failed and we were unable to recover it. 
00:36:50.902 [2024-11-17 09:36:55.606868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.902 [2024-11-17 09:36:55.606904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.902 qpair failed and we were unable to recover it. 00:36:50.902 [2024-11-17 09:36:55.607024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.902 [2024-11-17 09:36:55.607059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.902 qpair failed and we were unable to recover it. 00:36:50.902 [2024-11-17 09:36:55.607194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.902 [2024-11-17 09:36:55.607229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.902 qpair failed and we were unable to recover it. 00:36:50.902 [2024-11-17 09:36:55.607338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.902 [2024-11-17 09:36:55.607382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.902 qpair failed and we were unable to recover it. 00:36:50.902 [2024-11-17 09:36:55.607522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.902 [2024-11-17 09:36:55.607558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.902 qpair failed and we were unable to recover it. 00:36:50.902 [2024-11-17 09:36:55.607677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.902 [2024-11-17 09:36:55.607712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.902 qpair failed and we were unable to recover it. 00:36:50.902 [2024-11-17 09:36:55.607855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.902 [2024-11-17 09:36:55.607890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.902 qpair failed and we were unable to recover it. 00:36:50.902 [2024-11-17 09:36:55.608000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.902 [2024-11-17 09:36:55.608035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.902 qpair failed and we were unable to recover it. 00:36:50.902 [2024-11-17 09:36:55.608174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.902 [2024-11-17 09:36:55.608210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.902 qpair failed and we were unable to recover it. 00:36:50.902 [2024-11-17 09:36:55.608320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.902 [2024-11-17 09:36:55.608355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.902 qpair failed and we were unable to recover it. 
00:36:50.902 [2024-11-17 09:36:55.608475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.902 [2024-11-17 09:36:55.608510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.902 qpair failed and we were unable to recover it. 00:36:50.902 [2024-11-17 09:36:55.608628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.902 [2024-11-17 09:36:55.608663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.902 qpair failed and we were unable to recover it. 00:36:50.902 [2024-11-17 09:36:55.608772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.902 [2024-11-17 09:36:55.608807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.902 qpair failed and we were unable to recover it. 00:36:50.902 [2024-11-17 09:36:55.608921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.902 [2024-11-17 09:36:55.608957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.902 qpair failed and we were unable to recover it. 00:36:50.902 [2024-11-17 09:36:55.609120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.902 [2024-11-17 09:36:55.609156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.902 qpair failed and we were unable to recover it. 00:36:50.902 [2024-11-17 09:36:55.609271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.902 [2024-11-17 09:36:55.609307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:36:50.902 qpair failed and we were unable to recover it. 00:36:50.902 [2024-11-17 09:36:55.609449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.902 [2024-11-17 09:36:55.609502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.902 qpair failed and we were unable to recover it. 00:36:50.902 [2024-11-17 09:36:55.609629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.902 [2024-11-17 09:36:55.609668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.902 qpair failed and we were unable to recover it. 00:36:50.902 [2024-11-17 09:36:55.609806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.902 [2024-11-17 09:36:55.609842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.902 qpair failed and we were unable to recover it. 00:36:50.902 [2024-11-17 09:36:55.609961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.902 [2024-11-17 09:36:55.609996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.902 qpair failed and we were unable to recover it. 
00:36:50.902 [2024-11-17 09:36:55.610116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.902 [2024-11-17 09:36:55.610153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.902 qpair failed and we were unable to recover it. 00:36:50.902 [2024-11-17 09:36:55.610291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.902 [2024-11-17 09:36:55.610326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.902 qpair failed and we were unable to recover it. 00:36:50.902 [2024-11-17 09:36:55.610483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.902 [2024-11-17 09:36:55.610519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.902 qpair failed and we were unable to recover it. 00:36:50.902 [2024-11-17 09:36:55.610623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.902 [2024-11-17 09:36:55.610659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.902 qpair failed and we were unable to recover it. 00:36:50.902 [2024-11-17 09:36:55.610774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.902 [2024-11-17 09:36:55.610811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:36:50.902 qpair failed and we were unable to recover it. 00:36:50.902 A controller has encountered a failure and is being reset. 00:36:50.902 qpair failed and we were unable to recover it. 00:36:50.902 qpair failed and we were unable to recover it. 00:36:50.902 qpair failed and we were unable to recover it. 00:36:50.902 qpair failed and we were unable to recover it. 00:36:50.902 [2024-11-17 09:36:55.611110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.902 [2024-11-17 09:36:55.611166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:36:50.902 [2024-11-17 09:36:55.611198] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2780 is same with the state(6) to be set 00:36:50.902 [2024-11-17 09:36:55.611243] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2780 (9): Bad file descriptor 00:36:50.902 [2024-11-17 09:36:55.611274] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:36:50.902 [2024-11-17 09:36:55.611301] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:36:50.902 [2024-11-17 09:36:55.611336] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:36:50.902 Unable to reset the controller. 
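Editor's note: the retry loop above keeps failing with errno = 111, which on Linux is ECONNREFUSED - while the target side is down, every TCP connect() to 10.0.0.2:4420 is refused, and once the host gives up the controller cannot be re-initialized ("Unable to reset the controller"). As a rough illustration only (this probe is not part of the SPDK test suite; the address and port are copied from the log lines above), the same condition can be checked from a shell:

  TARGET_IP=10.0.0.2    # listener address used by the test above
  TARGET_PORT=4420      # NVMe/TCP port from the log
  # bash's /dev/tcp pseudo-device issues a plain connect(); with no listener
  # bound it fails with "Connection refused", the same errno 111 seen above.
  if timeout 2 bash -c "exec 3<>/dev/tcp/${TARGET_IP}/${TARGET_PORT}" 2>/dev/null; then
      echo "listener reachable on ${TARGET_IP}:${TARGET_PORT}"
  else
      echo "connect refused - target not listening yet (errno 111 in the log)"
  fi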
00:36:51.468 09:36:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:51.468 09:36:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:36:51.468 09:36:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:51.468 09:36:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:51.468 09:36:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:51.468 09:36:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:51.468 09:36:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:51.468 09:36:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:51.468 09:36:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:51.468 Malloc0 00:36:51.468 09:36:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:51.468 09:36:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:36:51.468 09:36:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:51.468 09:36:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:51.468 [2024-11-17 09:36:56.365757] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:51.468 09:36:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:51.468 09:36:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:51.468 09:36:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:51.468 09:36:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:51.468 09:36:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:51.468 09:36:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:51.468 09:36:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:51.468 09:36:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:51.468 09:36:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:51.468 09:36:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:51.468 09:36:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:51.468 09:36:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:51.468 [2024-11-17 09:36:56.395682] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:51.468 09:36:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:51.468 09:36:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:51.468 09:36:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:51.468 09:36:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:51.468 09:36:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:51.468 09:36:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3141587 00:36:52.033 Controller properly reset. 00:36:57.297 Initializing NVMe Controllers 00:36:57.297 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:57.297 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:57.297 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:36:57.297 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:36:57.297 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:36:57.297 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:36:57.297 Initialization complete. Launching workers. 
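Editor's note: the rpc_cmd sequence traced above builds the target side that the disconnect test reconnects to - a 64 MB malloc bdev, a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as a namespace, and data plus discovery listeners on 10.0.0.2:4420. As a hedged sketch only (it assumes rpc_cmd here resolves to scripts/rpc.py with the same arguments and that an nvmf_tgt application is already running; every argument is copied verbatim from the trace), the same bring-up issued directly would look like:

  RPC=./scripts/rpc.py   # assumed location inside the SPDK source tree
  $RPC bdev_malloc_create 64 512 -b Malloc0            # 64 MB bdev, 512-byte blocks
  $RPC nvmf_create_transport -t tcp -o                 # flags copied verbatim from the trace
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420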
00:36:57.297 Starting thread on core 1 00:36:57.297 Starting thread on core 2 00:36:57.297 Starting thread on core 3 00:36:57.297 Starting thread on core 0 00:36:57.297 09:37:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:36:57.297 00:36:57.297 real 0m11.589s 00:36:57.297 user 0m36.573s 00:36:57.297 sys 0m7.479s 00:36:57.297 09:37:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:57.297 09:37:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:57.297 ************************************ 00:36:57.297 END TEST nvmf_target_disconnect_tc2 00:36:57.297 ************************************ 00:36:57.297 09:37:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:36:57.297 09:37:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:36:57.297 09:37:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:36:57.297 09:37:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:57.297 09:37:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:36:57.297 09:37:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:57.297 09:37:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:36:57.297 09:37:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:57.297 09:37:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:57.297 rmmod nvme_tcp 00:36:57.297 rmmod nvme_fabrics 00:36:57.297 rmmod nvme_keyring 00:36:57.297 09:37:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:57.297 09:37:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:36:57.297 09:37:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:36:57.297 09:37:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 3142076 ']' 00:36:57.297 09:37:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 3142076 00:36:57.297 09:37:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 3142076 ']' 00:36:57.297 09:37:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 3142076 00:36:57.297 09:37:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:36:57.297 09:37:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:57.297 09:37:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3142076 00:36:57.297 09:37:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:36:57.297 09:37:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:36:57.297 09:37:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3142076' 00:36:57.297 killing process with pid 3142076 00:36:57.297 09:37:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@973 -- # kill 3142076 00:36:57.297 09:37:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 3142076 00:36:57.864 09:37:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:57.864 09:37:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:57.864 09:37:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:57.864 09:37:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:36:57.864 09:37:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:36:57.864 09:37:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:57.864 09:37:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:36:57.864 09:37:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:57.864 09:37:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:57.864 09:37:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:57.864 09:37:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:57.864 09:37:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:00.398 09:37:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:00.398 00:37:00.398 real 0m17.700s 00:37:00.398 user 1m4.825s 00:37:00.398 sys 0m10.157s 00:37:00.398 09:37:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:00.398 09:37:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:37:00.398 ************************************ 00:37:00.398 END TEST nvmf_target_disconnect 00:37:00.398 ************************************ 00:37:00.398 09:37:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:37:00.398 00:37:00.398 real 7m40.036s 00:37:00.398 user 19m54.320s 00:37:00.398 sys 1m33.388s 00:37:00.398 09:37:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:00.398 09:37:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:37:00.398 ************************************ 00:37:00.398 END TEST nvmf_host 00:37:00.398 ************************************ 00:37:00.398 09:37:04 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:37:00.398 09:37:04 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:37:00.398 09:37:04 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:37:00.398 09:37:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:00.398 09:37:04 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:00.398 09:37:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:00.398 ************************************ 00:37:00.398 START TEST nvmf_target_core_interrupt_mode 00:37:00.398 ************************************ 00:37:00.398 09:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:37:00.398 * Looking for test storage... 00:37:00.398 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:37:00.398 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:00.398 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lcov --version 00:37:00.398 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:00.398 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:00.398 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:00.398 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:00.398 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:00.398 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:37:00.398 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:37:00.398 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:37:00.398 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:37:00.398 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:37:00.398 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:37:00.398 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:37:00.398 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:00.398 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:37:00.398 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:37:00.398 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:00.398 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:00.398 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:37:00.398 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:37:00.398 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:00.398 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:37:00.398 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:37:00.398 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:37:00.398 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:37:00.398 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:00.398 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:37:00.398 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:37:00.398 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:00.398 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:00.398 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:37:00.398 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:00.398 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:00.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:00.398 --rc genhtml_branch_coverage=1 00:37:00.398 --rc genhtml_function_coverage=1 00:37:00.398 --rc genhtml_legend=1 00:37:00.398 --rc geninfo_all_blocks=1 00:37:00.398 --rc geninfo_unexecuted_blocks=1 00:37:00.398 00:37:00.398 ' 00:37:00.398 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:00.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:00.398 --rc genhtml_branch_coverage=1 00:37:00.398 --rc genhtml_function_coverage=1 00:37:00.398 --rc genhtml_legend=1 00:37:00.398 --rc geninfo_all_blocks=1 00:37:00.398 --rc geninfo_unexecuted_blocks=1 00:37:00.398 00:37:00.398 ' 00:37:00.398 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:00.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:00.398 --rc genhtml_branch_coverage=1 00:37:00.398 --rc genhtml_function_coverage=1 00:37:00.398 --rc genhtml_legend=1 00:37:00.398 --rc geninfo_all_blocks=1 00:37:00.398 --rc geninfo_unexecuted_blocks=1 00:37:00.398 00:37:00.398 ' 00:37:00.398 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:00.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:00.398 --rc genhtml_branch_coverage=1 00:37:00.398 --rc genhtml_function_coverage=1 00:37:00.398 --rc genhtml_legend=1 00:37:00.398 --rc geninfo_all_blocks=1 00:37:00.398 --rc geninfo_unexecuted_blocks=1 00:37:00.398 00:37:00.398 ' 00:37:00.399 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:37:00.399 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:37:00.399 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:00.399 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:37:00.399 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:00.399 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:00.399 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:00.399 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:00.399 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:00.399 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:00.399 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:00.399 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:00.399 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:00.399 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:00.399 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:00.399 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:00.399 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:00.399 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:00.399 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:00.399 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:00.399 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:00.399 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:37:00.399 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:00.399 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:00.399 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:00.399 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:00.399 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:00.399 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:00.399 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:37:00.399 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:00.399 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:37:00.399 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:00.399 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:00.399 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:00.399 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:00.399 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:00.399 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:00.399 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:00.399 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:00.399 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:00.399 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:00.399 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:37:00.399 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:37:00.399 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:37:00.399 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:37:00.399 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:00.399 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:00.399 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:00.399 ************************************ 00:37:00.399 START TEST nvmf_abort 00:37:00.399 ************************************ 00:37:00.399 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:37:00.399 * Looking for test storage... 00:37:00.399 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:00.399 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:00.399 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:37:00.399 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:00.399 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:00.399 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:00.399 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:00.399 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:00.399 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:37:00.399 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:37:00.399 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:37:00.399 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:37:00.399 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:37:00.399 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:37:00.399 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:37:00.399 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:00.399 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:37:00.399 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:37:00.399 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:00.399 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:00.399 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:37:00.399 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:37:00.399 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:00.399 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:37:00.399 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:37:00.399 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:37:00.399 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:37:00.399 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:00.399 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:37:00.399 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:37:00.399 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:00.399 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:00.399 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:37:00.399 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:00.399 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:00.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:00.399 --rc genhtml_branch_coverage=1 00:37:00.399 --rc genhtml_function_coverage=1 00:37:00.399 --rc genhtml_legend=1 00:37:00.399 --rc geninfo_all_blocks=1 00:37:00.399 --rc geninfo_unexecuted_blocks=1 00:37:00.399 00:37:00.399 ' 00:37:00.399 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:00.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:00.399 --rc genhtml_branch_coverage=1 00:37:00.399 --rc genhtml_function_coverage=1 00:37:00.399 --rc genhtml_legend=1 00:37:00.399 --rc geninfo_all_blocks=1 00:37:00.399 --rc geninfo_unexecuted_blocks=1 00:37:00.400 00:37:00.400 ' 00:37:00.400 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:00.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:00.400 --rc genhtml_branch_coverage=1 00:37:00.400 --rc genhtml_function_coverage=1 00:37:00.400 --rc genhtml_legend=1 00:37:00.400 --rc geninfo_all_blocks=1 00:37:00.400 --rc geninfo_unexecuted_blocks=1 00:37:00.400 00:37:00.400 ' 00:37:00.400 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:00.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:00.400 --rc genhtml_branch_coverage=1 00:37:00.400 --rc genhtml_function_coverage=1 00:37:00.400 --rc genhtml_legend=1 00:37:00.400 --rc geninfo_all_blocks=1 00:37:00.400 --rc geninfo_unexecuted_blocks=1 00:37:00.400 00:37:00.400 ' 00:37:00.400 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:00.400 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:37:00.400 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:00.400 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:00.400 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:00.400 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:00.400 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:00.400 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:00.400 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:00.400 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:00.400 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:00.400 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:00.400 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:00.400 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:00.400 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:00.400 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:00.400 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:00.400 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:00.400 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:00.400 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:37:00.400 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:00.400 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:00.400 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:00.400 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:00.400 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:00.400 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:00.400 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:37:00.400 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:00.400 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:37:00.400 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:00.400 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:00.400 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:00.400 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:00.400 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:00.400 09:37:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:00.400 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:00.400 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:00.400 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:00.400 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:00.400 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:00.400 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:37:00.400 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:37:00.400 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:00.400 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:00.400 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:00.400 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:00.400 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:00.400 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:00.400 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:00.400 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:00.400 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:00.400 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:00.400 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:37:00.400 09:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:02.931 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:02.931 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:37:02.931 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:02.931 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:02.931 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:02.931 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:02.931 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:02.931 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:37:02.931 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:02.931 09:37:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:37:02.931 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:37:02.931 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:37:02.931 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:37:02.931 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:37:02.931 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:37:02.931 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:02.931 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:02.931 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:02.931 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:02.931 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:02.931 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:02.931 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:02.931 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:02.932 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:02.932 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:02.932 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:02.932 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:02.932 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:02.932 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:02.932 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:02.932 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:02.932 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:02.932 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:02.932 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:02.932 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:02.932 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:02.932 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
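The gather_supported_nvmf_pci_devs trace above builds whitelists of fabrics-capable NICs by PCI ID (Intel E810 0x1592/0x159b, X722 0x37d2, and several Mellanox ConnectX IDs), keeps the two E810 ports present on this host, and then resolves each PCI function to its kernel interface through sysfs, which is where the "Found net devices under 0000:0a:00.x" lines that follow come from. A minimal sketch of that lookup, assuming the PCI addresses from this run (not the nvmf/common.sh source itself):

```bash
# Illustrative sketch: map a PCI function such as 0000:0a:00.0 (0x8086:0x159b, an
# E810 port bound to the "ice" driver) to the kernel netdev names the rest of the
# test uses, mirroring the pci_net_devs lookup in the trace above.
for pci in 0000:0a:00.0 0000:0a:00.1; do   # addresses assumed from this run
    vendor=$(cat "/sys/bus/pci/devices/$pci/vendor")
    device=$(cat "/sys/bus/pci/devices/$pci/device")
    echo "Found $pci ($vendor - $device)"
    # every interface created by the bound driver shows up under .../net/
    for net_dev in "/sys/bus/pci/devices/$pci/net/"*; do
        [[ -e $net_dev ]] && echo "Found net devices under $pci: ${net_dev##*/}"
    done
done
```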
00:37:02.932 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:02.932 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:02.932 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:02.932 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:02.932 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:02.932 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:02.932 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:02.932 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:02.932 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:02.932 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:02.932 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:02.932 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:02.932 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:02.932 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:02.932 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:02.932 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:02.932 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:02.932 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:02.932 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:02.932 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:02.932 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:02.932 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:02.932 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:02.932 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:02.932 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:02.932 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:02.932 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:02.932 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:02.932 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:37:02.932 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:02.932 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:02.932 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:02.932 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:02.932 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:02.932 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:02.932 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:02.932 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:37:02.932 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:02.932 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:02.932 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:02.932 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:02.932 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:02.932 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:02.932 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:02.932 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:02.932 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:02.932 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:02.932 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:02.932 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:02.932 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:02.932 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:02.932 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:02.932 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:02.932 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:02.932 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:02.932 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:02.932 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:02.932 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:02.932 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:02.932 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:02.932 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:02.932 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:02.932 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:02.932 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:02.932 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:37:02.932 00:37:02.932 --- 10.0.0.2 ping statistics --- 00:37:02.932 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:02.932 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:37:02.932 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:02.932 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:02.932 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:37:02.932 00:37:02.932 --- 10.0.0.1 ping statistics --- 00:37:02.932 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:02.932 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:37:02.932 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:02.932 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:37:02.932 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:02.932 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:02.932 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:02.932 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:02.932 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:02.932 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:02.932 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:02.932 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:37:02.932 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:02.932 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:02.932 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:02.932 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=3145017 00:37:02.932 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:37:02.932 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 3145017 00:37:02.932 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 3145017 ']' 00:37:02.933 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:02.933 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:02.933 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:02.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:02.933 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:02.933 09:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:02.933 [2024-11-17 09:37:07.616466] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:02.933 [2024-11-17 09:37:07.620236] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:37:02.933 [2024-11-17 09:37:07.620379] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:02.933 [2024-11-17 09:37:07.793612] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:02.933 [2024-11-17 09:37:07.933013] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:02.933 [2024-11-17 09:37:07.933094] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:02.933 [2024-11-17 09:37:07.933123] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:02.933 [2024-11-17 09:37:07.933145] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:02.933 [2024-11-17 09:37:07.933167] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:02.933 [2024-11-17 09:37:07.935768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:02.933 [2024-11-17 09:37:07.935856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:02.933 [2024-11-17 09:37:07.935864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:03.509 [2024-11-17 09:37:08.300700] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:03.509 [2024-11-17 09:37:08.301834] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:03.509 [2024-11-17 09:37:08.302618] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
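Before the target was launched, nvmf_tcp_init (traced a few entries up) carved the two E810 ports into a small two-host topology: the target interface cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace with 10.0.0.2/24, the initiator keeps cvl_0_1 with 10.0.0.1/24, TCP port 4420 is opened, and a ping in each direction verifies the path. Condensed into a standalone sketch (names and addresses taken from this run; the harness additionally tags the iptables rule with an SPDK_NVMF comment so it can be removed selectively later):

```bash
# Sketch of the nvmf_tcp_init topology traced above: physical E810 ports, with the
# target side isolated in a network namespace so target and initiator talk over
# real NICs on the same host.
TARGET_IF=cvl_0_0
INITIATOR_IF=cvl_0_1
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"                  # target port now lives in the namespace
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"           # initiator side, default namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
ping -c 1 10.0.0.2                                    # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1                # target -> initiator
```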
00:37:03.509 [2024-11-17 09:37:08.302988] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:37:03.768 09:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:03.768 09:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:37:03.768 09:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:03.768 09:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:03.768 09:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:03.768 09:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:03.768 09:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:37:03.768 09:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:03.768 09:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:03.768 [2024-11-17 09:37:08.568955] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:03.768 09:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:03.768 09:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:37:03.768 09:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:03.768 09:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:03.768 Malloc0 00:37:03.768 09:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:03.768 09:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:37:03.768 09:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:03.768 09:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:03.768 Delay0 00:37:03.768 09:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:03.768 09:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:37:03.768 09:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:03.768 09:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:03.768 09:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:03.768 09:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:37:03.768 09:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 
00:37:03.768 09:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:03.768 09:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:03.768 09:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:03.768 09:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:03.768 09:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:03.768 [2024-11-17 09:37:08.701121] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:03.768 09:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:03.768 09:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:03.768 09:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:03.768 09:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:03.768 09:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:03.768 09:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:37:04.026 [2024-11-17 09:37:08.894519] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:37:06.556 Initializing NVMe Controllers 00:37:06.556 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:37:06.556 controller IO queue size 128 less than required 00:37:06.556 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:37:06.556 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:37:06.556 Initialization complete. Launching workers. 
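The rpc_cmd calls traced in this block are the whole abort.sh setup: a TCP transport, a 64 MiB malloc bdev wrapped in a delay bdev with artificial latency (so the abort example has queued I/O to cancel), a subsystem carrying that namespace, and data plus discovery listeners on 10.0.0.2:4420, after which the abort example is pointed at the target. A hedged, standalone reconstruction (rpc_cmd in the harness wraps scripts/rpc.py; paths are relative to the SPDK tree):

```bash
# Reconstruction of the abort.sh sequence traced here, issued directly against the
# default /var/tmp/spdk.sock of the nvmf_tgt started above.
RPC="scripts/rpc.py -s /var/tmp/spdk.sock"
NQN=nqn.2016-06.io.spdk:cnode0

$RPC nvmf_create_transport -t tcp -o -u 8192 -a 256     # TCP transport, options exactly as traced
$RPC bdev_malloc_create 64 4096 -b Malloc0              # 64 MiB RAM bdev, 4096-byte blocks
$RPC bdev_delay_create -b Malloc0 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000         # ~1 s added latency so aborts find queued I/O
$RPC nvmf_create_subsystem "$NQN" -a -s SPDK0           # allow any host, serial number SPDK0
$RPC nvmf_subsystem_add_ns "$NQN" Delay0
$RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# 128 outstanding I/Os for 1 second on core 0, then report how many were aborted
build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -c 0x1 -t 1 -l warning -q 128
```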
00:37:06.556 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 22819 00:37:06.556 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 22876, failed to submit 66 00:37:06.556 success 22819, unsuccessful 57, failed 0 00:37:06.556 09:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:06.556 09:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:06.556 09:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:06.556 09:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:06.556 09:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:37:06.556 09:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:37:06.556 09:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:06.556 09:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:37:06.556 09:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:06.556 09:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:37:06.556 09:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:06.556 09:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:06.556 rmmod nvme_tcp 00:37:06.556 rmmod nvme_fabrics 00:37:06.556 rmmod nvme_keyring 00:37:06.557 09:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:06.557 09:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:37:06.557 09:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:37:06.557 09:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 3145017 ']' 00:37:06.557 09:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 3145017 00:37:06.557 09:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 3145017 ']' 00:37:06.557 09:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 3145017 00:37:06.557 09:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:37:06.557 09:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:06.557 09:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3145017 00:37:06.557 09:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:06.557 09:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:06.557 09:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3145017' 00:37:06.557 killing process with pid 3145017 
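The tail of the test is the usual nvmftestfini teardown: unload the NVMe/TCP initiator modules (the rmmod lines above), kill the nvmf_tgt started for this test, strip only the SPDK_NVMF-tagged iptables rules, and tear the namespace and addresses back down (continued in the next entries). Roughly, leaving out the harness's retry and xtrace plumbing:

```bash
# Rough shape of the cleanup path traced here; _remove_spdk_ns internals are not
# shown in the log, so the netns removal below is an assumption.
sync
modprobe -v -r nvme-tcp                        # produces the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid" 2>/dev/null # stop the nvmf_tgt started for this test
iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the harness-tagged rules
ip netns delete cvl_0_0_ns_spdk                # assumed equivalent of _remove_spdk_ns
ip -4 addr flush cvl_0_1
```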
00:37:06.557 09:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 3145017 00:37:06.557 09:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 3145017 00:37:07.492 09:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:07.492 09:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:07.492 09:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:07.492 09:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:37:07.492 09:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:37:07.492 09:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:07.492 09:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:37:07.492 09:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:07.492 09:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:07.492 09:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:07.492 09:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:07.492 09:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:10.024 09:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:10.024 00:37:10.024 real 0m9.360s 00:37:10.024 user 0m11.602s 00:37:10.024 sys 0m3.165s 00:37:10.024 09:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:10.024 09:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:10.024 ************************************ 00:37:10.024 END TEST nvmf_abort 00:37:10.024 ************************************ 00:37:10.024 09:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:37:10.024 09:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:10.024 09:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:10.024 09:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:10.024 ************************************ 00:37:10.024 START TEST nvmf_ns_hotplug_stress 00:37:10.024 ************************************ 00:37:10.024 09:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:37:10.024 * Looking for test storage... 
00:37:10.024 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:10.024 09:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:10.024 09:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:37:10.024 09:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:10.024 09:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:10.024 09:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:10.024 09:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:10.024 09:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:10.024 09:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:37:10.024 09:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:37:10.024 09:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:37:10.024 09:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:37:10.024 09:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:37:10.024 09:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:37:10.024 09:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:37:10.024 09:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:10.024 09:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:37:10.024 09:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:37:10.024 09:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:10.024 09:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:10.024 09:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:37:10.024 09:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:37:10.024 09:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:10.024 09:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:37:10.024 09:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:37:10.024 09:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:37:10.024 09:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:37:10.024 09:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:10.024 09:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:37:10.025 09:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:37:10.025 09:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:10.025 09:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:10.025 09:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:37:10.025 09:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:10.025 09:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:10.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:10.025 --rc genhtml_branch_coverage=1 00:37:10.025 --rc genhtml_function_coverage=1 00:37:10.025 --rc genhtml_legend=1 00:37:10.025 --rc geninfo_all_blocks=1 00:37:10.025 --rc geninfo_unexecuted_blocks=1 00:37:10.025 00:37:10.025 ' 00:37:10.025 09:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:10.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:10.025 --rc genhtml_branch_coverage=1 00:37:10.025 --rc genhtml_function_coverage=1 00:37:10.025 --rc genhtml_legend=1 00:37:10.025 --rc geninfo_all_blocks=1 00:37:10.025 --rc geninfo_unexecuted_blocks=1 00:37:10.025 00:37:10.025 ' 00:37:10.025 09:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:10.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:10.025 --rc genhtml_branch_coverage=1 00:37:10.025 --rc genhtml_function_coverage=1 00:37:10.025 --rc genhtml_legend=1 00:37:10.025 --rc geninfo_all_blocks=1 00:37:10.025 --rc geninfo_unexecuted_blocks=1 00:37:10.025 00:37:10.025 ' 00:37:10.025 09:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:10.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:10.025 --rc genhtml_branch_coverage=1 00:37:10.025 --rc genhtml_function_coverage=1 
00:37:10.025 --rc genhtml_legend=1 00:37:10.025 --rc geninfo_all_blocks=1 00:37:10.025 --rc geninfo_unexecuted_blocks=1 00:37:10.025 00:37:10.025 ' 00:37:10.025 09:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:10.025 09:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:37:10.025 09:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:10.025 09:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:10.025 09:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:10.025 09:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:10.025 09:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:10.025 09:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:10.025 09:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:10.025 09:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:10.025 09:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:10.025 09:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:10.025 09:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:10.025 09:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:10.025 09:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:10.025 09:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:10.025 09:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:10.025 09:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:10.025 09:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:10.025 09:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:37:10.025 09:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:10.025 09:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:10.025 09:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
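The lt 1.15 / cmp_versions trace above is the harness probing the installed lcov: it captures lcov --version, splits both version strings on '.', '-' and ':', and compares them field by field to decide which LCOV_OPTS / --rc coverage flags to export (the strings echoed around it). A simplified reconstruction of that comparison, not the scripts/common.sh source itself:

```bash
# Simplified version comparison in the spirit of the cmp_versions trace above:
# missing fields default to 0 and non-numeric fields are rejected outright.
cmp_versions() {
    local IFS=.-: op=$2 v ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        [[ $a =~ ^[0-9]+$ && $b =~ ^[0-9]+$ ]] || return 1
        ((a > b)) && { [[ $op == '>' ]]; return; }
        ((a < b)) && { [[ $op == '<' ]]; return; }
    done
    [[ $op == '>=' || $op == '<=' || $op == '==' ]]
}

cmp_versions 1.15 '<' 2 && echo "1.15 < 2"   # matches the trace: lt 1.15 2 returns 0
```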
00:37:10.025 09:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:10.025 09:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:10.025 09:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:10.025 09:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:37:10.025 09:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:10.025 09:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:37:10.025 09:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:10.025 09:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:10.025 09:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:10.025 09:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:10.025 09:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:10.025 09:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:10.025 09:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:10.025 09:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:10.025 09:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:10.025 09:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:10.025 09:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:10.025 09:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:37:10.025 09:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:10.025 09:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:10.025 09:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:10.025 09:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:10.025 09:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:10.025 09:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:10.025 09:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:10.025 09:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:10.025 09:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:10.025 09:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:10.025 09:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:37:10.025 09:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:37:11.928 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:11.928 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:37:11.928 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:11.928 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:11.928 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:11.928 09:37:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:11.928 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:11.928 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:37:11.928 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:11.928 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:37:11.928 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:37:11.928 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:37:11.928 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:37:11.928 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:37:11.928 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:37:11.928 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:11.928 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:11.928 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:11.928 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:11.928 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:11.928 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:11.928 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:11.928 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:11.928 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:11.928 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:11.928 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:11.928 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:11.928 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:11.928 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:11.928 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:11.928 09:37:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:11.928 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:11.928 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:11.928 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:11.928 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:11.928 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:11.928 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:11.928 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:11.928 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:11.928 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:11.928 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:11.928 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:11.928 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:11.928 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:11.928 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:11.928 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:11.928 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:11.928 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:11.928 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:11.928 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:11.928 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:11.928 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:11.928 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:11.928 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:11.928 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:11.928 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:11.928 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:11.928 
09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:11.928 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:11.928 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:11.928 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:11.928 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:11.928 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:11.928 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:11.928 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:11.928 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:11.928 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:11.928 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:11.928 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:11.928 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:11.928 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:11.928 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:11.928 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:11.928 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:37:11.928 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:11.928 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:11.928 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:11.928 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:11.928 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:11.928 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:11.928 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:11.928 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:11.928 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:11.928 09:37:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:11.928 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:11.928 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:11.928 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:11.928 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:11.928 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:11.928 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:11.928 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:11.929 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:11.929 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:11.929 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:11.929 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:11.929 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:11.929 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:11.929 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:11.929 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:11.929 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:11.929 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:11.929 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:37:11.929 00:37:11.929 --- 10.0.0.2 ping statistics --- 00:37:11.929 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:11.929 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:37:11.929 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:11.929 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:11.929 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:37:11.929 00:37:11.929 --- 10.0.0.1 ping statistics --- 00:37:11.929 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:11.929 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:37:11.929 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:11.929 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:37:11.929 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:11.929 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:11.929 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:11.929 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:11.929 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:11.929 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:11.929 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:11.929 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:37:11.929 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:11.929 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:11.929 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:37:11.929 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=3147497 00:37:11.929 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:37:11.929 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 3147497 00:37:11.929 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 3147497 ']' 00:37:11.929 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:11.929 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:11.929 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:11.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
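The network bring-up traced above is nvmf_tcp_init from nvmf/common.sh: one E810 port (cvl_0_0) is moved into a private network namespace and becomes the target side, the other port (cvl_0_1) stays in the host namespace as the initiator side, an iptables rule opens TCP port 4420, and a ping in each direction confirms connectivity. A minimal sketch of that bring-up, reusing the interface names, addresses and namespace name reported by this run (the harness additionally tags the iptables rule with an SPDK_NVMF comment, omitted here):

    ip netns add cvl_0_0_ns_spdk                                       # private namespace for the target port
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # move the target-side port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator IP in the host namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                                 # host -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target namespace -> host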
00:37:11.929 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:11.929 09:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:37:11.929 [2024-11-17 09:37:16.812613] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:11.929 [2024-11-17 09:37:16.815350] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:37:11.929 [2024-11-17 09:37:16.815473] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:12.187 [2024-11-17 09:37:16.972977] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:12.187 [2024-11-17 09:37:17.112784] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:12.187 [2024-11-17 09:37:17.112859] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:12.187 [2024-11-17 09:37:17.112888] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:12.187 [2024-11-17 09:37:17.112910] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:12.187 [2024-11-17 09:37:17.112935] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:12.187 [2024-11-17 09:37:17.115592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:12.187 [2024-11-17 09:37:17.115692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:12.187 [2024-11-17 09:37:17.115700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:12.753 [2024-11-17 09:37:17.481183] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:12.753 [2024-11-17 09:37:17.482317] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:12.753 [2024-11-17 09:37:17.483139] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:12.753 [2024-11-17 09:37:17.483507] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
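At this point the harness has launched the target inside the namespace (ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE, i.e. core mask 0xE for cores 1-3 with interrupt mode enabled, which is why the reactors and spdk_threads above report intr mode) and is waiting for its RPC socket. A simplified sketch of the launch-and-wait pattern; the real waitforlisten in autotest_common.sh is more involved (retry cap, xtrace handling), and polling rpc_get_methods on the default socket /var/tmp/spdk.sock is only one way to stand in for it (paths abbreviated relative to the spdk checkout):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
    nvmfpid=$!
    # poll the RPC socket until the target answers, bail out if it died early
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.5
    done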
00:37:13.011 09:37:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:13.011 09:37:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:37:13.011 09:37:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:13.011 09:37:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:13.011 09:37:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:37:13.011 09:37:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:13.011 09:37:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:37:13.011 09:37:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:37:13.270 [2024-11-17 09:37:18.092828] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:13.270 09:37:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:37:13.528 09:37:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:13.786 [2024-11-17 09:37:18.657384] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:13.786 09:37:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:14.044 09:37:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:37:14.609 Malloc0 00:37:14.609 09:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:37:14.609 Delay0 00:37:14.866 09:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:15.124 09:37:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:37:15.382 NULL1 00:37:15.382 09:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
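The RPC calls traced above (ns_hotplug_stress.sh lines 27-36) build the test configuration: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 with a data listener and a discovery listener on 10.0.0.2:4420, a Malloc0 bdev wrapped in a Delay0 delay bdev, and a NULL1 null bdev, with Delay0 and NULL1 exposed as namespaces. Collected into one place, with rpc.py abbreviated (the trace uses its full workspace path) and all values taken from the trace itself:

    rpc=./scripts/rpc.py                                           # full path in the trace
    $rpc nvmf_create_transport -t tcp -o -u 8192                   # options taken verbatim from the trace
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10   # up to 10 namespaces
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_malloc_create 32 512 -b Malloc0                      # 32 MiB, 512-byte blocks
    $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000  # ~1 s injected latency
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    $rpc bdev_null_create NULL1 1000 512                           # 1000 MiB null bdev
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

The spdk_nvme_perf run that starts next (PERF_PID below) keeps random reads in flight against this subsystem while namespace 1 is repeatedly removed, re-added and resized (bdev_null_resize NULL1 1001, 1002, ...), which is the hot-plug stress being exercised.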
00:37:15.640 09:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3147931 00:37:15.640 09:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:37:15.640 09:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3147931 00:37:15.640 09:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:15.897 09:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:16.155 09:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:37:16.155 09:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:37:16.412 true 00:37:16.412 09:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3147931 00:37:16.412 09:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:16.669 09:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:16.927 09:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:37:16.927 09:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:37:17.493 true 00:37:17.493 09:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3147931 00:37:17.493 09:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:17.493 09:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:17.751 09:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:37:17.751 09:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:37:18.009 true 00:37:18.009 09:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@44 -- # kill -0 3147931 00:37:18.009 09:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:19.380 09:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:19.380 Read completed with error (sct=0, sc=11) 00:37:19.380 09:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:37:19.380 09:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:37:19.637 true 00:37:19.637 09:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3147931 00:37:19.637 09:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:19.895 09:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:20.153 09:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:37:20.153 09:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:37:20.411 true 00:37:20.411 09:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3147931 00:37:20.411 09:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:20.668 09:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:20.925 09:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:37:20.926 09:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:37:21.184 true 00:37:21.184 09:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3147931 00:37:21.184 09:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:22.116 09:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:22.116 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:22.374 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:22.374 09:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:37:22.374 09:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:37:22.631 true 00:37:22.632 09:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3147931 00:37:22.632 09:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:23.197 09:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:23.197 09:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:37:23.197 09:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:37:23.455 true 00:37:23.455 09:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3147931 00:37:23.455 09:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:24.392 09:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:24.392 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:24.652 09:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:37:24.652 09:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:37:24.910 true 00:37:24.910 09:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3147931 00:37:24.910 09:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:25.167 09:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:25.426 09:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:37:25.426 
09:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:37:25.684 true 00:37:25.684 09:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3147931 00:37:25.684 09:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:26.619 09:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:26.619 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:26.877 09:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:37:26.877 09:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:37:27.135 true 00:37:27.135 09:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3147931 00:37:27.135 09:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:27.393 09:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:27.651 09:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:37:27.651 09:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:37:27.909 true 00:37:27.909 09:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3147931 00:37:27.909 09:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:28.166 09:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:28.423 09:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:37:28.423 09:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:37:28.681 true 00:37:28.681 09:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3147931 00:37:28.681 09:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:29.614 09:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:29.872 09:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:37:29.872 09:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:37:30.129 true 00:37:30.129 09:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3147931 00:37:30.129 09:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:30.387 09:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:30.680 09:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:37:30.680 09:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:37:30.991 true 00:37:30.991 09:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3147931 00:37:30.991 09:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:31.248 09:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:31.506 09:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:37:31.506 09:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:37:31.764 true 00:37:31.764 09:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3147931 00:37:31.764 09:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:32.696 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:32.696 09:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:32.954 09:37:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:37:32.954 09:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:37:33.211 true 00:37:33.211 09:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3147931 00:37:33.211 09:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:33.469 09:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:33.727 09:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:37:33.727 09:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:37:33.984 true 00:37:34.241 09:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3147931 00:37:34.241 09:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:34.499 09:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:34.756 09:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:37:34.756 09:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:37:35.013 true 00:37:35.013 09:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3147931 00:37:35.013 09:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:35.946 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:35.946 09:37:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:35.946 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:35.946 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:36.202 09:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:37:36.202 09:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:37:36.460 true 00:37:36.460 09:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3147931 00:37:36.460 09:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:36.717 09:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:36.975 09:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:37:36.975 09:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:37:37.232 true 00:37:37.232 09:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3147931 00:37:37.232 09:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:38.165 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:38.165 09:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:38.422 09:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:37:38.422 09:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:37:38.680 true 00:37:38.680 09:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3147931 00:37:38.680 09:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:38.938 09:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:39.196 09:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:37:39.196 09:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:37:39.454 true 00:37:39.454 09:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3147931 00:37:39.454 09:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:40.388 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:40.388 09:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:40.388 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:40.646 09:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:37:40.646 09:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:37:40.903 true 00:37:40.903 09:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3147931 00:37:40.903 09:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:41.161 09:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:41.418 09:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:37:41.418 09:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:37:41.676 true 00:37:41.676 09:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3147931 00:37:41.676 09:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:41.933 09:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:42.191 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:37:42.191 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:37:42.449 true 00:37:42.449 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3147931 00:37:42.449 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:43.382 09:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:43.640 09:37:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:37:43.640 09:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:37:43.897 true 00:37:43.897 09:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3147931 00:37:43.897 09:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:44.155 09:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:44.412 09:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:37:44.412 09:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:37:44.668 true 00:37:44.668 09:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3147931 00:37:44.668 09:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:44.926 09:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:45.491 09:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:37:45.491 09:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:37:45.492 true 00:37:45.492 09:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3147931 00:37:45.492 09:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:46.425 Initializing NVMe Controllers 00:37:46.425 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:46.425 Controller IO queue size 128, less than required. 00:37:46.425 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:37:46.425 Controller IO queue size 128, less than required. 00:37:46.425 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:37:46.425 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:37:46.425 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:37:46.425 Initialization complete. Launching workers. 
00:37:46.425 ======================================================== 00:37:46.425 Latency(us) 00:37:46.425 Device Information : IOPS MiB/s Average min max 00:37:46.425 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 422.73 0.21 123412.71 4098.68 1067229.71 00:37:46.425 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 6388.03 3.12 19974.39 3040.79 480178.50 00:37:46.425 ======================================================== 00:37:46.425 Total : 6810.77 3.33 26394.64 3040.79 1067229.71 00:37:46.425 00:37:46.682 09:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:46.940 09:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:37:46.940 09:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:37:47.198 true 00:37:47.198 09:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3147931 00:37:47.198 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3147931) - No such process 00:37:47.198 09:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3147931 00:37:47.198 09:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:47.456 09:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:47.714 09:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:37:47.714 09:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:37:47.714 09:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:37:47.714 09:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:47.714 09:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:37:47.972 null0 00:37:47.972 09:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:47.972 09:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:47.972 09:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:37:48.230 null1 00:37:48.230 09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:48.230 
09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:48.230 09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:37:48.489 null2 00:37:48.489 09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:48.489 09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:48.489 09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:37:48.747 null3 00:37:48.747 09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:48.747 09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:48.747 09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:37:49.006 null4 00:37:49.006 09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:49.006 09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:49.006 09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:37:49.264 null5 00:37:49.264 09:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:49.264 09:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:49.264 09:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:37:49.522 null6 00:37:49.522 09:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:49.522 09:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:49.522 09:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:37:49.780 null7 00:37:49.780 09:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:49.780 09:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:49.780 09:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:37:49.780 09:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:49.780 09:37:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:37:49.780 09:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:37:49.780 09:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:49.780 09:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:37:49.780 09:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:49.780 09:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:49.780 09:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:49.780 09:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:49.780 09:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:37:49.780 09:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:49.781 09:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:37:49.781 09:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:49.781 09:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:37:49.781 09:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:49.781 09:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
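The interleaved xtrace above and below comes from eight concurrent invocations of the script's add_remove helper (ns_hotplug_stress.sh lines 14-18), each bound to one namespace id and one null bdev. Reconstructed from the fragments visible in this trace (loop bound of 10, add_ns on line 17, remove_ns on line 18); the body in the actual script may differ in detail:

    add_remove() {
        local nsid=$1 bdev=$2
        local i
        for ((i = 0; i < 10; i++)); do
            ./scripts/rpc.py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }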
00:37:49.781 09:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:37:49.781 09:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:49.781 09:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:49.781 09:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:37:49.781 09:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:49.781 09:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:49.781 09:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:49.781 09:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:49.781 09:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:49.781 09:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:37:49.781 09:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:37:49.781 09:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:49.781 09:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:49.781 09:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:37:49.781 09:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:49.781 09:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:49.781 09:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:49.781 09:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:37:49.781 09:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:37:49.781 09:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:49.781 09:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:37:49.781 09:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:49.781 09:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:49.781 09:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:49.781 09:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:49.781 09:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:37:49.781 09:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:49.781 09:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:37:49.781 09:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:49.781 09:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:37:49.781 09:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:49.781 09:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:49.781 09:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:49.781 09:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
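Around that helper, lines 58-66 of the script fan the work out: it first creates null0 through null7 (the bdev_null_create null$i 100 4096 calls traced further up), then launches one add_remove per bdev in the background, records the PIDs, and waits for all of them; the 'wait 3152000 3152001 ...' entry below is that final wait. A sketch of the pattern, assuming add_remove as reconstructed above:

    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do                 # create null0 .. null7 (100 MiB, 4 KiB blocks)
        ./scripts/rpc.py bdev_null_create "null$i" 100 4096
    done
    for ((i = 0; i < nthreads; i++)); do                 # one hotplug worker per namespace id
        add_remove "$((i + 1))" "null$i" &
        pids+=($!)
    done
    wait "${pids[@]}"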
00:37:49.781 09:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:37:49.781 09:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:49.781 09:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:37:49.781 09:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:49.781 09:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:49.781 09:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:49.781 09:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:49.781 09:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:37:49.781 09:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:37:49.781 09:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:49.781 09:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:37:49.781 09:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:49.781 09:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:49.781 09:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3152000 3152001 3152003 3152004 3152007 3152011 3152014 3152016 00:37:49.781 09:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:49.781 09:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:50.039 09:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:50.039 09:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:50.039 09:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:50.039 09:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:50.039 09:37:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:50.039 09:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:50.039 09:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:50.039 09:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:50.297 09:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:50.297 09:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:50.297 09:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:50.297 09:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:50.298 09:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:50.298 09:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:50.298 09:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:50.298 09:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:50.298 09:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:50.298 09:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:50.298 09:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:50.298 09:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:50.298 09:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:50.298 09:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:50.298 09:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 
nqn.2016-06.io.spdk:cnode1 null0 00:37:50.298 09:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:50.298 09:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:50.298 09:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:50.298 09:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:50.298 09:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:50.298 09:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:50.298 09:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:50.298 09:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:50.298 09:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:50.556 09:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:50.556 09:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:50.814 09:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:50.814 09:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:50.814 09:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:50.814 09:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:50.814 09:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:50.814 09:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 
00:37:51.072 09:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:51.072 09:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:51.072 09:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:51.072 09:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:51.072 09:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:51.072 09:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:51.072 09:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:51.072 09:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:51.072 09:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:51.072 09:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:51.072 09:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:51.072 09:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:51.072 09:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:51.072 09:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:51.072 09:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:51.072 09:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:51.072 09:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:51.072 09:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:51.072 09:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:51.072 09:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:51.072 09:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:51.072 09:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:51.072 09:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:51.072 09:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:51.331 09:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:51.331 09:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:51.331 09:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:51.331 09:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:51.331 09:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:51.331 09:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:51.331 09:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:51.331 09:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:51.589 09:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:51.589 09:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:51.589 09:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:51.589 09:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:51.589 09:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:51.589 09:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:51.589 09:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:51.589 09:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:51.589 09:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:51.589 09:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:51.589 09:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:51.589 09:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:51.589 09:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:51.589 09:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:51.589 09:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:51.589 09:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:51.589 09:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:51.589 09:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:51.589 09:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:51.589 09:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:51.590 09:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:51.590 09:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:51.590 09:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:51.590 09:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:51.848 09:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:51.848 09:37:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:51.848 09:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:51.848 09:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:51.848 09:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:51.848 09:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:51.848 09:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:51.848 09:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:52.106 09:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:52.106 09:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:52.106 09:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:52.106 09:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:52.106 09:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:52.106 09:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:52.106 09:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:52.106 09:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:52.106 09:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:52.106 09:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:52.106 09:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:52.106 09:37:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:52.106 09:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:52.106 09:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:52.106 09:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:52.106 09:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:52.106 09:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:52.106 09:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:52.106 09:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:52.106 09:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:52.106 09:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:52.106 09:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:52.106 09:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:52.106 09:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:52.365 09:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:52.365 09:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:52.365 09:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:52.365 09:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:52.365 09:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:52.365 
09:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:52.365 09:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:52.365 09:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:52.929 09:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:52.929 09:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:52.929 09:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:52.929 09:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:52.929 09:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:52.929 09:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:52.929 09:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:52.929 09:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:52.929 09:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:52.929 09:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:52.929 09:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:52.929 09:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:52.929 09:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:52.929 09:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:52.929 09:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:52.929 09:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:52.929 09:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:52.929 09:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:52.929 09:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:52.929 09:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:52.929 09:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:52.929 09:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:52.929 09:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:52.929 09:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:53.186 09:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:53.186 09:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:53.186 09:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:53.186 09:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:53.186 09:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:53.186 09:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:53.186 09:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:53.186 09:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:53.444 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:53.444 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:37:53.444 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:53.444 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:53.444 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:53.444 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:53.444 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:53.444 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:53.444 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:53.444 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:53.444 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:53.444 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:53.444 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:53.444 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:53.444 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:53.444 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:53.444 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:53.444 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:53.444 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:53.444 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:53.444 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:53.444 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:53.444 
09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:53.444 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:53.701 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:53.701 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:53.701 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:53.701 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:53.701 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:53.701 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:53.701 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:53.701 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:53.959 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:53.959 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:53.959 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:53.959 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:53.959 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:53.959 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:53.959 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:53.959 09:37:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:53.959 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:53.959 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:53.959 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:53.959 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:53.959 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:53.959 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:53.959 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:53.959 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:53.959 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:53.959 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:53.959 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:53.959 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:53.959 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:53.959 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:53.959 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:53.959 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:54.217 09:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:54.217 09:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:54.217 09:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:54.217 09:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:54.217 09:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:54.217 09:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:54.217 09:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:54.217 09:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:54.473 09:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:54.473 09:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:54.473 09:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:54.473 09:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:54.473 09:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:54.473 09:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:54.473 09:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:54.473 09:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:54.473 09:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:54.473 09:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:54.473 09:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:54.473 09:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:54.473 09:37:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:54.473 09:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:54.473 09:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:54.473 09:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:54.473 09:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:54.473 09:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:54.473 09:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:54.473 09:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:54.473 09:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:54.473 09:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:54.473 09:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:54.473 09:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:55.038 09:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:55.038 09:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:55.038 09:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:55.038 09:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:55.038 09:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:55.038 09:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:55.038 
09:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:55.038 09:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:55.296 09:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:55.296 09:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:55.296 09:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:55.296 09:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:55.296 09:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:55.296 09:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:55.296 09:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:55.296 09:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:55.296 09:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:55.296 09:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:55.296 09:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:55.296 09:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:55.296 09:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:55.296 09:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:55.296 09:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:55.296 09:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:55.296 09:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:55.296 09:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:55.296 09:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:55.296 09:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:55.296 09:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:55.296 09:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:55.296 09:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:55.296 09:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:55.554 09:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:55.554 09:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:55.554 09:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:55.554 09:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:55.554 09:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:55.554 09:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:55.554 09:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:55.554 09:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:55.812 09:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:55.812 09:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:55.812 09:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:55.812 09:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:55.812 09:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:55.812 09:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:55.812 09:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:55.812 09:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:55.812 09:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:55.812 09:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:55.812 09:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:55.812 09:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:55.812 09:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:55.812 09:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:55.812 09:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:55.812 09:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:55.812 09:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:37:55.812 09:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:37:55.812 09:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:55.812 09:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:37:55.812 09:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:55.812 09:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:37:55.812 09:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:55.812 09:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:55.812 rmmod nvme_tcp 00:37:55.812 rmmod nvme_fabrics 00:37:55.812 rmmod nvme_keyring 00:37:55.812 09:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:55.812 09:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:37:55.812 09:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:37:55.813 09:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 3147497 ']' 00:37:55.813 09:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 3147497 00:37:55.813 09:38:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 3147497 ']' 00:37:55.813 09:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 3147497 00:37:55.813 09:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:37:55.813 09:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:55.813 09:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3147497 00:37:55.813 09:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:55.813 09:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:55.813 09:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3147497' 00:37:55.813 killing process with pid 3147497 00:37:55.813 09:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 3147497 00:37:55.813 09:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 3147497 00:37:57.186 09:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:57.186 09:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:57.186 09:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:57.186 09:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:37:57.186 09:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:37:57.186 09:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:57.186 09:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:37:57.186 09:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:57.186 09:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:57.186 09:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:57.186 09:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:57.186 09:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:59.132 09:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:59.132 00:37:59.132 real 0m49.561s 00:37:59.132 user 3m21.477s 00:37:59.132 sys 0m22.905s 00:37:59.132 09:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:59.132 09:38:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:37:59.132 ************************************ 00:37:59.132 END TEST nvmf_ns_hotplug_stress 00:37:59.132 ************************************ 00:37:59.132 09:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:37:59.132 09:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:59.132 09:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:59.132 09:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:59.390 ************************************ 00:37:59.390 START TEST nvmf_delete_subsystem 00:37:59.390 ************************************ 00:37:59.391 09:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:37:59.391 * Looking for test storage... 00:37:59.391 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:59.391 09:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:59.391 09:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:37:59.391 09:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:59.391 09:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:59.391 09:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:59.391 09:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:59.391 09:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:59.391 09:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:37:59.391 09:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:37:59.391 09:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:37:59.391 09:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:37:59.391 09:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:37:59.391 09:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:37:59.391 09:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:37:59.391 09:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:59.391 09:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:37:59.391 09:38:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:37:59.391 09:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:59.391 09:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:59.391 09:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:37:59.391 09:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:37:59.391 09:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:59.391 09:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:37:59.391 09:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:37:59.391 09:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:37:59.391 09:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:37:59.391 09:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:59.391 09:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:37:59.391 09:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:37:59.391 09:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:59.391 09:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:59.391 09:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:37:59.391 09:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:59.391 09:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:59.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:59.391 --rc genhtml_branch_coverage=1 00:37:59.391 --rc genhtml_function_coverage=1 00:37:59.391 --rc genhtml_legend=1 00:37:59.391 --rc geninfo_all_blocks=1 00:37:59.391 --rc geninfo_unexecuted_blocks=1 00:37:59.391 00:37:59.391 ' 00:37:59.391 09:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:59.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:59.391 --rc genhtml_branch_coverage=1 00:37:59.391 --rc genhtml_function_coverage=1 00:37:59.391 --rc genhtml_legend=1 00:37:59.391 --rc geninfo_all_blocks=1 00:37:59.391 --rc geninfo_unexecuted_blocks=1 00:37:59.391 00:37:59.391 ' 00:37:59.391 09:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:59.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:59.391 --rc genhtml_branch_coverage=1 00:37:59.391 --rc genhtml_function_coverage=1 00:37:59.391 --rc genhtml_legend=1 00:37:59.391 --rc geninfo_all_blocks=1 00:37:59.391 --rc 
geninfo_unexecuted_blocks=1 00:37:59.391 00:37:59.391 ' 00:37:59.391 09:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:59.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:59.391 --rc genhtml_branch_coverage=1 00:37:59.391 --rc genhtml_function_coverage=1 00:37:59.391 --rc genhtml_legend=1 00:37:59.391 --rc geninfo_all_blocks=1 00:37:59.391 --rc geninfo_unexecuted_blocks=1 00:37:59.391 00:37:59.391 ' 00:37:59.391 09:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:59.391 09:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:37:59.391 09:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:59.391 09:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:59.391 09:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:59.391 09:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:59.391 09:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:59.391 09:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:59.391 09:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:59.391 09:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:59.391 09:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:59.391 09:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:59.391 09:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:59.391 09:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:59.391 09:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:59.391 09:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:59.391 09:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:59.391 09:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:59.391 09:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:59.391 09:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:37:59.391 09:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:37:59.391 09:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:59.391 09:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:59.391 09:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:59.391 09:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:59.391 09:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:59.391 09:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:37:59.391 09:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:59.392 09:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:37:59.392 09:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:59.392 09:38:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:59.392 09:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:59.392 09:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:59.392 09:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:59.392 09:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:59.392 09:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:59.392 09:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:59.392 09:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:59.392 09:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:59.392 09:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:37:59.392 09:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:59.392 09:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:59.392 09:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:59.392 09:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:59.392 09:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:59.392 09:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:59.392 09:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:59.392 09:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:59.392 09:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:59.392 09:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:59.392 09:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:37:59.392 09:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:01.923 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:01.923 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:38:01.923 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:01.923 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:01.923 09:38:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:01.923 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:01.923 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:01.923 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:38:01.923 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:01.923 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:38:01.923 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:38:01.923 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:38:01.923 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:38:01.923 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:38:01.923 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:38:01.923 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:01.923 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:01.923 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:01.923 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:01.923 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:01.923 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:01.923 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:01.923 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:01.923 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:01.923 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:01.923 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:01.923 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:01.923 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:01.923 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:01.923 09:38:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:01.923 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:01.923 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:01.923 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:01.923 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:01.923 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:01.923 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:01.923 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:01.923 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:01.923 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:01.923 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:01.923 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:01.923 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:01.923 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:38:01.923 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:38:01.923 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:01.924 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:01.924 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:01.924 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:01.924 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:01.924 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:01.924 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:01.924 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:01.924 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:01.924 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:01.924 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:01.924 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:01.924 09:38:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:01.924 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:01.924 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:01.924 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:38:01.924 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:01.924 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:01.924 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:01.924 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:01.924 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:01.924 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:01.924 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:01.924 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:01.924 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:01.924 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:01.924 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:01.924 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:01.924 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:01.924 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:38:01.924 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:01.924 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:01.924 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:01.924 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:01.924 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:01.924 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:01.924 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:01.924 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:01.924 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:38:01.924 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:01.924 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:01.924 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:01.924 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:01.924 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:01.924 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:01.924 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:01.924 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:01.924 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:01.924 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:01.924 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:01.924 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:01.924 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:01.924 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:01.924 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:01.924 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:01.924 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:01.924 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:01.924 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.214 ms 00:38:01.924 00:38:01.924 --- 10.0.0.2 ping statistics --- 00:38:01.924 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:01.924 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:38:01.924 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:01.924 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:01.924 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:38:01.924 00:38:01.924 --- 10.0.0.1 ping statistics --- 00:38:01.924 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:01.924 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:38:01.924 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:01.924 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:38:01.924 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:01.924 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:01.924 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:01.924 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:01.924 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:01.924 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:01.924 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:01.924 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:38:01.924 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:01.924 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:01.924 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:01.924 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=3154945 00:38:01.924 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:38:01.924 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 3154945 00:38:01.924 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 3154945 ']' 00:38:01.924 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:01.924 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:01.924 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:01.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
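Aside: the nvmfappstart trace above amounts to launching the target inside the test namespace and polling its RPC socket. A rough, hand-written equivalent is sketched below; the poll loop is only an approximation of the harness's waitforlisten helper, and the repository path is assumed from the workspace layout in this log.

  # launch the interrupt-mode target inside the namespace created above (assumed paths)
  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
  nvmfpid=$!
  # poll the default RPC socket (/var/tmp/spdk.sock) until the target answers,
  # which is roughly what waitforlisten does before the test proceeds
  until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done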
00:38:01.924 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:01.924 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:01.924 [2024-11-17 09:38:06.574778] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:01.924 [2024-11-17 09:38:06.577135] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:38:01.924 [2024-11-17 09:38:06.577226] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:01.924 [2024-11-17 09:38:06.723027] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:38:01.924 [2024-11-17 09:38:06.858769] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:01.924 [2024-11-17 09:38:06.858858] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:01.924 [2024-11-17 09:38:06.858888] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:01.925 [2024-11-17 09:38:06.858909] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:01.925 [2024-11-17 09:38:06.858940] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:01.925 [2024-11-17 09:38:06.861477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:01.925 [2024-11-17 09:38:06.861484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:02.493 [2024-11-17 09:38:07.225696] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:02.493 [2024-11-17 09:38:07.226443] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:02.493 [2024-11-17 09:38:07.226792] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
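For orientation, the delete_subsystem setup that the rpc_cmd trace below walks through condenses to the following RPC sequence (paths relative to the SPDK checkout; commands taken from the delete_subsystem.sh lines echoed in the trace):

  # transport, subsystem and TCP listener on 10.0.0.2:4420
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # a null bdev wrapped in a delay bdev, so that I/O stays queued long enough
  # to race the upcoming subsystem delete
  scripts/rpc.py bdev_null_create NULL1 1000 512
  scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0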
00:38:02.752 09:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:02.752 09:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:38:02.752 09:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:02.752 09:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:02.752 09:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:02.752 09:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:02.752 09:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:02.752 09:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:02.752 09:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:02.752 [2024-11-17 09:38:07.586564] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:02.752 09:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:02.752 09:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:38:02.752 09:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:02.752 09:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:02.752 09:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:02.752 09:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:02.752 09:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:02.752 09:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:02.752 [2024-11-17 09:38:07.606872] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:02.752 09:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:02.752 09:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:38:02.752 09:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:02.752 09:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:02.752 NULL1 00:38:02.752 09:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:02.752 09:38:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:38:02.752 09:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:02.752 09:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:02.752 Delay0 00:38:02.752 09:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:02.752 09:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:02.752 09:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:02.752 09:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:02.752 09:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:02.752 09:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3155098 00:38:02.752 09:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:38:02.752 09:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:38:02.752 [2024-11-17 09:38:07.733073] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
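The spdk_nvme_perf process started above (perf_pid in the trace) is what generates the long run of 'completed with error (sct=0, sc=8)' completions that follows: the script lets queued I/O build up against the delayed namespace and then deletes the subsystem underneath it. A minimal sketch of that race, assuming the target and namespace configured above are still listening on 10.0.0.2:4420:

  # queue deep random I/O at the Delay0-backed namespace for 5 seconds
  build/bin/spdk_nvme_perf -c 0xC \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!
  sleep 2   # let outstanding commands accumulate, as delete_subsystem.sh does
  # tear the subsystem down while I/O is still in flight; the in-flight commands
  # are completed with an error status, which is what the log records next
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  wait $perf_pid || true   # perf is expected to report the failed I/O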
00:38:04.651 09:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:04.651 09:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:04.651 09:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:04.910 Read completed with error (sct=0, sc=8) 00:38:04.910 Read completed with error (sct=0, sc=8) 00:38:04.910 Write completed with error (sct=0, sc=8) 00:38:04.910 Read completed with error (sct=0, sc=8) 00:38:04.910 starting I/O failed: -6 00:38:04.910 Write completed with error (sct=0, sc=8) 00:38:04.910 Write completed with error (sct=0, sc=8) 00:38:04.910 Read completed with error (sct=0, sc=8) 00:38:04.910 Read completed with error (sct=0, sc=8) 00:38:04.910 starting I/O failed: -6 00:38:04.910 Read completed with error (sct=0, sc=8) 00:38:04.910 Read completed with error (sct=0, sc=8) 00:38:04.910 Read completed with error (sct=0, sc=8) 00:38:04.910 Write completed with error (sct=0, sc=8) 00:38:04.910 starting I/O failed: -6 00:38:04.910 Read completed with error (sct=0, sc=8) 00:38:04.910 Read completed with error (sct=0, sc=8) 00:38:04.910 Write completed with error (sct=0, sc=8) 00:38:04.910 Write completed with error (sct=0, sc=8) 00:38:04.910 starting I/O failed: -6 00:38:04.910 Read completed with error (sct=0, sc=8) 00:38:04.910 Read completed with error (sct=0, sc=8) 00:38:04.910 Read completed with error (sct=0, sc=8) 00:38:04.910 Read completed with error (sct=0, sc=8) 00:38:04.910 starting I/O failed: -6 00:38:04.910 Read completed with error (sct=0, sc=8) 00:38:04.910 Read completed with error (sct=0, sc=8) 00:38:04.910 Write completed with error (sct=0, sc=8) 00:38:04.910 Read completed with error (sct=0, sc=8) 00:38:04.910 starting I/O failed: -6 00:38:04.910 Read completed with error (sct=0, sc=8) 00:38:04.910 Write completed with error (sct=0, sc=8) 00:38:04.910 Read completed with error (sct=0, sc=8) 00:38:04.910 Read completed with error (sct=0, sc=8) 00:38:04.910 starting I/O failed: -6 00:38:04.910 Read completed with error (sct=0, sc=8) 00:38:04.910 Write completed with error (sct=0, sc=8) 00:38:04.910 Write completed with error (sct=0, sc=8) 00:38:04.910 Read completed with error (sct=0, sc=8) 00:38:04.910 starting I/O failed: -6 00:38:04.910 Write completed with error (sct=0, sc=8) 00:38:04.910 Write completed with error (sct=0, sc=8) 00:38:04.910 Read completed with error (sct=0, sc=8) 00:38:04.910 Read completed with error (sct=0, sc=8) 00:38:04.910 starting I/O failed: -6 00:38:04.910 Write completed with error (sct=0, sc=8) 00:38:04.910 Read completed with error (sct=0, sc=8) 00:38:04.910 Read completed with error (sct=0, sc=8) 00:38:04.910 Read completed with error (sct=0, sc=8) 00:38:04.910 starting I/O failed: -6 00:38:04.910 Write completed with error (sct=0, sc=8) 00:38:04.910 Write completed with error (sct=0, sc=8) 00:38:04.910 Read completed with error (sct=0, sc=8) 00:38:04.910 [2024-11-17 09:38:09.839185] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016b00 is same with the state(6) to be set 00:38:04.910 Read completed with error (sct=0, sc=8) 00:38:04.910 Read completed with error (sct=0, sc=8) 00:38:04.910 Read completed with error (sct=0, sc=8) 00:38:04.910 Write completed with error (sct=0, sc=8) 00:38:04.910 Read completed with error (sct=0, sc=8) 
00:38:04.910 [several hundred individual I/O completions reported as 'Read completed with error (sct=0, sc=8)' / 'Write completed with error (sct=0, sc=8)', interleaved with repeated 'starting I/O failed: -6' markers, while the subsystem was torn down under load; the duplicated completion lines are condensed here and only the distinct nvme_tcp errors are kept]
00:38:04.911 [2024-11-17 09:38:09.840746] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001fe80 is same with the state(6) to be set
00:38:05.846 [2024-11-17 09:38:10.806719] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000015c00 is same with the state(6) to be set
00:38:05.846 [2024-11-17 09:38:10.843566] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016380 is same with the state(6) to be set
00:38:05.846 [2024-11-17 09:38:10.844432] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020600 is same with the state(6) to be set
00:38:05.847 [2024-11-17 09:38:10.845400] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020100 is same with the state(6) to be set
00:38:05.847 [2024-11-17 09:38:10.850270] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016880 is same with the state(6) to be set
00:38:05.847 09:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:05.847 09:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:38:05.847 09:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3155098
00:38:05.847 Initializing NVMe Controllers
00:38:05.847 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:38:05.847 Controller IO queue size 128, less than required.
00:38:05.847 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:38:05.847 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:38:05.847 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:38:05.847 Initialization complete. Launching workers.
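The burst of failed completions is the point of this test case: the subsystem is deleted out from under a running spdk_nvme_perf client (PID 3155098), and the script then polls until that client notices the dead connection and exits. A minimal sketch of the bounded polling pattern the @34/@35/@36/@38 trace markers suggest, assuming a perf_pid variable (the script's exact control flow may differ):

    # Wait (up to ~15 s at 0.5 s per step) for the perf client to go away.
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do   # still running?
        if (( delay++ > 30 )); then
            echo "perf client did not exit in time" >&2
            exit 1
        fi
        sleep 0.5
    done
    # Once kill -0 reports "No such process", the test reaps the PID with
    # 'NOT wait $perf_pid', since the client is expected to exit non-zero.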
00:38:05.847 ======================================================== 00:38:05.847 Latency(us) 00:38:05.847 Device Information : IOPS MiB/s Average min max 00:38:05.847 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 164.98 0.08 908551.89 912.22 1018712.12 00:38:05.847 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 176.38 0.09 883849.53 764.50 1015140.15 00:38:05.847 ======================================================== 00:38:05.847 Total : 341.36 0.17 895788.41 764.50 1018712.12 00:38:05.847 00:38:05.847 09:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:38:05.847 [2024-11-17 09:38:10.851920] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000015c00 (9): Bad file descriptor 00:38:05.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:38:06.414 09:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:38:06.414 09:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3155098 00:38:06.414 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3155098) - No such process 00:38:06.414 09:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3155098 00:38:06.414 09:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:38:06.414 09:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3155098 00:38:06.414 09:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:38:06.414 09:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:06.414 09:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:38:06.414 09:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:06.414 09:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 3155098 00:38:06.414 09:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:38:06.414 09:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:06.414 09:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:06.414 09:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:06.414 09:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:38:06.414 09:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:06.414 09:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:06.414 
09:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:06.414 09:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:06.414 09:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:06.414 09:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:06.414 [2024-11-17 09:38:11.378796] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:06.414 09:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:06.414 09:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:06.414 09:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:06.414 09:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:06.414 09:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:06.414 09:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3155504 00:38:06.414 09:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:38:06.414 09:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3155504 00:38:06.414 09:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:06.414 09:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:38:06.673 [2024-11-17 09:38:11.494574] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
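After the first client is gone, the trace shows the subsystem being rebuilt over RPC (nvmf_create_subsystem, nvmf_subsystem_add_listener, nvmf_subsystem_add_ns) and a second perf client being launched in the background. The rpc_cmd wrapper forwards to JSON-RPC methods of the same names; below is a hedged, standalone sketch of the equivalent sequence, where the rpc.py and binary paths are assumptions for illustration while the NQN, listener address and perf flags are the ones shown above:

    # Re-create the subsystem and reattach a perf client (sketch).
    RPC=./scripts/rpc.py                      # assumed path to SPDK's rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1

    $RPC nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_ns "$NQN" Delay0   # Delay0 bdev created earlier by the test

    # Launch the initiator-side workload in the background and keep its PID,
    # mirroring the perf_pid assignment at delete_subsystem.sh line 54.
    ./build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!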
00:38:06.932 09:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:06.932 09:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3155504 00:38:06.932 09:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:07.498 09:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:07.498 09:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3155504 00:38:07.498 09:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:08.064 09:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:08.064 09:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3155504 00:38:08.064 09:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:08.630 09:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:08.630 09:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3155504 00:38:08.630 09:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:09.195 09:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:09.195 09:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3155504 00:38:09.195 09:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:09.454 09:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:09.454 09:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3155504 00:38:09.454 09:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:09.712 Initializing NVMe Controllers 00:38:09.712 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:38:09.712 Controller IO queue size 128, less than required. 00:38:09.712 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:38:09.712 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:38:09.712 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:38:09.712 Initialization complete. Launching workers. 
00:38:09.712 ======================================================== 00:38:09.712 Latency(us) 00:38:09.712 Device Information : IOPS MiB/s Average min max 00:38:09.712 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1006482.56 1000212.51 1045757.73 00:38:09.712 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005595.73 1000258.46 1020001.56 00:38:09.712 ======================================================== 00:38:09.712 Total : 256.00 0.12 1006039.14 1000212.51 1045757.73 00:38:09.712 00:38:09.970 09:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:09.970 09:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3155504 00:38:09.970 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3155504) - No such process 00:38:09.970 09:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3155504 00:38:09.970 09:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:38:09.970 09:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:38:09.970 09:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:09.970 09:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:38:09.970 09:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:09.970 09:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:38:09.970 09:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:09.970 09:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:09.970 rmmod nvme_tcp 00:38:09.970 rmmod nvme_fabrics 00:38:09.970 rmmod nvme_keyring 00:38:09.970 09:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:09.970 09:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:38:09.970 09:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:38:09.970 09:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 3154945 ']' 00:38:09.970 09:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 3154945 00:38:09.970 09:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 3154945 ']' 00:38:09.970 09:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 3154945 00:38:09.970 09:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:38:09.970 09:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:09.970 09:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3154945 00:38:10.228 09:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:10.228 09:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:10.228 09:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3154945' 00:38:10.228 killing process with pid 3154945 00:38:10.228 09:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 3154945 00:38:10.228 09:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 3154945 00:38:11.163 09:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:11.163 09:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:11.163 09:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:11.163 09:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:38:11.163 09:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:38:11.163 09:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:11.163 09:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:38:11.421 09:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:11.421 09:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:11.421 09:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:11.421 09:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:11.421 09:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:13.321 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:13.321 00:38:13.321 real 0m14.050s 00:38:13.321 user 0m26.172s 00:38:13.321 sys 0m3.958s 00:38:13.321 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:13.321 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:13.321 ************************************ 00:38:13.321 END TEST nvmf_delete_subsystem 00:38:13.321 ************************************ 00:38:13.321 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:38:13.321 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:13.321 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:38:13.321 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:13.321 ************************************ 00:38:13.321 START TEST nvmf_host_management 00:38:13.321 ************************************ 00:38:13.321 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:38:13.321 * Looking for test storage... 00:38:13.321 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:13.321 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:13.321 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:38:13.321 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:13.580 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:13.580 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:13.580 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:13.580 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:13.580 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:38:13.580 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:38:13.580 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:38:13.580 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:38:13.580 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:38:13.580 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:38:13.580 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:38:13.580 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:13.580 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:38:13.580 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:38:13.580 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:13.580 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:13.580 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:38:13.580 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:38:13.580 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:13.580 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:38:13.580 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:38:13.580 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:38:13.580 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:38:13.580 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:13.580 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:38:13.580 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:38:13.580 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:13.580 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:13.580 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:38:13.580 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:13.580 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:13.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:13.580 --rc genhtml_branch_coverage=1 00:38:13.580 --rc genhtml_function_coverage=1 00:38:13.580 --rc genhtml_legend=1 00:38:13.580 --rc geninfo_all_blocks=1 00:38:13.580 --rc geninfo_unexecuted_blocks=1 00:38:13.580 00:38:13.580 ' 00:38:13.580 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:13.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:13.580 --rc genhtml_branch_coverage=1 00:38:13.580 --rc genhtml_function_coverage=1 00:38:13.580 --rc genhtml_legend=1 00:38:13.580 --rc geninfo_all_blocks=1 00:38:13.580 --rc geninfo_unexecuted_blocks=1 00:38:13.580 00:38:13.580 ' 00:38:13.580 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:13.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:13.580 --rc genhtml_branch_coverage=1 00:38:13.580 --rc genhtml_function_coverage=1 00:38:13.580 --rc genhtml_legend=1 00:38:13.580 --rc geninfo_all_blocks=1 00:38:13.580 --rc geninfo_unexecuted_blocks=1 00:38:13.580 00:38:13.580 ' 00:38:13.580 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:13.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:13.580 --rc genhtml_branch_coverage=1 00:38:13.580 --rc genhtml_function_coverage=1 00:38:13.580 --rc genhtml_legend=1 
00:38:13.580 --rc geninfo_all_blocks=1 00:38:13.580 --rc geninfo_unexecuted_blocks=1 00:38:13.580 00:38:13.580 ' 00:38:13.580 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:13.580 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:38:13.580 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:13.580 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:13.580 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:13.580 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:13.580 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:13.580 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:13.580 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:13.580 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:13.580 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:13.580 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:13.580 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:13.580 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:13.580 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:13.580 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:13.580 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:13.580 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:13.580 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:13.580 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:38:13.580 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:13.580 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:13.580 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:13.580 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:13.580 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:13.580 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:13.580 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:38:13.580 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:13.581 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:38:13.581 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:13.581 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:13.581 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:13.581 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:13.581 09:38:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:13.581 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:13.581 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:13.581 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:13.581 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:13.581 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:13.581 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:13.581 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:13.581 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:38:13.581 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:13.581 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:13.581 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:13.581 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:13.581 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:13.581 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:13.581 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:13.581 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:13.581 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:13.581 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:13.581 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:38:13.581 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:15.480 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:15.480 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:38:15.480 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:15.480 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:15.480 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:15.480 09:38:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:15.480 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:15.480 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:38:15.480 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:15.480 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:38:15.480 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:38:15.480 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:38:15.480 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:38:15.480 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:38:15.480 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:38:15.480 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:15.480 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:15.480 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:15.480 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:15.480 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:15.480 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:15.480 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:15.480 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:15.480 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:15.480 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:15.480 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:15.480 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:15.480 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:15.480 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:15.480 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:15.480 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:15.480 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:15.480 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:15.480 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:15.480 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:15.480 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:15.480 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:15.480 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:15.480 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:15.480 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:15.480 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:15.480 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:15.480 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:38:15.480 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:38:15.480 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:15.480 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:15.480 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:15.480 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:15.480 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:15.480 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:15.480 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:15.480 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:15.480 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:15.480 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:15.480 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:15.480 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:15.480 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:15.480 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 
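Before the host_management test can run, nvmftestinit works out which NICs it may use: it walks the PCI bus for supported vendor/device IDs (here both functions of an Intel E810, 0x8086:0x159b) and resolves each function to its kernel net device through sysfs, branching on whether the transport is RDMA or TCP. A simplified sketch of that discovery step using sysfs only; the real gather_supported_nvmf_pci_devs helper in test/nvmf/common.sh also knows about x722 and Mellanox IDs and performs extra checks that are omitted here:

    # Find E810 (0x8086:0x159b) PCI functions and print their net devices,
    # approximating the "Found ..." lines in the trace above.
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(cat "$pci/vendor")            # e.g. 0x8086
        device=$(cat "$pci/device")            # e.g. 0x159b
        [[ $vendor == 0x8086 && $device == 0x159b ]] || continue
        echo "Found ${pci##*/} ($vendor - $device)"
        for net in "$pci"/net/*; do
            [[ -e $net ]] || continue
            echo "Found net devices under ${pci##*/}: ${net##*/}"
        done
    done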
00:38:15.480 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:15.480 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:38:15.480 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:15.480 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:15.480 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:15.480 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:15.480 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:15.480 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:15.480 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:15.480 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:15.480 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:15.480 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:15.480 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:15.480 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:15.480 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:15.480 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:38:15.480 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:15.480 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:15.480 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:15.480 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:15.480 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:15.480 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:15.480 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:15.480 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:15.480 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:15.480 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:15.480 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:15.480 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:15.480 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:15.480 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:15.480 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:15.480 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:15.480 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:15.480 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:15.737 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:15.738 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:15.738 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:15.738 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:15.738 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:15.738 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:15.738 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:15.738 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:15.738 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:15.738 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.278 ms 00:38:15.738 00:38:15.738 --- 10.0.0.2 ping statistics --- 00:38:15.738 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:15.738 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:38:15.738 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:15.738 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:15.738 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:38:15.738 00:38:15.738 --- 10.0.0.1 ping statistics --- 00:38:15.738 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:15.738 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:38:15.738 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:15.738 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:38:15.738 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:15.738 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:15.738 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:15.738 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:15.738 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:15.738 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:15.738 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:15.738 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:38:15.738 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:38:15.738 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:38:15.738 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:15.738 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:15.738 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:15.738 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=3157966 00:38:15.738 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:38:15.738 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 3157966 00:38:15.738 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3157966 ']' 00:38:15.738 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:15.738 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:15.738 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:38:15.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:15.738 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:15.738 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:15.738 [2024-11-17 09:38:20.728517] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:15.738 [2024-11-17 09:38:20.731207] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:38:15.738 [2024-11-17 09:38:20.731323] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:15.996 [2024-11-17 09:38:20.888429] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:16.254 [2024-11-17 09:38:21.028487] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:16.254 [2024-11-17 09:38:21.028554] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:16.254 [2024-11-17 09:38:21.028590] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:16.254 [2024-11-17 09:38:21.028612] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:16.254 [2024-11-17 09:38:21.028634] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:16.254 [2024-11-17 09:38:21.031430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:16.254 [2024-11-17 09:38:21.031464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:38:16.254 [2024-11-17 09:38:21.031500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:38:16.254 [2024-11-17 09:38:21.031489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:16.513 [2024-11-17 09:38:21.394051] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:16.513 [2024-11-17 09:38:21.406690] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:38:16.513 [2024-11-17 09:38:21.406905] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:38:16.513 [2024-11-17 09:38:21.407737] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:16.513 [2024-11-17 09:38:21.408110] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
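The trace above (nvmf/common.sh@262-291 and @502-510) builds the test topology: one port of the cvl_0_0/cvl_0_1 pair is moved into a network namespace to act as the target side, addresses are assigned, an iptables rule opens port 4420, reachability is ping-checked, nvme-tcp is loaded, and nvmf_tgt is started inside the namespace in interrupt mode. A condensed sketch of that sequence follows; the interface names, addresses, and core mask are copied from this run, while the relative nvmf_tgt path is an assumption for illustration.

#!/usr/bin/env bash
# Namespace-based NVMe/TCP topology, condensed from the trace above.
set -ex
NS=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"          # target-side port lives in the namespace

ip addr add 10.0.0.1/24 dev cvl_0_1      # initiator side, default namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0

ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up

# Allow NVMe/TCP traffic in and confirm both directions work.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

modprobe nvme-tcp

# Start the target inside the namespace in interrupt mode (binary path assumed).
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E &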
00:38:16.772 09:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:16.772 09:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:38:16.772 09:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:16.772 09:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:16.772 09:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:16.772 09:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:16.772 09:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:16.772 09:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:16.772 09:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:16.772 [2024-11-17 09:38:21.708695] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:16.772 09:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:16.772 09:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:38:16.772 09:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:16.772 09:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:16.772 09:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:38:16.772 09:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:38:16.772 09:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:38:16.772 09:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:16.772 09:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:17.030 Malloc0 00:38:17.030 [2024-11-17 09:38:21.828863] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:17.030 09:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:17.030 09:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:38:17.030 09:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:17.030 09:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:17.030 09:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3158146 00:38:17.030 09:38:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3158146 /var/tmp/bdevperf.sock 00:38:17.030 09:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3158146 ']' 00:38:17.030 09:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:38:17.030 09:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:38:17.030 09:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:38:17.030 09:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:17.030 09:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:38:17.030 09:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:38:17.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:38:17.030 09:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:38:17.030 09:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:17.030 09:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:17.030 09:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:17.030 09:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:17.030 { 00:38:17.030 "params": { 00:38:17.030 "name": "Nvme$subsystem", 00:38:17.030 "trtype": "$TEST_TRANSPORT", 00:38:17.030 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:17.030 "adrfam": "ipv4", 00:38:17.030 "trsvcid": "$NVMF_PORT", 00:38:17.030 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:17.030 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:17.030 "hdgst": ${hdgst:-false}, 00:38:17.030 "ddgst": ${ddgst:-false} 00:38:17.030 }, 00:38:17.030 "method": "bdev_nvme_attach_controller" 00:38:17.030 } 00:38:17.030 EOF 00:38:17.030 )") 00:38:17.030 09:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:38:17.030 09:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
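gen_nvmf_target_json, traced above, renders one bdev_nvme_attach_controller entry per controller and bdevperf reads it through --json /dev/fd/63. Only the inner params block is printed in the log (next lines), so the sketch below wraps it in the standard SPDK application "subsystems"/"config" envelope as an assumption, and writes it to a temporary file instead of a process substitution.

#!/usr/bin/env bash
# Approximate bdevperf JSON config for this run; the outer envelope is the
# usual SPDK app JSON layout and is assumed, the values come from the log.
cat > /tmp/bdevperf_nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# Same invocation shape as host_management.sh@72, reading a file instead of /dev/fd/63.
./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json /tmp/bdevperf_nvme0.json -q 64 -o 65536 -w verify -t 10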
00:38:17.030 09:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:38:17.030 09:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:17.030 "params": { 00:38:17.030 "name": "Nvme0", 00:38:17.030 "trtype": "tcp", 00:38:17.030 "traddr": "10.0.0.2", 00:38:17.030 "adrfam": "ipv4", 00:38:17.030 "trsvcid": "4420", 00:38:17.030 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:17.030 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:17.030 "hdgst": false, 00:38:17.030 "ddgst": false 00:38:17.030 }, 00:38:17.030 "method": "bdev_nvme_attach_controller" 00:38:17.030 }' 00:38:17.030 [2024-11-17 09:38:21.946918] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:38:17.030 [2024-11-17 09:38:21.947060] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3158146 ] 00:38:17.288 [2024-11-17 09:38:22.089386] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:17.288 [2024-11-17 09:38:22.218905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:17.855 Running I/O for 10 seconds... 00:38:18.114 09:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:18.114 09:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:38:18.114 09:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:38:18.114 09:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:18.114 09:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:18.114 09:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:18.114 09:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:18.114 09:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:38:18.114 09:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:38:18.114 09:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:38:18.114 09:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:38:18.114 09:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:38:18.114 09:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:38:18.115 09:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:38:18.115 09:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:38:18.115 09:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:38:18.115 09:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:18.115 09:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:18.115 09:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:18.115 09:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=378 00:38:18.115 09:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 378 -ge 100 ']' 00:38:18.115 09:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:38:18.115 09:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:38:18.115 09:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:38:18.115 09:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:38:18.115 09:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:18.115 09:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:18.115 [2024-11-17 09:38:22.973095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:57088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:18.115 [2024-11-17 09:38:22.973162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:18.115 [2024-11-17 09:38:22.973216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:57216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:18.115 [2024-11-17 09:38:22.973241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:18.115 [2024-11-17 09:38:22.973280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:18.115 [2024-11-17 09:38:22.973305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:18.115 [2024-11-17 09:38:22.973332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:49280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:18.115 [2024-11-17 09:38:22.973364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:18.115 [2024-11-17 09:38:22.973401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:49408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:18.115 [2024-11-17 09:38:22.973424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:18.115 [2024-11-17 09:38:22.973449] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:49536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:18.115 [2024-11-17 09:38:22.973472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:18.115 [2024-11-17 09:38:22.973496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:49664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:18.115 [2024-11-17 09:38:22.973519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:18.115 [2024-11-17 09:38:22.973563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:49792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:18.115 [2024-11-17 09:38:22.973586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:18.115 [2024-11-17 09:38:22.973610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:49920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:18.115 [2024-11-17 09:38:22.973631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:18.115 [2024-11-17 09:38:22.973664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:50048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:18.115 [2024-11-17 09:38:22.973686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:18.115 [2024-11-17 09:38:22.973710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:50176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:18.115 [2024-11-17 09:38:22.973734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:18.115 [2024-11-17 09:38:22.973758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:50304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:18.115 [2024-11-17 09:38:22.973779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:18.115 [2024-11-17 09:38:22.973803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:50432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:18.115 [2024-11-17 09:38:22.973825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:18.115 [2024-11-17 09:38:22.973849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:50560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:18.115 [2024-11-17 09:38:22.973871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:18.115 [2024-11-17 09:38:22.973895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:50688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:18.115 [2024-11-17 09:38:22.973922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:18.115 [2024-11-17 09:38:22.973948] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:50816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:18.115 [2024-11-17 09:38:22.973969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:18.115 [2024-11-17 09:38:22.973994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:50944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:18.115 [2024-11-17 09:38:22.974015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:18.115 [2024-11-17 09:38:22.974040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:51072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:18.115 [2024-11-17 09:38:22.974061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:18.115 [2024-11-17 09:38:22.974086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:51200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:18.115 [2024-11-17 09:38:22.974108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:18.115 [2024-11-17 09:38:22.974147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:51328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:18.115 [2024-11-17 09:38:22.974169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:18.115 [2024-11-17 09:38:22.974193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:51456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:18.115 [2024-11-17 09:38:22.974223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:18.115 [2024-11-17 09:38:22.974247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:51584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:18.115 [2024-11-17 09:38:22.974269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:18.115 [2024-11-17 09:38:22.974292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:51712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:18.115 [2024-11-17 09:38:22.974313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:18.115 [2024-11-17 09:38:22.974337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:51840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:18.115 [2024-11-17 09:38:22.974388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:18.115 [2024-11-17 09:38:22.974417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:51968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:18.115 [2024-11-17 09:38:22.974439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:18.115 [2024-11-17 09:38:22.974464] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:52096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:18.115 [2024-11-17 09:38:22.974485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:18.115 [2024-11-17 09:38:22.974509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:52224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:18.115 [2024-11-17 09:38:22.974532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:18.115 [2024-11-17 09:38:22.974562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:52352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:18.115 [2024-11-17 09:38:22.974584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:18.115 [2024-11-17 09:38:22.974609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:52480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:18.115 [2024-11-17 09:38:22.974631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:18.115 [2024-11-17 09:38:22.974656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:52608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:18.115 [2024-11-17 09:38:22.974704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:18.115 [2024-11-17 09:38:22.974728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:52736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:18.115 [2024-11-17 09:38:22.974749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:18.115 [2024-11-17 09:38:22.974773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:52864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:18.115 [2024-11-17 09:38:22.974794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:18.115 [2024-11-17 09:38:22.974818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:52992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:18.115 [2024-11-17 09:38:22.974851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:18.116 [2024-11-17 09:38:22.974874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:53120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:18.116 [2024-11-17 09:38:22.974895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:18.116 [2024-11-17 09:38:22.974919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:53248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:18.116 [2024-11-17 09:38:22.974940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:18.116 [2024-11-17 09:38:22.974963] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:53376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:18.116 [2024-11-17 09:38:22.974985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:18.116 [2024-11-17 09:38:22.975009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:53504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:18.116 [2024-11-17 09:38:22.975030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:18.116 [2024-11-17 09:38:22.975054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:53632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:18.116 [2024-11-17 09:38:22.975075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:18.116 [2024-11-17 09:38:22.975098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:53760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:18.116 [2024-11-17 09:38:22.975119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:18.116 [2024-11-17 09:38:22.975154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:53888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:18.116 [2024-11-17 09:38:22.975179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:18.116 [2024-11-17 09:38:22.975204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:54016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:18.116 [2024-11-17 09:38:22.975225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:18.116 [2024-11-17 09:38:22.975249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:54144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:18.116 [2024-11-17 09:38:22.975270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:18.116 [2024-11-17 09:38:22.975293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:54272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:18.116 [2024-11-17 09:38:22.975314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:18.116 [2024-11-17 09:38:22.975338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:54400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:18.116 [2024-11-17 09:38:22.975386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:18.116 [2024-11-17 09:38:22.975413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:54528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:18.116 [2024-11-17 09:38:22.975435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:18.116 [2024-11-17 09:38:22.975460] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:54656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:18.116 [2024-11-17 09:38:22.975482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:18.116 [2024-11-17 09:38:22.975506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:54784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:18.116 [2024-11-17 09:38:22.975529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:18.116 [2024-11-17 09:38:22.975554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:54912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:18.116 [2024-11-17 09:38:22.975575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:18.116 [2024-11-17 09:38:22.975600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:55040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:18.116 [2024-11-17 09:38:22.975622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:18.116 [2024-11-17 09:38:22.975646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:55168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:18.116 [2024-11-17 09:38:22.975695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:18.116 [2024-11-17 09:38:22.975720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:55296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:18.116 [2024-11-17 09:38:22.975741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:18.116 [2024-11-17 09:38:22.975775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:55424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:18.116 [2024-11-17 09:38:22.975796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:18.116 [2024-11-17 09:38:22.975824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:55552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:18.116 [2024-11-17 09:38:22.975846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:18.116 [2024-11-17 09:38:22.975869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:55680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:18.116 [2024-11-17 09:38:22.975890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:18.116 [2024-11-17 09:38:22.975914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:55808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:18.116 [2024-11-17 09:38:22.975935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:18.116 [2024-11-17 09:38:22.975958] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:55936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:18.116 [2024-11-17 09:38:22.975979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:18.116 [2024-11-17 09:38:22.976002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:56064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:18.116 [2024-11-17 09:38:22.976022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:18.116 [2024-11-17 09:38:22.976045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:56192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:18.116 [2024-11-17 09:38:22.976066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:18.116 [2024-11-17 09:38:22.976097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:56320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:18.116 [2024-11-17 09:38:22.976119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:18.116 [2024-11-17 09:38:22.976141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:56448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:18.116 [2024-11-17 09:38:22.976162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:18.116 [2024-11-17 09:38:22.976185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:56576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:18.116 [2024-11-17 09:38:22.976206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:18.116 [2024-11-17 09:38:22.976229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:56704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:18.116 [2024-11-17 09:38:22.976250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:18.116 [2024-11-17 09:38:22.976272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:56832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:18.116 [2024-11-17 09:38:22.976294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:18.116 [2024-11-17 09:38:22.976334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:56960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:18.116 [2024-11-17 09:38:22.976375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:18.116 [2024-11-17 09:38:22.976400] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2f00 is same with the state(6) to be set 00:38:18.116 [2024-11-17 09:38:22.976796] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:38:18.116 [2024-11-17 09:38:22.976827] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:18.116 [2024-11-17 09:38:22.976852] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:38:18.116 [2024-11-17 09:38:22.976873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:18.116 [2024-11-17 09:38:22.976894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:38:18.116 [2024-11-17 09:38:22.976915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:18.116 [2024-11-17 09:38:22.976936] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 ns 09:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:18.116 id:0 cdw10:00000000 cdw11:00000000 00:38:18.116 [2024-11-17 09:38:22.976961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:18.116 [2024-11-17 09:38:22.976981] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:38:18.116 09:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:38:18.116 09:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:18.116 09:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:18.116 [2024-11-17 09:38:22.978261] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:38:18.116 task offset: 57088 on job bdev=Nvme0n1 fails 00:38:18.116 00:38:18.116 Latency(us) 00:38:18.116 [2024-11-17T08:38:23.129Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:18.116 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:38:18.117 Job: Nvme0n1 ended in about 0.33 seconds with error 00:38:18.117 Verification LBA range: start 0x0 length 0x400 00:38:18.117 Nvme0n1 : 0.33 1176.11 73.51 196.02 0.00 44962.38 4563.25 42913.94 00:38:18.117 [2024-11-17T08:38:23.130Z] =================================================================================================================== 00:38:18.117 [2024-11-17T08:38:23.130Z] Total : 1176.11 73.51 196.02 0.00 44962.38 4563.25 42913.94 00:38:18.117 [2024-11-17 09:38:22.983306] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:38:18.117 [2024-11-17 09:38:22.983389] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:38:18.117 09:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:18.117 09:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:38:18.117 [2024-11-17 09:38:23.030443] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
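The long run of ABORTED - SQ DELETION completions above is the intended effect of host_management.sh@84: while bdevperf drives verify I/O, the host NQN is removed from the subsystem, the target tears down the qpair, bdevperf resets the controller, and @85 re-adds the host so the reset can finish ("Resetting controller successful"). A minimal reproduction of that remove/re-add cycle is sketched below; it calls scripts/rpc.py directly, which is an assumption in place of the suite's rpc_cmd wrapper.

#!/usr/bin/env bash
# Host-removal failover cycle, as exercised above (rpc.py assumed in place
# of the test suite's rpc_cmd wrapper; NQNs taken from this run).
set -ex
RPC=./scripts/rpc.py
SUBSYS=nqn.2016-06.io.spdk:cnode0
HOST=nqn.2016-06.io.spdk:host0

# With bdevperf running verify I/O against $SUBSYS, remove the host.
# In-flight commands complete as "ABORTED - SQ DELETION" and the initiator
# begins a controller reset.
$RPC nvmf_subsystem_remove_host "$SUBSYS" "$HOST"

# Re-admit the host so the reset can reconnect; bdevperf then reports
# "Resetting controller successful".
$RPC nvmf_subsystem_add_host "$SUBSYS" "$HOST"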
00:38:19.052 09:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3158146 00:38:19.052 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3158146) - No such process 00:38:19.052 09:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:38:19.052 09:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:38:19.052 09:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:38:19.052 09:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:38:19.052 09:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:38:19.052 09:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:38:19.052 09:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:19.052 09:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:19.052 { 00:38:19.052 "params": { 00:38:19.052 "name": "Nvme$subsystem", 00:38:19.052 "trtype": "$TEST_TRANSPORT", 00:38:19.052 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:19.052 "adrfam": "ipv4", 00:38:19.052 "trsvcid": "$NVMF_PORT", 00:38:19.052 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:19.052 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:19.052 "hdgst": ${hdgst:-false}, 00:38:19.052 "ddgst": ${ddgst:-false} 00:38:19.052 }, 00:38:19.052 "method": "bdev_nvme_attach_controller" 00:38:19.052 } 00:38:19.052 EOF 00:38:19.052 )") 00:38:19.052 09:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:38:19.052 09:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:38:19.052 09:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:38:19.052 09:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:19.052 "params": { 00:38:19.052 "name": "Nvme0", 00:38:19.052 "trtype": "tcp", 00:38:19.052 "traddr": "10.0.0.2", 00:38:19.052 "adrfam": "ipv4", 00:38:19.052 "trsvcid": "4420", 00:38:19.052 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:19.052 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:19.052 "hdgst": false, 00:38:19.052 "ddgst": false 00:38:19.052 }, 00:38:19.052 "method": "bdev_nvme_attach_controller" 00:38:19.052 }' 00:38:19.310 [2024-11-17 09:38:24.070441] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
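Earlier in the run, waitforio (host_management.sh@54-60) decided that traffic was flowing by polling bdevperf's RPC socket until Nvme0n1 had completed enough reads (378 observed against a threshold of 100). The same check can be reproduced stand-alone as below; the socket path and jq expression come from the trace, while the retry loop and threshold only approximate the script's behaviour.

#!/usr/bin/env bash
# Poll bdevperf over its RPC socket until Nvme0n1 has served some reads
# (approximate stand-alone version of waitforio; threshold is illustrative).
RPC_SOCK=/var/tmp/bdevperf.sock
for _ in {1..10}; do
    ops=$(./scripts/rpc.py -s "$RPC_SOCK" bdev_get_iostat -b Nvme0n1 |
          jq -r '.bdevs[0].num_read_ops')
    if [ "${ops:-0}" -ge 100 ]; then
        echo "I/O is flowing: $ops reads completed"
        exit 0
    fi
    sleep 1
done
echo "no I/O observed on Nvme0n1" >&2
exit 1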
00:38:19.310 [2024-11-17 09:38:24.070583] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3158416 ] 00:38:19.310 [2024-11-17 09:38:24.214997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:19.568 [2024-11-17 09:38:24.345538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:19.826 Running I/O for 1 seconds... 00:38:21.201 1280.00 IOPS, 80.00 MiB/s 00:38:21.201 Latency(us) 00:38:21.201 [2024-11-17T08:38:26.214Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:21.201 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:38:21.201 Verification LBA range: start 0x0 length 0x400 00:38:21.201 Nvme0n1 : 1.01 1336.70 83.54 0.00 0.00 47072.77 9951.76 42137.22 00:38:21.201 [2024-11-17T08:38:26.214Z] =================================================================================================================== 00:38:21.201 [2024-11-17T08:38:26.214Z] Total : 1336.70 83.54 0.00 0.00 47072.77 9951.76 42137.22 00:38:21.767 09:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:38:21.767 09:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:38:21.767 09:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:38:21.768 09:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:38:21.768 09:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:38:21.768 09:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:21.768 09:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:38:21.768 09:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:21.768 09:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:38:21.768 09:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:21.768 09:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:21.768 rmmod nvme_tcp 00:38:21.768 rmmod nvme_fabrics 00:38:21.768 rmmod nvme_keyring 00:38:21.768 09:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:21.768 09:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:38:21.768 09:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:38:21.768 09:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 3157966 ']' 00:38:21.768 09:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 3157966 00:38:21.768 09:38:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 3157966 ']' 00:38:21.768 09:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 3157966 00:38:21.768 09:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:38:21.768 09:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:21.768 09:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3157966 00:38:21.768 09:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:21.768 09:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:21.768 09:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3157966' 00:38:21.768 killing process with pid 3157966 00:38:21.768 09:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 3157966 00:38:21.768 09:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 3157966 00:38:23.143 [2024-11-17 09:38:27.912862] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:38:23.143 09:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:23.143 09:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:23.143 09:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:23.143 09:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:38:23.143 09:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:38:23.143 09:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:23.143 09:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:38:23.143 09:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:23.143 09:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:23.143 09:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:23.143 09:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:23.143 09:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:25.073 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:25.073 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:38:25.073 00:38:25.073 real 0m11.783s 00:38:25.073 user 
0m25.423s 00:38:25.073 sys 0m4.552s 00:38:25.073 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:25.073 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:25.073 ************************************ 00:38:25.073 END TEST nvmf_host_management 00:38:25.073 ************************************ 00:38:25.073 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:38:25.073 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:25.073 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:25.073 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:25.332 ************************************ 00:38:25.332 START TEST nvmf_lvol 00:38:25.332 ************************************ 00:38:25.332 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:38:25.332 * Looking for test storage... 00:38:25.332 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:25.332 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:25.332 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:38:25.332 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:25.332 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:25.332 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:25.332 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:25.332 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:25.332 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:38:25.332 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:38:25.332 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:38:25.332 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:38:25.332 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:38:25.332 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:38:25.332 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:38:25.332 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:25.332 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:38:25.332 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 
00:38:25.332 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:25.332 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:25.332 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:38:25.332 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:38:25.332 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:25.332 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:38:25.332 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:38:25.332 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:38:25.332 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:38:25.332 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:25.332 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:38:25.332 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:38:25.332 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:25.332 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:25.332 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:38:25.332 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:25.332 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:25.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:25.332 --rc genhtml_branch_coverage=1 00:38:25.332 --rc genhtml_function_coverage=1 00:38:25.332 --rc genhtml_legend=1 00:38:25.332 --rc geninfo_all_blocks=1 00:38:25.332 --rc geninfo_unexecuted_blocks=1 00:38:25.332 00:38:25.332 ' 00:38:25.332 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:25.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:25.332 --rc genhtml_branch_coverage=1 00:38:25.332 --rc genhtml_function_coverage=1 00:38:25.332 --rc genhtml_legend=1 00:38:25.332 --rc geninfo_all_blocks=1 00:38:25.332 --rc geninfo_unexecuted_blocks=1 00:38:25.332 00:38:25.332 ' 00:38:25.332 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:25.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:25.332 --rc genhtml_branch_coverage=1 00:38:25.332 --rc genhtml_function_coverage=1 00:38:25.332 --rc genhtml_legend=1 00:38:25.332 --rc geninfo_all_blocks=1 00:38:25.332 --rc geninfo_unexecuted_blocks=1 00:38:25.332 00:38:25.332 ' 00:38:25.332 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:25.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:25.332 --rc genhtml_branch_coverage=1 00:38:25.332 --rc genhtml_function_coverage=1 
00:38:25.332 --rc genhtml_legend=1 00:38:25.332 --rc geninfo_all_blocks=1 00:38:25.332 --rc geninfo_unexecuted_blocks=1 00:38:25.332 00:38:25.332 ' 00:38:25.332 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:25.332 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:38:25.332 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:25.332 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:25.332 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:25.332 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:25.332 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:25.332 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:25.332 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:25.332 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:25.332 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:25.332 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:25.332 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:25.332 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:25.332 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:25.332 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:25.332 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:25.332 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:25.332 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:25.332 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:38:25.332 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:25.332 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:25.332 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:25.332 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:25.332 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:25.332 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:25.332 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:38:25.332 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:25.333 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:38:25.333 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:25.333 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:25.333 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:25.333 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:25.333 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:25.333 09:38:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:25.333 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:25.333 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:25.333 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:25.333 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:25.333 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:25.333 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:25.333 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:38:25.333 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:38:25.333 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:25.333 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:38:25.333 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:25.333 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:25.333 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:25.333 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:25.333 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:25.333 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:25.333 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:25.333 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:25.333 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:25.333 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:25.333 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:38:25.333 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:38:27.235 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:27.235 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:38:27.235 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:27.235 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:27.235 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:27.235 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:38:27.235 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:27.235 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:38:27.235 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:27.235 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:38:27.235 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:38:27.235 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:38:27.235 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:38:27.235 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:38:27.235 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:38:27.235 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:27.235 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:27.235 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:27.235 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:27.235 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:27.235 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:27.235 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:27.235 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:27.235 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:27.235 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:27.235 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:27.235 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:27.235 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:27.235 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:27.235 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:27.236 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:27.236 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:27.236 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:27.236 09:38:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:27.236 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:27.236 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:27.236 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:27.236 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:27.236 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:27.236 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:27.236 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:27.236 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:27.236 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:38:27.236 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:38:27.236 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:27.236 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:27.236 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:27.236 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:27.236 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:27.236 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:27.236 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:27.236 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:27.236 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:27.236 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:27.236 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:27.236 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:27.236 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:27.236 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:27.236 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:27.236 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:38:27.236 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:27.236 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:27.236 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:38:27.236 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:27.236 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:27.236 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:27.236 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:27.236 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:27.236 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:27.236 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:27.236 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:27.236 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:27.236 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:27.236 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:38:27.236 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:27.236 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:27.236 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:27.236 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:27.236 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:27.236 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:27.236 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:27.236 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:27.236 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:27.236 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:27.236 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:27.236 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:27.236 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:27.236 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:27.236 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:27.236 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:27.236 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:27.236 
09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:27.495 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:27.495 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:27.495 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:27.495 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:27.495 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:27.495 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:27.495 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:27.495 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:27.495 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:27.495 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.271 ms 00:38:27.495 00:38:27.495 --- 10.0.0.2 ping statistics --- 00:38:27.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:27.495 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:38:27.495 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:27.495 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:27.495 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:38:27.495 00:38:27.495 --- 10.0.0.1 ping statistics --- 00:38:27.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:27.495 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:38:27.495 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:27.495 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:38:27.495 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:27.495 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:27.495 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:27.495 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:27.495 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:27.495 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:27.495 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:27.495 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:38:27.495 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:27.495 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:27.495 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:38:27.495 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=3160875 00:38:27.495 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:38:27.495 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 3160875 00:38:27.495 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 3160875 ']' 00:38:27.495 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:27.495 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:27.495 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:27.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:27.495 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:27.495 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:38:27.753 [2024-11-17 09:38:32.538625] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:38:27.754 [2024-11-17 09:38:32.541154] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:38:27.754 [2024-11-17 09:38:32.541255] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:27.754 [2024-11-17 09:38:32.681710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:38:28.012 [2024-11-17 09:38:32.804171] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:28.012 [2024-11-17 09:38:32.804239] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:28.012 [2024-11-17 09:38:32.804263] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:28.012 [2024-11-17 09:38:32.804281] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:28.012 [2024-11-17 09:38:32.804298] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:28.012 [2024-11-17 09:38:32.806652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:28.012 [2024-11-17 09:38:32.806690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:28.012 [2024-11-17 09:38:32.806699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:28.271 [2024-11-17 09:38:33.128979] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:28.271 [2024-11-17 09:38:33.129972] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:38:28.271 [2024-11-17 09:38:33.130711] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:28.271 [2024-11-17 09:38:33.130988] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
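The entries above show how this run builds its point-to-point NVMe/TCP topology and brings up the target: one E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2/24, its peer (cvl_0_1) stays on the host side as 10.0.0.1/24, an iptables rule admits TCP port 4420, connectivity is confirmed with a ping in each direction, and nvmf_tgt is then started inside the namespace with --interrupt-mode and core mask 0x7 (three reactors, as the notices above confirm). Below is a condensed sketch of that bring-up, assuming a local SPDK checkout; it approximates what nvmf/common.sh does in this log rather than reproducing its helper functions, and the readiness loop at the end is a stand-in for the harness's waitforlisten.

#!/usr/bin/env bash
# Condensed bring-up sketch based on the commands visible in this log.
set -e
NS=cvl_0_0_ns_spdk     # namespace holding the target-side port
TGT_IF=cvl_0_0         # target port, 10.0.0.2
INI_IF=cvl_0_1         # initiator port, 10.0.0.1

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# open the NVMe/TCP port on the initiator side and check both directions
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
modprobe nvme-tcp

# start the target in interrupt mode on cores 0-2, as this run does
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 &
# stand-in for the harness's waitforlisten: poll the RPC socket until it answers
until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done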
00:38:28.530 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:28.530 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:38:28.530 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:28.530 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:28.530 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:38:28.530 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:28.530 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:38:28.788 [2024-11-17 09:38:33.775734] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:29.045 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:38:29.304 09:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:38:29.304 09:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:38:29.562 09:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:38:29.562 09:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:38:29.820 09:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:38:30.077 09:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=9fbeb9d7-8801-4129-988c-b0a08fe82baa 00:38:30.077 09:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 9fbeb9d7-8801-4129-988c-b0a08fe82baa lvol 20 00:38:30.335 09:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=d9ace514-19df-4a2c-805d-677c4937cf55 00:38:30.335 09:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:38:30.901 09:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d9ace514-19df-4a2c-805d-677c4937cf55 00:38:30.901 09:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:31.158 [2024-11-17 09:38:36.127856] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:38:31.158 09:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:31.416 09:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3161305 00:38:31.416 09:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:38:31.416 09:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:38:32.790 09:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot d9ace514-19df-4a2c-805d-677c4937cf55 MY_SNAPSHOT 00:38:32.790 09:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=ed0bdccf-6522-459b-a7d8-89ca6a7f511e 00:38:32.790 09:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize d9ace514-19df-4a2c-805d-677c4937cf55 30 00:38:33.356 09:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone ed0bdccf-6522-459b-a7d8-89ca6a7f511e MY_CLONE 00:38:33.615 09:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=48ce58bf-327b-4203-a225-10c4019b6f3f 00:38:33.615 09:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 48ce58bf-327b-4203-a225-10c4019b6f3f 00:38:34.181 09:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3161305 00:38:42.293 Initializing NVMe Controllers 00:38:42.294 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:38:42.294 Controller IO queue size 128, less than required. 00:38:42.294 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:38:42.294 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:38:42.294 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:38:42.294 Initialization complete. Launching workers. 
00:38:42.294 ======================================================== 00:38:42.294 Latency(us) 00:38:42.294 Device Information : IOPS MiB/s Average min max 00:38:42.294 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 8305.50 32.44 15417.56 467.97 187128.14 00:38:42.294 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 8058.80 31.48 15887.25 5298.89 214344.30 00:38:42.294 ======================================================== 00:38:42.294 Total : 16364.30 63.92 15648.86 467.97 214344.30 00:38:42.294 00:38:42.294 09:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:42.294 09:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d9ace514-19df-4a2c-805d-677c4937cf55 00:38:42.552 09:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9fbeb9d7-8801-4129-988c-b0a08fe82baa 00:38:42.810 09:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:38:42.810 09:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:38:42.810 09:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:38:42.810 09:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:42.810 09:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:38:42.810 09:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:42.810 09:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:38:42.810 09:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:42.810 09:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:42.810 rmmod nvme_tcp 00:38:42.810 rmmod nvme_fabrics 00:38:42.810 rmmod nvme_keyring 00:38:42.810 09:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:42.810 09:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:38:42.810 09:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:38:42.810 09:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 3160875 ']' 00:38:42.810 09:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 3160875 00:38:42.810 09:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 3160875 ']' 00:38:42.810 09:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 3160875 00:38:42.810 09:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:38:42.810 09:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:42.810 09:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3160875 00:38:42.810 09:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:42.810 09:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:42.810 09:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3160875' 00:38:42.810 killing process with pid 3160875 00:38:42.810 09:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 3160875 00:38:42.810 09:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 3160875 00:38:44.713 09:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:44.713 09:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:44.713 09:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:44.713 09:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:38:44.713 09:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:38:44.713 09:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:44.713 09:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:38:44.713 09:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:44.713 09:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:44.713 09:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:44.713 09:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:44.713 09:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:46.615 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:46.615 00:38:46.615 real 0m21.228s 00:38:46.615 user 0m58.241s 00:38:46.615 sys 0m7.535s 00:38:46.615 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:46.615 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:38:46.615 ************************************ 00:38:46.615 END TEST nvmf_lvol 00:38:46.615 ************************************ 00:38:46.615 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:38:46.615 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:46.615 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:46.615 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:46.615 ************************************ 00:38:46.615 START TEST nvmf_lvs_grow 00:38:46.615 
************************************ 00:38:46.615 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:38:46.615 * Looking for test storage... 00:38:46.615 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:46.615 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:46.615 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:38:46.615 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:46.615 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:46.615 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:46.615 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:46.615 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:46.615 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:38:46.615 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:38:46.615 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:38:46.615 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:38:46.615 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:38:46.615 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:38:46.615 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:38:46.615 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:46.615 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:38:46.615 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:38:46.615 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:46.615 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:46.615 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:38:46.615 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:38:46.615 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:46.615 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:38:46.615 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:38:46.615 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:38:46.615 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:38:46.615 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:46.615 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:38:46.615 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:38:46.615 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:46.615 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:46.615 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:38:46.615 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:46.615 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:46.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:46.615 --rc genhtml_branch_coverage=1 00:38:46.615 --rc genhtml_function_coverage=1 00:38:46.615 --rc genhtml_legend=1 00:38:46.615 --rc geninfo_all_blocks=1 00:38:46.615 --rc geninfo_unexecuted_blocks=1 00:38:46.615 00:38:46.615 ' 00:38:46.615 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:46.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:46.615 --rc genhtml_branch_coverage=1 00:38:46.615 --rc genhtml_function_coverage=1 00:38:46.615 --rc genhtml_legend=1 00:38:46.615 --rc geninfo_all_blocks=1 00:38:46.615 --rc geninfo_unexecuted_blocks=1 00:38:46.615 00:38:46.615 ' 00:38:46.615 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:46.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:46.615 --rc genhtml_branch_coverage=1 00:38:46.615 --rc genhtml_function_coverage=1 00:38:46.615 --rc genhtml_legend=1 00:38:46.615 --rc geninfo_all_blocks=1 00:38:46.615 --rc geninfo_unexecuted_blocks=1 00:38:46.615 00:38:46.615 ' 00:38:46.615 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:46.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:46.615 --rc genhtml_branch_coverage=1 00:38:46.615 --rc genhtml_function_coverage=1 00:38:46.615 --rc genhtml_legend=1 00:38:46.615 --rc geninfo_all_blocks=1 00:38:46.615 --rc geninfo_unexecuted_blocks=1 00:38:46.615 00:38:46.615 ' 00:38:46.615 09:38:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:46.615 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:38:46.615 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:46.615 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:46.615 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:46.615 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:46.615 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:46.615 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:46.615 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:46.615 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:46.615 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:46.615 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:46.615 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:46.615 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:46.615 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:46.615 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:46.616 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:46.616 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:46.616 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:46.616 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:38:46.616 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:46.616 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:46.616 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:46.616 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:46.616 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:46.616 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:46.616 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:38:46.616 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:46.616 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:38:46.616 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:46.616 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:46.616 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:46.616 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:46.616 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:38:46.616 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:46.616 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:46.616 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:46.616 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:46.616 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:46.616 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:46.616 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:38:46.616 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:38:46.616 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:46.616 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:46.616 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:46.616 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:46.616 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:46.616 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:46.616 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:46.616 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:46.616 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:46.616 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:46.616 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:38:46.616 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:49.149 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:49.149 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:38:49.149 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:49.149 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:49.149 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:49.149 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:49.149 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:49.149 09:38:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:38:49.149 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:49.149 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:38:49.149 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:38:49.149 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:38:49.149 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:38:49.149 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:38:49.149 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:38:49.149 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:49.149 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:49.149 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:49.149 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:49.149 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:49.149 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:49.149 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:49.149 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:49.149 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:49.149 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:49.149 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:49.149 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:49.149 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:49.149 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:49.149 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:49.149 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:49.149 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:49.149 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:49.149 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
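Editor's note: the array setup above is how nvmf/common.sh classifies candidate NICs by PCI vendor:device ID: Intel 0x1592/0x159b go into e810, 0x37d2 into x722, and the listed Mellanox IDs into mlx. Because this job selects the e810 branch, pci_devs is reduced to the two 0x159b ports reported in the lines that follow. To reproduce that discovery by hand (assuming lspci and sysfs are available on the host), something like:

  # list Intel E810-class ports by vendor:device ID
  lspci -d 8086:159b
  # the script reads the kernel interface name out of sysfs, e.g.:
  ls /sys/bus/pci/devices/0000:0a:00.0/net/    # -> cvl_0_0 in this log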
00:38:49.149 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:49.149 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:49.149 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:49.149 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:49.149 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:49.149 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:49.149 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:49.149 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:49.149 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:38:49.149 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:38:49.149 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:49.149 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:49.149 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:49.150 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:49.150 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:49.150 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:49.150 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:49.150 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:49.150 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:49.150 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:49.150 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:49.150 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:49.150 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:49.150 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:49.150 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:49.150 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:38:49.150 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:49.150 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:49.150 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:38:49.150 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:49.150 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:49.150 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:49.150 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:49.150 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:49.150 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:49.150 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:49.150 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:49.150 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:49.150 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:49.150 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:38:49.150 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:49.150 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:49.150 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:49.150 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:49.150 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:49.150 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:49.150 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:49.150 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:49.150 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:49.150 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:49.150 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:49.150 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:49.150 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:49.150 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:49.150 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:49.150 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:49.150 09:38:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:49.150 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:49.150 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:49.150 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:49.150 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:49.150 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:49.150 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:49.150 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:49.150 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:49.150 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:49.150 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:49.150 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:38:49.150 00:38:49.150 --- 10.0.0.2 ping statistics --- 00:38:49.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:49.150 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:38:49.150 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:49.150 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:49.150 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.080 ms 00:38:49.150 00:38:49.150 --- 10.0.0.1 ping statistics --- 00:38:49.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:49.150 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:38:49.150 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:49.150 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:38:49.150 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:49.150 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:49.150 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:49.150 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:49.150 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:49.150 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:49.150 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:49.150 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:38:49.150 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:49.150 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:49.150 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:49.150 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=3164822 00:38:49.150 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:38:49.150 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 3164822 00:38:49.150 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 3164822 ']' 00:38:49.150 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:49.150 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:49.150 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:49.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:49.150 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:49.150 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:49.150 [2024-11-17 09:38:53.887048] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
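Editor's note: condensed, the nvmf_tcp_init sequence traced above builds a two-port loopback topology on a single host. One E810 port (cvl_0_0) is moved into a private network namespace and serves as the target at 10.0.0.2, while its sibling (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, with an iptables rule opening the NVMe/TCP port. The essential commands, copied from the trace (interface names and addresses as logged; the script also tags the iptables rule with an SPDK_NVMF comment):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # allow NVMe/TCP to the target

The two ping checks above confirm the cross-namespace path before nvmf_tgt is launched inside cvl_0_0_ns_spdk (ip netns exec ... nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1), which produces the interrupt-mode NOTICE logged just above.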
00:38:49.150 [2024-11-17 09:38:53.889620] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:38:49.150 [2024-11-17 09:38:53.889741] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:49.150 [2024-11-17 09:38:54.039009] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:49.409 [2024-11-17 09:38:54.172170] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:49.409 [2024-11-17 09:38:54.172253] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:49.409 [2024-11-17 09:38:54.172282] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:49.409 [2024-11-17 09:38:54.172304] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:49.409 [2024-11-17 09:38:54.172327] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:49.409 [2024-11-17 09:38:54.173978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:49.667 [2024-11-17 09:38:54.548307] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:49.667 [2024-11-17 09:38:54.548769] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:49.926 09:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:49.926 09:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:38:49.926 09:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:49.926 09:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:49.926 09:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:49.926 09:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:49.926 09:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:38:50.186 [2024-11-17 09:38:55.107051] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:50.186 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:38:50.186 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:50.186 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:50.186 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:50.186 ************************************ 00:38:50.186 START TEST lvs_grow_clean 00:38:50.186 ************************************ 00:38:50.186 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # 
lvs_grow 00:38:50.186 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:38:50.186 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:38:50.186 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:38:50.186 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:38:50.186 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:38:50.186 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:38:50.186 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:50.186 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:50.186 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:50.444 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:38:50.444 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:38:51.010 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=421e09d9-b806-4ce3-b60f-374cac0edcef 00:38:51.010 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 421e09d9-b806-4ce3-b60f-374cac0edcef 00:38:51.010 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:38:51.010 09:38:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:38:51.010 09:38:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:38:51.010 09:38:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 421e09d9-b806-4ce3-b60f-374cac0edcef lvol 150 00:38:51.268 09:38:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=89f263ff-4247-48e2-b240-7a74de70895f 00:38:51.268 09:38:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:51.268 09:38:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:38:51.526 [2024-11-17 09:38:56.530920] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:38:51.527 [2024-11-17 09:38:56.531084] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:38:51.527 true 00:38:51.784 09:38:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 421e09d9-b806-4ce3-b60f-374cac0edcef 00:38:51.784 09:38:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:38:52.042 09:38:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:38:52.042 09:38:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:38:52.299 09:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 89f263ff-4247-48e2-b240-7a74de70895f 00:38:52.557 09:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:52.815 [2024-11-17 09:38:57.631352] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:52.815 09:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:53.074 09:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3165269 00:38:53.074 09:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:38:53.074 09:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:53.074 09:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3165269 /var/tmp/bdevperf.sock 00:38:53.074 09:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 3165269 ']' 00:38:53.074 09:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:38:53.074 09:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:53.074 09:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:38:53.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:38:53.074 09:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:53.074 09:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:38:53.074 [2024-11-17 09:38:58.002538] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:38:53.074 [2024-11-17 09:38:58.002686] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3165269 ] 00:38:53.332 [2024-11-17 09:38:58.139109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:53.332 [2024-11-17 09:38:58.264137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:54.266 09:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:54.266 09:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:38:54.266 09:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:38:54.524 Nvme0n1 00:38:54.524 09:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:38:54.782 [ 00:38:54.782 { 00:38:54.782 "name": "Nvme0n1", 00:38:54.782 "aliases": [ 00:38:54.782 "89f263ff-4247-48e2-b240-7a74de70895f" 00:38:54.782 ], 00:38:54.782 "product_name": "NVMe disk", 00:38:54.782 "block_size": 4096, 00:38:54.782 "num_blocks": 38912, 00:38:54.782 "uuid": "89f263ff-4247-48e2-b240-7a74de70895f", 00:38:54.782 "numa_id": 0, 00:38:54.782 "assigned_rate_limits": { 00:38:54.782 "rw_ios_per_sec": 0, 00:38:54.782 "rw_mbytes_per_sec": 0, 00:38:54.782 "r_mbytes_per_sec": 0, 00:38:54.782 "w_mbytes_per_sec": 0 00:38:54.782 }, 00:38:54.782 "claimed": false, 00:38:54.782 "zoned": false, 00:38:54.782 "supported_io_types": { 00:38:54.782 "read": true, 00:38:54.782 "write": true, 00:38:54.782 "unmap": true, 00:38:54.782 "flush": true, 00:38:54.782 "reset": true, 00:38:54.782 "nvme_admin": true, 00:38:54.782 "nvme_io": true, 00:38:54.782 "nvme_io_md": false, 00:38:54.782 "write_zeroes": true, 00:38:54.782 "zcopy": false, 00:38:54.782 "get_zone_info": false, 00:38:54.782 "zone_management": false, 00:38:54.782 "zone_append": false, 00:38:54.782 "compare": true, 00:38:54.782 "compare_and_write": true, 00:38:54.782 "abort": true, 00:38:54.782 "seek_hole": false, 00:38:54.782 "seek_data": false, 00:38:54.782 "copy": true, 
00:38:54.782 "nvme_iov_md": false 00:38:54.782 }, 00:38:54.782 "memory_domains": [ 00:38:54.782 { 00:38:54.782 "dma_device_id": "system", 00:38:54.782 "dma_device_type": 1 00:38:54.782 } 00:38:54.782 ], 00:38:54.782 "driver_specific": { 00:38:54.782 "nvme": [ 00:38:54.782 { 00:38:54.782 "trid": { 00:38:54.782 "trtype": "TCP", 00:38:54.782 "adrfam": "IPv4", 00:38:54.782 "traddr": "10.0.0.2", 00:38:54.782 "trsvcid": "4420", 00:38:54.782 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:38:54.782 }, 00:38:54.782 "ctrlr_data": { 00:38:54.782 "cntlid": 1, 00:38:54.782 "vendor_id": "0x8086", 00:38:54.782 "model_number": "SPDK bdev Controller", 00:38:54.782 "serial_number": "SPDK0", 00:38:54.782 "firmware_revision": "25.01", 00:38:54.782 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:54.782 "oacs": { 00:38:54.782 "security": 0, 00:38:54.782 "format": 0, 00:38:54.782 "firmware": 0, 00:38:54.782 "ns_manage": 0 00:38:54.782 }, 00:38:54.782 "multi_ctrlr": true, 00:38:54.782 "ana_reporting": false 00:38:54.782 }, 00:38:54.782 "vs": { 00:38:54.782 "nvme_version": "1.3" 00:38:54.782 }, 00:38:54.782 "ns_data": { 00:38:54.782 "id": 1, 00:38:54.782 "can_share": true 00:38:54.782 } 00:38:54.782 } 00:38:54.782 ], 00:38:54.782 "mp_policy": "active_passive" 00:38:54.782 } 00:38:54.782 } 00:38:54.782 ] 00:38:54.782 09:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3165524 00:38:54.782 09:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:38:54.782 09:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:38:54.782 Running I/O for 10 seconds... 
00:38:55.717 Latency(us) 00:38:55.717 [2024-11-17T08:39:00.730Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:55.717 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:55.717 Nvme0n1 : 1.00 10097.00 39.44 0.00 0.00 0.00 0.00 0.00 00:38:55.717 [2024-11-17T08:39:00.730Z] =================================================================================================================== 00:38:55.717 [2024-11-17T08:39:00.730Z] Total : 10097.00 39.44 0.00 0.00 0.00 0.00 0.00 00:38:55.717 00:38:56.651 09:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 421e09d9-b806-4ce3-b60f-374cac0edcef 00:38:56.910 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:56.910 Nvme0n1 : 2.00 10287.00 40.18 0.00 0.00 0.00 0.00 0.00 00:38:56.910 [2024-11-17T08:39:01.923Z] =================================================================================================================== 00:38:56.910 [2024-11-17T08:39:01.923Z] Total : 10287.00 40.18 0.00 0.00 0.00 0.00 0.00 00:38:56.910 00:38:56.910 true 00:38:56.910 09:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 421e09d9-b806-4ce3-b60f-374cac0edcef 00:38:56.910 09:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:38:57.532 09:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:38:57.532 09:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:38:57.532 09:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3165524 00:38:57.790 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:57.790 Nvme0n1 : 3.00 10329.33 40.35 0.00 0.00 0.00 0.00 0.00 00:38:57.790 [2024-11-17T08:39:02.803Z] =================================================================================================================== 00:38:57.790 [2024-11-17T08:39:02.803Z] Total : 10329.33 40.35 0.00 0.00 0.00 0.00 0.00 00:38:57.790 00:38:58.725 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:58.725 Nvme0n1 : 4.00 10398.25 40.62 0.00 0.00 0.00 0.00 0.00 00:38:58.725 [2024-11-17T08:39:03.738Z] =================================================================================================================== 00:38:58.725 [2024-11-17T08:39:03.738Z] Total : 10398.25 40.62 0.00 0.00 0.00 0.00 0.00 00:38:58.725 00:39:00.099 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:00.099 Nvme0n1 : 5.00 10464.80 40.88 0.00 0.00 0.00 0.00 0.00 00:39:00.099 [2024-11-17T08:39:05.112Z] =================================================================================================================== 00:39:00.099 [2024-11-17T08:39:05.112Z] Total : 10464.80 40.88 0.00 0.00 0.00 0.00 0.00 00:39:00.099 00:39:01.034 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:01.034 Nvme0n1 : 6.00 10477.50 40.93 0.00 0.00 0.00 0.00 0.00 00:39:01.034 [2024-11-17T08:39:06.047Z] 
=================================================================================================================== 00:39:01.034 [2024-11-17T08:39:06.047Z] Total : 10477.50 40.93 0.00 0.00 0.00 0.00 0.00 00:39:01.034 00:39:01.969 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:01.969 Nvme0n1 : 7.00 10522.86 41.10 0.00 0.00 0.00 0.00 0.00 00:39:01.969 [2024-11-17T08:39:06.982Z] =================================================================================================================== 00:39:01.969 [2024-11-17T08:39:06.982Z] Total : 10522.86 41.10 0.00 0.00 0.00 0.00 0.00 00:39:01.969 00:39:02.903 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:02.903 Nvme0n1 : 8.00 10525.12 41.11 0.00 0.00 0.00 0.00 0.00 00:39:02.903 [2024-11-17T08:39:07.916Z] =================================================================================================================== 00:39:02.903 [2024-11-17T08:39:07.916Z] Total : 10525.12 41.11 0.00 0.00 0.00 0.00 0.00 00:39:02.903 00:39:03.838 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:03.838 Nvme0n1 : 9.00 10544.78 41.19 0.00 0.00 0.00 0.00 0.00 00:39:03.838 [2024-11-17T08:39:08.851Z] =================================================================================================================== 00:39:03.838 [2024-11-17T08:39:08.851Z] Total : 10544.78 41.19 0.00 0.00 0.00 0.00 0.00 00:39:03.838 00:39:04.771 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:04.771 Nvme0n1 : 10.00 10569.80 41.29 0.00 0.00 0.00 0.00 0.00 00:39:04.771 [2024-11-17T08:39:09.784Z] =================================================================================================================== 00:39:04.771 [2024-11-17T08:39:09.784Z] Total : 10569.80 41.29 0.00 0.00 0.00 0.00 0.00 00:39:04.771 00:39:04.771 00:39:04.771 Latency(us) 00:39:04.771 [2024-11-17T08:39:09.784Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:04.771 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:04.771 Nvme0n1 : 10.01 10571.90 41.30 0.00 0.00 12100.69 5728.33 26602.76 00:39:04.771 [2024-11-17T08:39:09.784Z] =================================================================================================================== 00:39:04.771 [2024-11-17T08:39:09.784Z] Total : 10571.90 41.30 0.00 0.00 12100.69 5728.33 26602.76 00:39:04.771 { 00:39:04.771 "results": [ 00:39:04.771 { 00:39:04.771 "job": "Nvme0n1", 00:39:04.771 "core_mask": "0x2", 00:39:04.771 "workload": "randwrite", 00:39:04.771 "status": "finished", 00:39:04.771 "queue_depth": 128, 00:39:04.771 "io_size": 4096, 00:39:04.771 "runtime": 10.010124, 00:39:04.771 "iops": 10571.897011465593, 00:39:04.771 "mibps": 41.29647270103747, 00:39:04.771 "io_failed": 0, 00:39:04.771 "io_timeout": 0, 00:39:04.771 "avg_latency_us": 12100.688458468863, 00:39:04.771 "min_latency_us": 5728.331851851852, 00:39:04.771 "max_latency_us": 26602.76148148148 00:39:04.771 } 00:39:04.771 ], 00:39:04.771 "core_count": 1 00:39:04.771 } 00:39:04.771 09:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3165269 00:39:04.771 09:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 3165269 ']' 00:39:04.771 09:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 3165269 
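Editor's note: stripped of the tracing, the lvs_grow_clean pass above boils down to growing a logical-volume store after its backing AIO file is enlarged, while bdevperf keeps writing to the exported lvol. The core sequence, condensed from the trace (the full /var/jenkins/... paths are shortened to rpc.py, and aio_bdev_file stands in for the test/nvmf/target/aio_bdev file):

  truncate -s 200M aio_bdev_file
  rpc.py bdev_aio_create aio_bdev_file aio_bdev 4096
  rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
  rpc.py bdev_lvol_create -u <lvs-uuid> lvol 150     # 150 MiB lvol, later exported via cnode0
  truncate -s 400M aio_bdev_file                     # enlarge the backing file
  rpc.py bdev_aio_rescan aio_bdev                    # 51200 -> 102400 blocks, per the NOTICE above
  rpc.py bdev_lvol_grow_lvstore -u <lvs-uuid>        # total_data_clusters: 49 -> 99

The jq checks on total_data_clusters (49 before the grow, 99 after) are the actual pass/fail criterion; the 10-second randwrite numbers above only show that the lvol stayed serviceable while the store grew.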
00:39:04.771 09:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:39:04.771 09:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:04.771 09:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3165269 00:39:04.771 09:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:04.771 09:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:04.771 09:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3165269' 00:39:04.771 killing process with pid 3165269 00:39:04.771 09:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 3165269 00:39:04.771 Received shutdown signal, test time was about 10.000000 seconds 00:39:04.771 00:39:04.771 Latency(us) 00:39:04.771 [2024-11-17T08:39:09.784Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:04.771 [2024-11-17T08:39:09.784Z] =================================================================================================================== 00:39:04.771 [2024-11-17T08:39:09.784Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:04.771 09:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 3165269 00:39:05.705 09:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:05.964 09:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:06.222 09:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 421e09d9-b806-4ce3-b60f-374cac0edcef 00:39:06.222 09:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:39:06.480 09:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:39:06.480 09:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:39:06.480 09:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:39:06.738 [2024-11-17 09:39:11.742986] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:39:06.997 09:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 421e09d9-b806-4ce3-b60f-374cac0edcef 
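Editor's note: having deleted the backing aio_bdev just above, the test now asserts the negative path: bdev_lvol_get_lvstores for the removed store must fail, which is what the NOT wrapper whose internals are traced below checks. Roughly, as a sketch of the idea rather than the exact autotest_common.sh implementation:

  NOT() {                       # succeed only if the wrapped command fails
      "$@" && return 1 || return 0
  }
  NOT rpc.py bdev_lvol_get_lvstores -u <deleted-lvs-uuid>   # expects the -19 / 'No such device' error below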
00:39:06.997 09:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:39:06.997 09:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 421e09d9-b806-4ce3-b60f-374cac0edcef 00:39:06.997 09:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:06.997 09:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:06.997 09:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:06.997 09:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:06.997 09:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:06.997 09:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:06.997 09:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:06.997 09:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:39:06.997 09:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 421e09d9-b806-4ce3-b60f-374cac0edcef 00:39:07.255 request: 00:39:07.255 { 00:39:07.255 "uuid": "421e09d9-b806-4ce3-b60f-374cac0edcef", 00:39:07.255 "method": "bdev_lvol_get_lvstores", 00:39:07.255 "req_id": 1 00:39:07.255 } 00:39:07.255 Got JSON-RPC error response 00:39:07.255 response: 00:39:07.255 { 00:39:07.255 "code": -19, 00:39:07.255 "message": "No such device" 00:39:07.255 } 00:39:07.255 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:39:07.256 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:07.256 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:07.256 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:07.256 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:39:07.514 aio_bdev 00:39:07.514 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
89f263ff-4247-48e2-b240-7a74de70895f 00:39:07.514 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=89f263ff-4247-48e2-b240-7a74de70895f 00:39:07.514 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:39:07.514 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:39:07.514 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:39:07.514 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:39:07.514 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:39:07.773 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 89f263ff-4247-48e2-b240-7a74de70895f -t 2000 00:39:08.031 [ 00:39:08.031 { 00:39:08.031 "name": "89f263ff-4247-48e2-b240-7a74de70895f", 00:39:08.031 "aliases": [ 00:39:08.031 "lvs/lvol" 00:39:08.031 ], 00:39:08.031 "product_name": "Logical Volume", 00:39:08.031 "block_size": 4096, 00:39:08.031 "num_blocks": 38912, 00:39:08.031 "uuid": "89f263ff-4247-48e2-b240-7a74de70895f", 00:39:08.031 "assigned_rate_limits": { 00:39:08.031 "rw_ios_per_sec": 0, 00:39:08.031 "rw_mbytes_per_sec": 0, 00:39:08.031 "r_mbytes_per_sec": 0, 00:39:08.031 "w_mbytes_per_sec": 0 00:39:08.031 }, 00:39:08.031 "claimed": false, 00:39:08.031 "zoned": false, 00:39:08.031 "supported_io_types": { 00:39:08.031 "read": true, 00:39:08.031 "write": true, 00:39:08.031 "unmap": true, 00:39:08.031 "flush": false, 00:39:08.031 "reset": true, 00:39:08.031 "nvme_admin": false, 00:39:08.031 "nvme_io": false, 00:39:08.031 "nvme_io_md": false, 00:39:08.031 "write_zeroes": true, 00:39:08.031 "zcopy": false, 00:39:08.031 "get_zone_info": false, 00:39:08.031 "zone_management": false, 00:39:08.031 "zone_append": false, 00:39:08.031 "compare": false, 00:39:08.031 "compare_and_write": false, 00:39:08.031 "abort": false, 00:39:08.031 "seek_hole": true, 00:39:08.031 "seek_data": true, 00:39:08.031 "copy": false, 00:39:08.031 "nvme_iov_md": false 00:39:08.031 }, 00:39:08.031 "driver_specific": { 00:39:08.031 "lvol": { 00:39:08.031 "lvol_store_uuid": "421e09d9-b806-4ce3-b60f-374cac0edcef", 00:39:08.031 "base_bdev": "aio_bdev", 00:39:08.031 "thin_provision": false, 00:39:08.031 "num_allocated_clusters": 38, 00:39:08.031 "snapshot": false, 00:39:08.031 "clone": false, 00:39:08.031 "esnap_clone": false 00:39:08.031 } 00:39:08.031 } 00:39:08.031 } 00:39:08.031 ] 00:39:08.031 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:39:08.031 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 421e09d9-b806-4ce3-b60f-374cac0edcef 00:39:08.031 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:39:08.289 09:39:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:39:08.289 09:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 421e09d9-b806-4ce3-b60f-374cac0edcef 00:39:08.289 09:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:39:08.547 09:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:39:08.547 09:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 89f263ff-4247-48e2-b240-7a74de70895f 00:39:08.805 09:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 421e09d9-b806-4ce3-b60f-374cac0edcef 00:39:09.062 09:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:39:09.629 09:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:39:09.629 00:39:09.629 real 0m19.224s 00:39:09.629 user 0m18.974s 00:39:09.629 sys 0m1.914s 00:39:09.629 09:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:09.629 09:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:39:09.629 ************************************ 00:39:09.629 END TEST lvs_grow_clean 00:39:09.629 ************************************ 00:39:09.629 09:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:39:09.629 09:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:39:09.629 09:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:09.629 09:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:39:09.629 ************************************ 00:39:09.629 START TEST lvs_grow_dirty 00:39:09.629 ************************************ 00:39:09.629 09:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:39:09.629 09:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:39:09.629 09:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:39:09.629 09:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:39:09.629 09:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:39:09.629 09:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:39:09.629 09:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:39:09.629 09:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:39:09.629 09:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:39:09.629 09:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:39:09.888 09:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:39:09.888 09:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:39:10.146 09:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=06ff4094-7a1d-4124-9bee-a7f0e8f19ec5 00:39:10.146 09:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 06ff4094-7a1d-4124-9bee-a7f0e8f19ec5 00:39:10.146 09:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:39:10.405 09:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:39:10.405 09:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:39:10.405 09:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 06ff4094-7a1d-4124-9bee-a7f0e8f19ec5 lvol 150 00:39:10.663 09:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=5d64ca39-650f-4e5e-9a46-1b3539473624 00:39:10.663 09:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:39:10.663 09:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:39:10.921 [2024-11-17 09:39:15.854860] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:39:10.921 [2024-11-17 09:39:15.854985] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:39:10.921 true 00:39:10.921 09:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 06ff4094-7a1d-4124-9bee-a7f0e8f19ec5 00:39:10.921 09:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:39:11.179 09:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:39:11.179 09:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:39:11.438 09:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5d64ca39-650f-4e5e-9a46-1b3539473624 00:39:11.696 09:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:12.262 [2024-11-17 09:39:16.995393] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:12.262 09:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:12.520 09:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3168180 00:39:12.520 09:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:39:12.520 09:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:39:12.520 09:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3168180 /var/tmp/bdevperf.sock 00:39:12.520 09:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3168180 ']' 00:39:12.520 09:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:39:12.520 09:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:12.520 09:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:39:12.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
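Distilled, the lvs_grow_dirty setup traced above is a short RPC sequence. A minimal sketch follows, assuming a running SPDK target with scripts/rpc.py reachable on PATH (abbreviated rpc.py here) and ./aio_bdev as a placeholder backing file; the sizes mirror the run above (200 MiB file, 4 MiB clusters, 150 MiB volume, listener on 10.0.0.2:4420).

truncate -s 200M ./aio_bdev                                    # backing file for the AIO bdev
rpc.py bdev_aio_create ./aio_bdev aio_bdev 4096                # expose it as bdev "aio_bdev", 4 KiB blocks
lvs=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
      --md-pages-per-cluster-ratio 300 aio_bdev lvs)           # extra metadata pages so the lvstore can grow later
rpc.py bdev_lvol_get_lvstores -u "$lvs" \
  | jq -r '.[0].total_data_clusters'                           # 49 data clusters at 200 MiB
lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 150)             # 150 MiB logical volume (thick provisioned)
truncate -s 400M ./aio_bdev                                    # grow the file underneath the bdev
rpc.py bdev_aio_rescan aio_bdev                                # 51200 -> 102400 blocks; lvstore not grown yet

# Target side: export the volume over NVMe/TCP, as the trace does.
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420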
00:39:12.520 09:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:12.520 09:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:39:12.520 [2024-11-17 09:39:17.416028] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:39:12.520 [2024-11-17 09:39:17.416164] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3168180 ] 00:39:12.779 [2024-11-17 09:39:17.559867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:12.779 [2024-11-17 09:39:17.692387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:13.712 09:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:13.712 09:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:39:13.712 09:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:39:13.970 Nvme0n1 00:39:13.970 09:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:39:14.228 [ 00:39:14.228 { 00:39:14.228 "name": "Nvme0n1", 00:39:14.228 "aliases": [ 00:39:14.228 "5d64ca39-650f-4e5e-9a46-1b3539473624" 00:39:14.228 ], 00:39:14.228 "product_name": "NVMe disk", 00:39:14.228 "block_size": 4096, 00:39:14.228 "num_blocks": 38912, 00:39:14.228 "uuid": "5d64ca39-650f-4e5e-9a46-1b3539473624", 00:39:14.228 "numa_id": 0, 00:39:14.228 "assigned_rate_limits": { 00:39:14.228 "rw_ios_per_sec": 0, 00:39:14.228 "rw_mbytes_per_sec": 0, 00:39:14.228 "r_mbytes_per_sec": 0, 00:39:14.228 "w_mbytes_per_sec": 0 00:39:14.228 }, 00:39:14.228 "claimed": false, 00:39:14.228 "zoned": false, 00:39:14.228 "supported_io_types": { 00:39:14.228 "read": true, 00:39:14.228 "write": true, 00:39:14.228 "unmap": true, 00:39:14.228 "flush": true, 00:39:14.228 "reset": true, 00:39:14.228 "nvme_admin": true, 00:39:14.228 "nvme_io": true, 00:39:14.228 "nvme_io_md": false, 00:39:14.228 "write_zeroes": true, 00:39:14.228 "zcopy": false, 00:39:14.228 "get_zone_info": false, 00:39:14.228 "zone_management": false, 00:39:14.228 "zone_append": false, 00:39:14.228 "compare": true, 00:39:14.228 "compare_and_write": true, 00:39:14.228 "abort": true, 00:39:14.228 "seek_hole": false, 00:39:14.228 "seek_data": false, 00:39:14.228 "copy": true, 00:39:14.228 "nvme_iov_md": false 00:39:14.228 }, 00:39:14.228 "memory_domains": [ 00:39:14.228 { 00:39:14.228 "dma_device_id": "system", 00:39:14.228 "dma_device_type": 1 00:39:14.228 } 00:39:14.228 ], 00:39:14.228 "driver_specific": { 00:39:14.228 "nvme": [ 00:39:14.228 { 00:39:14.228 "trid": { 00:39:14.228 "trtype": "TCP", 00:39:14.228 "adrfam": "IPv4", 00:39:14.228 "traddr": "10.0.0.2", 00:39:14.228 "trsvcid": "4420", 00:39:14.228 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:39:14.228 }, 00:39:14.228 "ctrlr_data": 
{ 00:39:14.228 "cntlid": 1, 00:39:14.228 "vendor_id": "0x8086", 00:39:14.228 "model_number": "SPDK bdev Controller", 00:39:14.228 "serial_number": "SPDK0", 00:39:14.228 "firmware_revision": "25.01", 00:39:14.228 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:14.228 "oacs": { 00:39:14.228 "security": 0, 00:39:14.228 "format": 0, 00:39:14.228 "firmware": 0, 00:39:14.228 "ns_manage": 0 00:39:14.228 }, 00:39:14.228 "multi_ctrlr": true, 00:39:14.228 "ana_reporting": false 00:39:14.228 }, 00:39:14.228 "vs": { 00:39:14.228 "nvme_version": "1.3" 00:39:14.228 }, 00:39:14.228 "ns_data": { 00:39:14.228 "id": 1, 00:39:14.228 "can_share": true 00:39:14.228 } 00:39:14.228 } 00:39:14.228 ], 00:39:14.228 "mp_policy": "active_passive" 00:39:14.228 } 00:39:14.228 } 00:39:14.228 ] 00:39:14.228 09:39:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3168433 00:39:14.228 09:39:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:39:14.229 09:39:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:39:14.486 Running I/O for 10 seconds... 00:39:15.418 Latency(us) 00:39:15.418 [2024-11-17T08:39:20.431Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:15.418 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:15.418 Nvme0n1 : 1.00 10287.00 40.18 0.00 0.00 0.00 0.00 0.00 00:39:15.418 [2024-11-17T08:39:20.431Z] =================================================================================================================== 00:39:15.418 [2024-11-17T08:39:20.431Z] Total : 10287.00 40.18 0.00 0.00 0.00 0.00 0.00 00:39:15.418 00:39:16.353 09:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 06ff4094-7a1d-4124-9bee-a7f0e8f19ec5 00:39:16.353 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:16.353 Nvme0n1 : 2.00 10414.00 40.68 0.00 0.00 0.00 0.00 0.00 00:39:16.353 [2024-11-17T08:39:21.366Z] =================================================================================================================== 00:39:16.353 [2024-11-17T08:39:21.366Z] Total : 10414.00 40.68 0.00 0.00 0.00 0.00 0.00 00:39:16.353 00:39:16.611 true 00:39:16.611 09:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 06ff4094-7a1d-4124-9bee-a7f0e8f19ec5 00:39:16.611 09:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:39:16.872 09:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:39:16.872 09:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:39:16.872 09:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3168433 00:39:17.446 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:17.446 Nvme0n1 : 
3.00 10467.67 40.89 0.00 0.00 0.00 0.00 0.00 00:39:17.446 [2024-11-17T08:39:22.459Z] =================================================================================================================== 00:39:17.446 [2024-11-17T08:39:22.459Z] Total : 10467.67 40.89 0.00 0.00 0.00 0.00 0.00 00:39:17.446 00:39:18.380 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:18.380 Nvme0n1 : 4.00 10533.75 41.15 0.00 0.00 0.00 0.00 0.00 00:39:18.380 [2024-11-17T08:39:23.393Z] =================================================================================================================== 00:39:18.380 [2024-11-17T08:39:23.393Z] Total : 10533.75 41.15 0.00 0.00 0.00 0.00 0.00 00:39:18.380 00:39:19.313 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:19.313 Nvme0n1 : 5.00 10573.20 41.30 0.00 0.00 0.00 0.00 0.00 00:39:19.313 [2024-11-17T08:39:24.326Z] =================================================================================================================== 00:39:19.313 [2024-11-17T08:39:24.326Z] Total : 10573.20 41.30 0.00 0.00 0.00 0.00 0.00 00:39:19.313 00:39:20.690 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:20.690 Nvme0n1 : 6.00 10631.33 41.53 0.00 0.00 0.00 0.00 0.00 00:39:20.690 [2024-11-17T08:39:25.703Z] =================================================================================================================== 00:39:20.690 [2024-11-17T08:39:25.703Z] Total : 10631.33 41.53 0.00 0.00 0.00 0.00 0.00 00:39:20.690 00:39:21.624 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:21.624 Nvme0n1 : 7.00 10727.29 41.90 0.00 0.00 0.00 0.00 0.00 00:39:21.624 [2024-11-17T08:39:26.637Z] =================================================================================================================== 00:39:21.624 [2024-11-17T08:39:26.637Z] Total : 10727.29 41.90 0.00 0.00 0.00 0.00 0.00 00:39:21.624 00:39:22.558 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:22.558 Nvme0n1 : 8.00 10735.75 41.94 0.00 0.00 0.00 0.00 0.00 00:39:22.558 [2024-11-17T08:39:27.571Z] =================================================================================================================== 00:39:22.558 [2024-11-17T08:39:27.571Z] Total : 10735.75 41.94 0.00 0.00 0.00 0.00 0.00 00:39:22.558 00:39:23.545 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:23.545 Nvme0n1 : 9.00 10728.22 41.91 0.00 0.00 0.00 0.00 0.00 00:39:23.545 [2024-11-17T08:39:28.558Z] =================================================================================================================== 00:39:23.545 [2024-11-17T08:39:28.558Z] Total : 10728.22 41.91 0.00 0.00 0.00 0.00 0.00 00:39:23.545 00:39:24.481 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:24.481 Nvme0n1 : 10.00 10722.20 41.88 0.00 0.00 0.00 0.00 0.00 00:39:24.481 [2024-11-17T08:39:29.494Z] =================================================================================================================== 00:39:24.481 [2024-11-17T08:39:29.494Z] Total : 10722.20 41.88 0.00 0.00 0.00 0.00 0.00 00:39:24.481 00:39:24.481 00:39:24.481 Latency(us) 00:39:24.481 [2024-11-17T08:39:29.494Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:24.481 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:24.481 Nvme0n1 : 10.01 10724.53 41.89 0.00 0.00 11928.54 5655.51 25049.32 00:39:24.481 
[2024-11-17T08:39:29.494Z] =================================================================================================================== 00:39:24.481 [2024-11-17T08:39:29.494Z] Total : 10724.53 41.89 0.00 0.00 11928.54 5655.51 25049.32 00:39:24.481 { 00:39:24.481 "results": [ 00:39:24.481 { 00:39:24.481 "job": "Nvme0n1", 00:39:24.481 "core_mask": "0x2", 00:39:24.481 "workload": "randwrite", 00:39:24.481 "status": "finished", 00:39:24.481 "queue_depth": 128, 00:39:24.481 "io_size": 4096, 00:39:24.481 "runtime": 10.009762, 00:39:24.481 "iops": 10724.530713117854, 00:39:24.481 "mibps": 41.89269809811662, 00:39:24.481 "io_failed": 0, 00:39:24.481 "io_timeout": 0, 00:39:24.481 "avg_latency_us": 11928.539070358294, 00:39:24.481 "min_latency_us": 5655.514074074074, 00:39:24.481 "max_latency_us": 25049.315555555557 00:39:24.481 } 00:39:24.481 ], 00:39:24.481 "core_count": 1 00:39:24.481 } 00:39:24.481 09:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3168180 00:39:24.481 09:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 3168180 ']' 00:39:24.481 09:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 3168180 00:39:24.481 09:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:39:24.481 09:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:24.481 09:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3168180 00:39:24.481 09:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:24.481 09:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:24.481 09:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3168180' 00:39:24.481 killing process with pid 3168180 00:39:24.481 09:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 3168180 00:39:24.481 Received shutdown signal, test time was about 10.000000 seconds 00:39:24.481 00:39:24.481 Latency(us) 00:39:24.481 [2024-11-17T08:39:29.494Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:24.481 [2024-11-17T08:39:29.494Z] =================================================================================================================== 00:39:24.481 [2024-11-17T08:39:29.494Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:24.481 09:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 3168180 00:39:25.415 09:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:25.674 09:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:39:25.932 09:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 06ff4094-7a1d-4124-9bee-a7f0e8f19ec5 00:39:25.932 09:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:39:26.190 09:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:39:26.190 09:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:39:26.190 09:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3164822 00:39:26.190 09:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3164822 00:39:26.190 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3164822 Killed "${NVMF_APP[@]}" "$@" 00:39:26.190 09:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:39:26.190 09:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:39:26.190 09:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:26.190 09:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:26.191 09:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:39:26.191 09:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=3169765 00:39:26.191 09:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:39:26.191 09:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 3169765 00:39:26.191 09:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3169765 ']' 00:39:26.191 09:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:26.191 09:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:26.191 09:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:26.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
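The initiator half and the grow-under-load check above reduce to the following sketch; bdevperf and bdevperf.py abbreviate build/examples/bdevperf and examples/bdev/bdevperf/bdevperf.py, and $lvs is the lvstore UUID from the setup sketch.

bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &   # -z: wait for RPC
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0   # Nvme0n1 appears on the initiator side
bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &           # 10 s of 4 KiB randwrite at queue depth 128

rpc.py bdev_lvol_grow_lvstore -u "$lvs"                         # grow while the writes are in flight
rpc.py bdev_lvol_get_lvstores -u "$lvs" \
  | jq -r '.[0].total_data_clusters'                            # 99 now, up from 49
wait                                                            # let the bdevperf run complete (~10.7k IOPS above)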
00:39:26.191 09:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:26.191 09:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:39:26.449 [2024-11-17 09:39:31.238530] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:26.449 [2024-11-17 09:39:31.241146] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:39:26.449 [2024-11-17 09:39:31.241250] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:26.449 [2024-11-17 09:39:31.397112] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:26.707 [2024-11-17 09:39:31.527003] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:26.707 [2024-11-17 09:39:31.527079] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:26.707 [2024-11-17 09:39:31.527114] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:26.707 [2024-11-17 09:39:31.527135] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:26.707 [2024-11-17 09:39:31.527157] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:26.707 [2024-11-17 09:39:31.528768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:26.964 [2024-11-17 09:39:31.900539] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:26.964 [2024-11-17 09:39:31.901005] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
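What makes the lvstore "dirty" is the step traced just above: the first nvmf target is killed with SIGKILL while 38 of the 99 clusters are allocated (61 free), so the blobstore is never shut down cleanly, and a replacement target is started in interrupt mode. A sketch, with $nvmfpid standing in for the first target's PID and nvmf_tgt abbreviating build/bin/nvmf_tgt:

rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'   # 61 = 99 total - 38 allocated
kill -9 "$nvmfpid"                                   # no clean shutdown: on-disk metadata stays dirty
ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &
nvmfpid=$!                                           # replacement target, single core, interrupt mode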
00:39:27.221 09:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:27.221 09:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:39:27.221 09:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:27.221 09:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:27.221 09:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:39:27.221 09:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:27.221 09:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:39:27.479 [2024-11-17 09:39:32.489213] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:39:27.479 [2024-11-17 09:39:32.489506] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:39:27.479 [2024-11-17 09:39:32.489619] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:39:27.737 09:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:39:27.737 09:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 5d64ca39-650f-4e5e-9a46-1b3539473624 00:39:27.738 09:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=5d64ca39-650f-4e5e-9a46-1b3539473624 00:39:27.738 09:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:39:27.738 09:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:39:27.738 09:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:39:27.738 09:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:39:27.738 09:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:39:27.996 09:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 5d64ca39-650f-4e5e-9a46-1b3539473624 -t 2000 00:39:28.254 [ 00:39:28.254 { 00:39:28.254 "name": "5d64ca39-650f-4e5e-9a46-1b3539473624", 00:39:28.254 "aliases": [ 00:39:28.254 "lvs/lvol" 00:39:28.254 ], 00:39:28.254 "product_name": "Logical Volume", 00:39:28.254 "block_size": 4096, 00:39:28.254 "num_blocks": 38912, 00:39:28.254 "uuid": "5d64ca39-650f-4e5e-9a46-1b3539473624", 00:39:28.254 "assigned_rate_limits": { 00:39:28.254 "rw_ios_per_sec": 0, 00:39:28.254 "rw_mbytes_per_sec": 0, 00:39:28.254 
"r_mbytes_per_sec": 0, 00:39:28.254 "w_mbytes_per_sec": 0 00:39:28.254 }, 00:39:28.254 "claimed": false, 00:39:28.254 "zoned": false, 00:39:28.254 "supported_io_types": { 00:39:28.254 "read": true, 00:39:28.254 "write": true, 00:39:28.254 "unmap": true, 00:39:28.254 "flush": false, 00:39:28.254 "reset": true, 00:39:28.254 "nvme_admin": false, 00:39:28.254 "nvme_io": false, 00:39:28.254 "nvme_io_md": false, 00:39:28.254 "write_zeroes": true, 00:39:28.254 "zcopy": false, 00:39:28.254 "get_zone_info": false, 00:39:28.254 "zone_management": false, 00:39:28.254 "zone_append": false, 00:39:28.254 "compare": false, 00:39:28.254 "compare_and_write": false, 00:39:28.254 "abort": false, 00:39:28.254 "seek_hole": true, 00:39:28.254 "seek_data": true, 00:39:28.254 "copy": false, 00:39:28.254 "nvme_iov_md": false 00:39:28.254 }, 00:39:28.254 "driver_specific": { 00:39:28.254 "lvol": { 00:39:28.254 "lvol_store_uuid": "06ff4094-7a1d-4124-9bee-a7f0e8f19ec5", 00:39:28.254 "base_bdev": "aio_bdev", 00:39:28.254 "thin_provision": false, 00:39:28.254 "num_allocated_clusters": 38, 00:39:28.254 "snapshot": false, 00:39:28.254 "clone": false, 00:39:28.254 "esnap_clone": false 00:39:28.254 } 00:39:28.254 } 00:39:28.254 } 00:39:28.254 ] 00:39:28.254 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:39:28.254 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 06ff4094-7a1d-4124-9bee-a7f0e8f19ec5 00:39:28.254 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:39:28.513 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:39:28.513 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:39:28.513 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 06ff4094-7a1d-4124-9bee-a7f0e8f19ec5 00:39:28.771 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:39:28.771 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:39:29.029 [2024-11-17 09:39:33.913752] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:39:29.029 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 06ff4094-7a1d-4124-9bee-a7f0e8f19ec5 00:39:29.029 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:39:29.029 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 06ff4094-7a1d-4124-9bee-a7f0e8f19ec5 00:39:29.030 09:39:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:29.030 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:29.030 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:29.030 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:29.030 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:29.030 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:29.030 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:29.030 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:39:29.030 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 06ff4094-7a1d-4124-9bee-a7f0e8f19ec5 00:39:29.288 request: 00:39:29.288 { 00:39:29.288 "uuid": "06ff4094-7a1d-4124-9bee-a7f0e8f19ec5", 00:39:29.288 "method": "bdev_lvol_get_lvstores", 00:39:29.288 "req_id": 1 00:39:29.288 } 00:39:29.288 Got JSON-RPC error response 00:39:29.288 response: 00:39:29.288 { 00:39:29.288 "code": -19, 00:39:29.288 "message": "No such device" 00:39:29.288 } 00:39:29.288 09:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:39:29.288 09:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:29.288 09:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:29.288 09:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:29.288 09:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:39:29.546 aio_bdev 00:39:29.546 09:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 5d64ca39-650f-4e5e-9a46-1b3539473624 00:39:29.546 09:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=5d64ca39-650f-4e5e-9a46-1b3539473624 00:39:29.546 09:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:39:29.546 09:39:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:39:29.546 09:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:39:29.546 09:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:39:29.546 09:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:39:29.804 09:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 5d64ca39-650f-4e5e-9a46-1b3539473624 -t 2000 00:39:30.063 [ 00:39:30.063 { 00:39:30.063 "name": "5d64ca39-650f-4e5e-9a46-1b3539473624", 00:39:30.063 "aliases": [ 00:39:30.063 "lvs/lvol" 00:39:30.063 ], 00:39:30.063 "product_name": "Logical Volume", 00:39:30.063 "block_size": 4096, 00:39:30.063 "num_blocks": 38912, 00:39:30.063 "uuid": "5d64ca39-650f-4e5e-9a46-1b3539473624", 00:39:30.063 "assigned_rate_limits": { 00:39:30.063 "rw_ios_per_sec": 0, 00:39:30.063 "rw_mbytes_per_sec": 0, 00:39:30.063 "r_mbytes_per_sec": 0, 00:39:30.063 "w_mbytes_per_sec": 0 00:39:30.063 }, 00:39:30.063 "claimed": false, 00:39:30.063 "zoned": false, 00:39:30.063 "supported_io_types": { 00:39:30.063 "read": true, 00:39:30.063 "write": true, 00:39:30.063 "unmap": true, 00:39:30.063 "flush": false, 00:39:30.063 "reset": true, 00:39:30.063 "nvme_admin": false, 00:39:30.063 "nvme_io": false, 00:39:30.063 "nvme_io_md": false, 00:39:30.063 "write_zeroes": true, 00:39:30.063 "zcopy": false, 00:39:30.063 "get_zone_info": false, 00:39:30.063 "zone_management": false, 00:39:30.063 "zone_append": false, 00:39:30.063 "compare": false, 00:39:30.063 "compare_and_write": false, 00:39:30.063 "abort": false, 00:39:30.063 "seek_hole": true, 00:39:30.063 "seek_data": true, 00:39:30.063 "copy": false, 00:39:30.063 "nvme_iov_md": false 00:39:30.063 }, 00:39:30.063 "driver_specific": { 00:39:30.063 "lvol": { 00:39:30.063 "lvol_store_uuid": "06ff4094-7a1d-4124-9bee-a7f0e8f19ec5", 00:39:30.063 "base_bdev": "aio_bdev", 00:39:30.063 "thin_provision": false, 00:39:30.063 "num_allocated_clusters": 38, 00:39:30.063 "snapshot": false, 00:39:30.063 "clone": false, 00:39:30.063 "esnap_clone": false 00:39:30.063 } 00:39:30.063 } 00:39:30.063 } 00:39:30.063 ] 00:39:30.063 09:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:39:30.063 09:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 06ff4094-7a1d-4124-9bee-a7f0e8f19ec5 00:39:30.063 09:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:39:30.322 09:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:39:30.322 09:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 06ff4094-7a1d-4124-9bee-a7f0e8f19ec5 00:39:30.322 09:39:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:39:30.889 09:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:39:30.889 09:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 5d64ca39-650f-4e5e-9a46-1b3539473624 00:39:30.889 09:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 06ff4094-7a1d-4124-9bee-a7f0e8f19ec5 00:39:31.455 09:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:39:31.714 09:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:39:31.714 00:39:31.714 real 0m22.082s 00:39:31.714 user 0m39.597s 00:39:31.714 sys 0m4.591s 00:39:31.714 09:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:31.714 09:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:39:31.714 ************************************ 00:39:31.714 END TEST lvs_grow_dirty 00:39:31.714 ************************************ 00:39:31.714 09:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:39:31.714 09:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:39:31.714 09:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:39:31.714 09:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:39:31.714 09:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:39:31.714 09:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:39:31.714 09:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:39:31.714 09:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:39:31.714 09:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:39:31.714 nvmf_trace.0 00:39:31.714 09:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:39:31.714 09:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:39:31.714 09:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:31.714 09:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
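The recovery behaviour the dirty variant checks, condensed into a sketch: loading the same AIO file into the restarted target triggers blobstore recovery and the volume comes back with its 38 allocated clusters; deleting the AIO bdev hot-removes the lvstore, so the lookup must fail with -19 until the bdev is re-created. The teardown at the end mirrors the clean variant.

rpc.py bdev_aio_create ./aio_bdev aio_bdev 4096        # "Performing recovery on blobstore" in the target log
rpc.py bdev_get_bdevs -b "$lvol" -t 2000               # wait up to 2000 ms for lvs/lvol to reappear
rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'        # still 61
rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'  # still 99

rpc.py bdev_aio_delete aio_bdev                        # hot-removes the lvstore
rpc.py bdev_lvol_get_lvstores -u "$lvs" \
  && echo "lookup unexpectedly succeeded" >&2          # expected: error -19, "No such device"
rpc.py bdev_aio_create ./aio_bdev aio_bdev 4096        # recover once more
rpc.py bdev_get_bdevs -b "$lvol" -t 2000

# Teardown, as at the end of the test above
rpc.py bdev_lvol_delete "$lvol"
rpc.py bdev_lvol_delete_lvstore -u "$lvs"
rpc.py bdev_aio_delete aio_bdev
rm -f ./aio_bdev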
00:39:31.714 09:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:31.714 09:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:39:31.714 09:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:31.714 09:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:31.714 rmmod nvme_tcp 00:39:31.714 rmmod nvme_fabrics 00:39:31.714 rmmod nvme_keyring 00:39:31.714 09:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:31.714 09:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:39:31.714 09:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:39:31.714 09:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 3169765 ']' 00:39:31.714 09:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 3169765 00:39:31.714 09:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 3169765 ']' 00:39:31.714 09:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 3169765 00:39:31.714 09:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:39:31.714 09:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:31.714 09:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3169765 00:39:31.714 09:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:31.714 09:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:31.714 09:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3169765' 00:39:31.714 killing process with pid 3169765 00:39:31.714 09:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 3169765 00:39:31.714 09:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 3169765 00:39:33.089 09:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:33.089 09:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:33.089 09:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:33.089 09:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:39:33.089 09:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:39:33.089 09:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:33.089 09:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:39:33.089 09:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:33.089 09:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:33.089 09:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:33.089 09:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:33.089 09:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:34.991 09:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:34.991 00:39:34.991 real 0m48.461s 00:39:34.991 user 1m1.868s 00:39:34.991 sys 0m8.649s 00:39:34.991 09:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:34.991 09:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:39:34.991 ************************************ 00:39:34.991 END TEST nvmf_lvs_grow 00:39:34.991 ************************************ 00:39:34.991 09:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:39:34.991 09:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:34.991 09:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:34.991 09:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:34.991 ************************************ 00:39:34.991 START TEST nvmf_bdev_io_wait 00:39:34.991 ************************************ 00:39:34.991 09:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:39:34.991 * Looking for test storage... 
00:39:34.991 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:34.991 09:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:34.991 09:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:39:34.991 09:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:35.250 09:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:35.250 09:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:35.250 09:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:35.250 09:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:35.250 09:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:39:35.250 09:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:39:35.250 09:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:39:35.250 09:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:39:35.250 09:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:39:35.250 09:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:39:35.250 09:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:39:35.250 09:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:35.250 09:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:39:35.250 09:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:39:35.250 09:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:35.250 09:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:35.250 09:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:39:35.250 09:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:39:35.250 09:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:35.250 09:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:39:35.250 09:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:39:35.250 09:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:39:35.250 09:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:39:35.250 09:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:35.250 09:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:39:35.250 09:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:39:35.250 09:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:35.250 09:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:35.250 09:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:39:35.250 09:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:35.250 09:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:35.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:35.250 --rc genhtml_branch_coverage=1 00:39:35.250 --rc genhtml_function_coverage=1 00:39:35.250 --rc genhtml_legend=1 00:39:35.250 --rc geninfo_all_blocks=1 00:39:35.250 --rc geninfo_unexecuted_blocks=1 00:39:35.250 00:39:35.250 ' 00:39:35.250 09:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:35.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:35.250 --rc genhtml_branch_coverage=1 00:39:35.250 --rc genhtml_function_coverage=1 00:39:35.250 --rc genhtml_legend=1 00:39:35.250 --rc geninfo_all_blocks=1 00:39:35.250 --rc geninfo_unexecuted_blocks=1 00:39:35.250 00:39:35.250 ' 00:39:35.250 09:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:35.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:35.250 --rc genhtml_branch_coverage=1 00:39:35.250 --rc genhtml_function_coverage=1 00:39:35.250 --rc genhtml_legend=1 00:39:35.250 --rc geninfo_all_blocks=1 00:39:35.250 --rc geninfo_unexecuted_blocks=1 00:39:35.250 00:39:35.250 ' 00:39:35.250 09:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:35.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:35.250 --rc genhtml_branch_coverage=1 00:39:35.250 --rc genhtml_function_coverage=1 00:39:35.250 --rc genhtml_legend=1 00:39:35.250 --rc geninfo_all_blocks=1 00:39:35.250 --rc 
geninfo_unexecuted_blocks=1 00:39:35.250 00:39:35.250 ' 00:39:35.250 09:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:35.250 09:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:39:35.250 09:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:35.250 09:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:35.250 09:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:35.250 09:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:35.250 09:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:35.250 09:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:35.250 09:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:35.250 09:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:35.250 09:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:35.250 09:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:35.250 09:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:35.251 09:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:35.251 09:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:35.251 09:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:35.251 09:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:35.251 09:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:35.251 09:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:35.251 09:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:39:35.251 09:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:35.251 09:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:35.251 09:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:35.251 09:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:35.251 09:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:35.251 09:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:35.251 09:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:39:35.251 09:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:35.251 09:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:39:35.251 09:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:35.251 09:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:35.251 09:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:35.251 09:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:35.251 09:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:39:35.251 09:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:35.251 09:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:35.251 09:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:35.251 09:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:35.251 09:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:35.251 09:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:35.251 09:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:35.251 09:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:39:35.251 09:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:35.251 09:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:35.251 09:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:35.251 09:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:35.251 09:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:35.251 09:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:35.251 09:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:35.251 09:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:35.251 09:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:35.251 09:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:35.251 09:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:39:35.251 09:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:37.154 09:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:37.154 09:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:39:37.154 09:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:37.155 09:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:37.155 09:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:37.155 09:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:37.155 09:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:39:37.155 09:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:39:37.155 09:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:37.155 09:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:39:37.155 09:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:39:37.155 09:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:39:37.155 09:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:39:37.155 09:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:39:37.155 09:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:39:37.155 09:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:37.155 09:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:37.155 09:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:37.155 09:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:37.155 09:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:37.155 09:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:37.155 09:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:37.155 09:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:37.155 09:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:37.155 09:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:37.155 09:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:37.155 09:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:37.155 09:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:37.155 09:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:37.155 09:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:37.155 09:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:37.155 09:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:37.155 09:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
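The block traced above is nvmf/common.sh classifying candidate NICs purely by PCI vendor:device ID before anything is configured: Intel E810 ports (0x8086:0x1592 and 0x8086:0x159b), Intel X722 (0x8086:0x37d2) and a list of Mellanox ConnectX IDs. Because this run uses the TCP transport on E810 hardware, only the e810 array survives, and the per-device loop that follows walks those two ports. A condensed, illustrative sketch of that selection (pci_bus_cache and the transport/nics variables are stand-ins for values the trace shows already expanded, not the helper verbatim):

# Sketch only - mirrors the classification traced above.
# pci_bus_cache is assumed to map "vendor:device" -> PCI addresses (filled elsewhere by a bus scan).
declare -A pci_bus_cache
intel=0x8086 mellanox=0x15b3
transport=tcp nics=e810                                            # literals as they appear expanded in the trace
e810=(${pci_bus_cache["$intel:0x1592"]} ${pci_bus_cache["$intel:0x159b"]})
x722=(${pci_bus_cache["$intel:0x37d2"]})
mlx=(${pci_bus_cache["$mellanox:0x1017"]} ${pci_bus_cache["$mellanox:0x1019"]})   # plus the other ConnectX IDs
pci_devs=("${e810[@]}")                                            # E810 considered first
[[ $transport == rdma ]] && pci_devs+=("${x722[@]}" "${mlx[@]}")   # RDMA-only parts are skipped for tcp
[[ $nics == e810 ]] && pci_devs=("${e810[@]}")                     # this job keeps only the E810 ports
((${#pci_devs[@]} > 0)) || exit 1                                  # the (( 2 == 0 )) guard seen above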
00:39:37.155 09:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:37.155 09:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:37.155 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:37.155 09:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:37.155 09:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:37.155 09:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:37.155 09:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:37.155 09:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:37.155 09:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:37.155 09:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:37.155 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:37.155 09:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:37.155 09:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:37.155 09:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:37.155 09:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:37.155 09:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:37.155 09:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:37.155 09:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:37.155 09:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:37.155 09:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:37.155 09:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:37.155 09:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:37.155 09:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:37.155 09:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:37.155 09:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:37.155 09:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:37.155 09:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:37.155 Found net devices under 0000:0a:00.0: cvl_0_0 00:39:37.155 
09:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:37.155 09:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:37.155 09:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:37.155 09:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:37.155 09:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:37.155 09:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:37.155 09:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:37.155 09:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:37.155 09:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:37.155 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:37.155 09:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:37.155 09:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:37.155 09:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:39:37.155 09:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:37.155 09:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:37.155 09:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:37.155 09:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:37.155 09:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:37.155 09:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:37.155 09:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:37.155 09:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:37.155 09:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:37.155 09:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:37.155 09:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:37.155 09:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:37.155 09:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:37.155 09:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:37.155 09:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:37.155 09:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:37.155 09:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:37.155 09:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:37.155 09:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:37.155 09:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:37.155 09:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:37.155 09:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:37.155 09:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:37.155 09:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:37.155 09:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:37.156 09:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:37.156 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:37.156 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:39:37.156 00:39:37.156 --- 10.0.0.2 ping statistics --- 00:39:37.156 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:37.156 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:39:37.156 09:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:37.156 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:37.156 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:39:37.156 00:39:37.156 --- 10.0.0.1 ping statistics --- 00:39:37.156 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:37.156 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:39:37.156 09:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:37.156 09:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:39:37.156 09:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:37.156 09:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:37.156 09:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:37.156 09:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:37.156 09:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:37.156 09:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:37.156 09:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:37.156 09:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:39:37.156 09:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:37.156 09:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:37.156 09:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:37.156 09:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=3172540 00:39:37.156 09:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:39:37.156 09:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 3172540 00:39:37.156 09:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 3172540 ']' 00:39:37.156 09:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:37.156 09:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:37.156 09:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:37.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
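The network plumbing traced above (nvmf_tcp_init) gives the test a back-to-back NVMe/TCP link built from the two discovered E810 ports: cvl_0_0 is moved into the freshly created cvl_0_0_ns_spdk namespace and addressed 10.0.0.2 (the target side), cvl_0_1 stays in the default namespace as the initiator with 10.0.0.1, an iptables rule tagged SPDK_NVMF admits TCP port 4420 so the teardown can strip exactly that rule later, and the two one-packet pings confirm reachability in both directions. Condensed from the trace (needs root and the same cvl_* interface names):

ip netns add cvl_0_0_ns_spdk                        # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move one E810 port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator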
00:39:37.156 09:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:37.156 09:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:37.414 [2024-11-17 09:39:42.180741] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:37.414 [2024-11-17 09:39:42.183193] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:39:37.414 [2024-11-17 09:39:42.183301] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:37.414 [2024-11-17 09:39:42.323527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:37.673 [2024-11-17 09:39:42.449374] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:37.673 [2024-11-17 09:39:42.449447] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:37.673 [2024-11-17 09:39:42.449472] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:37.673 [2024-11-17 09:39:42.449491] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:37.673 [2024-11-17 09:39:42.449511] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:37.673 [2024-11-17 09:39:42.452055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:37.673 [2024-11-17 09:39:42.452128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:37.673 [2024-11-17 09:39:42.452194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:37.673 [2024-11-17 09:39:42.452203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:37.673 [2024-11-17 09:39:42.452886] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
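The target itself is launched inside that namespace with interrupt mode enabled; the NOTICE lines above show the result: four reactors (core mask 0xF) and the app thread all come up in interrupt rather than polling mode, and the app idles at --wait-for-rpc until it is configured over /var/tmp/spdk.sock. The nvmfappstart traced above boils down to:

ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc &
nvmfpid=$!
# --wait-for-rpc defers subsystem initialization, so options that must be set
# before init (bdev_set_options below) can still be issued over the RPC socket.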
00:39:38.239 09:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:38.239 09:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:39:38.239 09:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:38.239 09:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:38.239 09:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:38.239 09:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:38.239 09:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:39:38.239 09:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:38.239 09:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:38.239 09:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:38.240 09:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:39:38.240 09:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:38.240 09:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:38.498 [2024-11-17 09:39:43.436915] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:38.498 [2024-11-17 09:39:43.438011] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:38.498 [2024-11-17 09:39:43.439099] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:39:38.498 [2024-11-17 09:39:43.440137] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
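With the socket up, the target is configured over JSON-RPC: bdev_set_options first (presumably a deliberately tiny bdev_io pool so this test's io-wait path gets exercised), then framework_start_init, at which point the four nvmf poll-group threads above switch to interrupt mode, followed below by a TCP transport, a 64 MiB / 512-byte-block Malloc bdev, subsystem cnode1, its namespace and a listener on 10.0.0.2:4420. rpc_cmd is effectively the test suite's wrapper for issuing these calls to /var/tmp/spdk.sock; driven by hand with scripts/rpc.py the same sequence would look roughly like this sketch:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc bdev_set_options -p 5 -c 1                     # flags exactly as traced above
$rpc framework_start_init                           # finish the deferred startup
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0           # MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420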
00:39:38.498 09:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:38.498 09:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:38.498 09:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:38.498 09:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:38.498 [2024-11-17 09:39:43.445172] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:38.498 09:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:38.498 09:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:39:38.498 09:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:38.498 09:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:38.757 Malloc0 00:39:38.757 09:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:38.757 09:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:39:38.757 09:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:38.757 09:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:38.757 09:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:38.757 09:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:38.757 09:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:38.757 09:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:38.757 09:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:38.757 09:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:38.757 09:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:38.757 09:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:38.757 [2024-11-17 09:39:43.565453] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:38.757 09:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:38.757 09:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3172704 00:39:38.757 09:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3172706 00:39:38.757 09:39:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:39:38.757 09:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:39:38.757 09:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:39:38.757 09:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:39:38.757 09:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:38.757 09:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:38.757 { 00:39:38.757 "params": { 00:39:38.757 "name": "Nvme$subsystem", 00:39:38.757 "trtype": "$TEST_TRANSPORT", 00:39:38.757 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:38.757 "adrfam": "ipv4", 00:39:38.757 "trsvcid": "$NVMF_PORT", 00:39:38.757 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:38.757 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:38.757 "hdgst": ${hdgst:-false}, 00:39:38.757 "ddgst": ${ddgst:-false} 00:39:38.757 }, 00:39:38.757 "method": "bdev_nvme_attach_controller" 00:39:38.757 } 00:39:38.757 EOF 00:39:38.757 )") 00:39:38.758 09:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:39:38.758 09:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:39:38.758 09:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3172708 00:39:38.758 09:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:39:38.758 09:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:39:38.758 09:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:38.758 09:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:38.758 { 00:39:38.758 "params": { 00:39:38.758 "name": "Nvme$subsystem", 00:39:38.758 "trtype": "$TEST_TRANSPORT", 00:39:38.758 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:38.758 "adrfam": "ipv4", 00:39:38.758 "trsvcid": "$NVMF_PORT", 00:39:38.758 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:38.758 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:38.758 "hdgst": ${hdgst:-false}, 00:39:38.758 "ddgst": ${ddgst:-false} 00:39:38.758 }, 00:39:38.758 "method": "bdev_nvme_attach_controller" 00:39:38.758 } 00:39:38.758 EOF 00:39:38.758 )") 00:39:38.758 09:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:39:38.758 09:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:39:38.758 09:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # 
UNMAP_PID=3172711 00:39:38.758 09:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:39:38.758 09:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:39:38.758 09:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:39:38.758 09:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:39:38.758 09:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:38.758 09:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:38.758 { 00:39:38.758 "params": { 00:39:38.758 "name": "Nvme$subsystem", 00:39:38.758 "trtype": "$TEST_TRANSPORT", 00:39:38.758 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:38.758 "adrfam": "ipv4", 00:39:38.758 "trsvcid": "$NVMF_PORT", 00:39:38.758 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:38.758 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:38.758 "hdgst": ${hdgst:-false}, 00:39:38.758 "ddgst": ${ddgst:-false} 00:39:38.758 }, 00:39:38.758 "method": "bdev_nvme_attach_controller" 00:39:38.758 } 00:39:38.758 EOF 00:39:38.758 )") 00:39:38.758 09:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:39:38.758 09:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:39:38.758 09:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:39:38.758 09:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:39:38.758 09:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:39:38.758 09:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:38.758 09:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:38.758 { 00:39:38.758 "params": { 00:39:38.758 "name": "Nvme$subsystem", 00:39:38.758 "trtype": "$TEST_TRANSPORT", 00:39:38.758 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:38.758 "adrfam": "ipv4", 00:39:38.758 "trsvcid": "$NVMF_PORT", 00:39:38.758 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:38.758 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:38.758 "hdgst": ${hdgst:-false}, 00:39:38.758 "ddgst": ${ddgst:-false} 00:39:38.758 }, 00:39:38.758 "method": "bdev_nvme_attach_controller" 00:39:38.758 } 00:39:38.758 EOF 00:39:38.758 )") 00:39:38.758 09:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:39:38.758 09:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3172704 00:39:38.758 09:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:39:38.758 09:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:39:38.758 09:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:39:38.758 09:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:39:38.758 09:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:39:38.758 09:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:38.758 "params": { 00:39:38.758 "name": "Nvme1", 00:39:38.758 "trtype": "tcp", 00:39:38.758 "traddr": "10.0.0.2", 00:39:38.758 "adrfam": "ipv4", 00:39:38.758 "trsvcid": "4420", 00:39:38.758 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:38.758 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:38.758 "hdgst": false, 00:39:38.758 "ddgst": false 00:39:38.758 }, 00:39:38.758 "method": "bdev_nvme_attach_controller" 00:39:38.758 }' 00:39:38.758 09:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:39:38.758 09:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:39:38.758 09:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:38.758 "params": { 00:39:38.758 "name": "Nvme1", 00:39:38.758 "trtype": "tcp", 00:39:38.758 "traddr": "10.0.0.2", 00:39:38.758 "adrfam": "ipv4", 00:39:38.758 "trsvcid": "4420", 00:39:38.758 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:38.758 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:38.758 "hdgst": false, 00:39:38.758 "ddgst": false 00:39:38.758 }, 00:39:38.758 "method": "bdev_nvme_attach_controller" 00:39:38.758 }' 00:39:38.758 09:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:39:38.758 09:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:38.758 "params": { 00:39:38.758 "name": "Nvme1", 00:39:38.758 "trtype": "tcp", 00:39:38.758 "traddr": "10.0.0.2", 00:39:38.758 "adrfam": "ipv4", 00:39:38.758 "trsvcid": "4420", 00:39:38.758 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:38.758 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:38.758 "hdgst": false, 00:39:38.758 "ddgst": false 00:39:38.758 }, 00:39:38.758 "method": "bdev_nvme_attach_controller" 00:39:38.758 }' 00:39:38.758 09:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:39:38.758 09:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:38.758 "params": { 00:39:38.758 "name": "Nvme1", 00:39:38.758 "trtype": "tcp", 00:39:38.758 "traddr": "10.0.0.2", 00:39:38.758 "adrfam": "ipv4", 00:39:38.758 "trsvcid": "4420", 00:39:38.758 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:38.758 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:38.758 "hdgst": false, 00:39:38.758 "ddgst": false 00:39:38.758 }, 00:39:38.758 "method": "bdev_nvme_attach_controller" 00:39:38.758 }' 00:39:38.758 [2024-11-17 09:39:43.655719] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:39:38.758 [2024-11-17 09:39:43.655719] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:39:38.758 [2024-11-17 09:39:43.655719] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:39:38.758 [2024-11-17 09:39:43.655719] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
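Traffic generation is host-side: four bdevperf instances run concurrently, one per workload (write on core mask 0x10, read on 0x20, flush on 0x40, unmap on 0x80), each for 1 second at queue depth 128 with 4096-byte I/O and 256 MB of DPDK memory (-s 256). The --json /dev/fd/63 argument is consistent with the generated config being fed through bash process substitution rather than a temporary file, and the JSON printed above attaches a single controller, Nvme1, to nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420. A minimal sketch of one such invocation, assuming nvmf/common.sh has been sourced so gen_nvmf_target_json is available:

bdevperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
# write workload; the read/flush/unmap siblings differ only in -m, -i and -w
$bdevperf -m 0x10 -i 1 --json <(gen_nvmf_target_json) \
    -q 128 -o 4096 -w write -t 1 -s 256 &
WRITE_PID=$!
# ...the other three instances are started the same way...
wait "$WRITE_PID"      # the script later waits on each PID to collect the per-workload results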
00:39:38.758 [2024-11-17 09:39:43.655873] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-11-17 09:39:43.655874] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-11-17 09:39:43.655875] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 [2024-11-17 09:39:43.655878] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:39:38.758 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:39:38.758 --proc-type=auto ] 00:39:38.758 --proc-type=auto ] 00:39:39.017 [2024-11-17 09:39:43.905990] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:39.017 [2024-11-17 09:39:44.016192] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:39.017 [2024-11-17 09:39:44.027986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:39:39.275 [2024-11-17 09:39:44.113391] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:39.275 [2024-11-17 09:39:44.137674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:39:39.275 [2024-11-17 09:39:44.214364] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:39.275 [2024-11-17 09:39:44.236577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:39:39.533 [2024-11-17 09:39:44.337899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:39:39.792 Running I/O for 1 seconds... 00:39:39.792 Running I/O for 1 seconds... 00:39:39.792 Running I/O for 1 seconds... 00:39:39.792 Running I/O for 1 seconds... 
00:39:40.727 146472.00 IOPS, 572.16 MiB/s 00:39:40.727 Latency(us) 00:39:40.727 [2024-11-17T08:39:45.740Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:40.727 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:39:40.727 Nvme1n1 : 1.00 146164.72 570.96 0.00 0.00 871.30 371.67 2051.03 00:39:40.727 [2024-11-17T08:39:45.740Z] =================================================================================================================== 00:39:40.727 [2024-11-17T08:39:45.740Z] Total : 146164.72 570.96 0.00 0.00 871.30 371.67 2051.03 00:39:40.727 6559.00 IOPS, 25.62 MiB/s 00:39:40.727 Latency(us) 00:39:40.727 [2024-11-17T08:39:45.740Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:40.727 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:39:40.727 Nvme1n1 : 1.01 6604.23 25.80 0.00 0.00 19261.90 5995.33 22816.24 00:39:40.727 [2024-11-17T08:39:45.740Z] =================================================================================================================== 00:39:40.727 [2024-11-17T08:39:45.740Z] Total : 6604.23 25.80 0.00 0.00 19261.90 5995.33 22816.24 00:39:40.727 7078.00 IOPS, 27.65 MiB/s 00:39:40.727 Latency(us) 00:39:40.727 [2024-11-17T08:39:45.740Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:40.727 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:39:40.727 Nvme1n1 : 1.01 7141.93 27.90 0.00 0.00 17828.01 2366.58 26020.22 00:39:40.727 [2024-11-17T08:39:45.740Z] =================================================================================================================== 00:39:40.727 [2024-11-17T08:39:45.740Z] Total : 7141.93 27.90 0.00 0.00 17828.01 2366.58 26020.22 00:39:40.984 7066.00 IOPS, 27.60 MiB/s 00:39:40.985 Latency(us) 00:39:40.985 [2024-11-17T08:39:45.998Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:40.985 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:39:40.985 Nvme1n1 : 1.01 7146.13 27.91 0.00 0.00 17827.66 3835.07 27962.03 00:39:40.985 [2024-11-17T08:39:45.998Z] =================================================================================================================== 00:39:40.985 [2024-11-17T08:39:45.998Z] Total : 7146.13 27.91 0.00 0.00 17827.66 3835.07 27962.03 00:39:41.551 09:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3172706 00:39:41.551 09:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3172708 00:39:41.551 09:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3172711 00:39:41.551 09:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:41.551 09:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:41.551 09:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:41.551 09:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:41.551 09:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:39:41.551 09:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@46 -- # nvmftestfini 00:39:41.551 09:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:41.551 09:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:39:41.551 09:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:41.551 09:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:39:41.551 09:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:41.551 09:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:41.551 rmmod nvme_tcp 00:39:41.551 rmmod nvme_fabrics 00:39:41.551 rmmod nvme_keyring 00:39:41.551 09:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:41.551 09:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:39:41.551 09:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:39:41.551 09:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 3172540 ']' 00:39:41.551 09:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 3172540 00:39:41.551 09:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 3172540 ']' 00:39:41.551 09:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 3172540 00:39:41.551 09:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:39:41.551 09:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:41.551 09:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3172540 00:39:41.551 09:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:41.551 09:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:41.551 09:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3172540' 00:39:41.551 killing process with pid 3172540 00:39:41.551 09:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 3172540 00:39:41.551 09:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 3172540 00:39:42.926 09:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:42.926 09:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:42.926 09:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:42.926 09:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:39:42.926 09:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 
00:39:42.926 09:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:42.926 09:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:39:42.926 09:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:42.926 09:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:42.926 09:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:42.926 09:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:42.926 09:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:44.845 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:44.845 00:39:44.845 real 0m9.694s 00:39:44.845 user 0m21.504s 00:39:44.845 sys 0m4.939s 00:39:44.845 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:44.845 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:44.845 ************************************ 00:39:44.845 END TEST nvmf_bdev_io_wait 00:39:44.845 ************************************ 00:39:44.845 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:39:44.845 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:44.846 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:44.846 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:44.846 ************************************ 00:39:44.846 START TEST nvmf_queue_depth 00:39:44.846 ************************************ 00:39:44.846 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:39:44.846 * Looking for test storage... 
00:39:44.846 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:44.846 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:44.846 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:39:44.846 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:44.846 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:44.846 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:44.846 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:44.846 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:44.846 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:39:44.846 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:39:44.846 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:39:44.846 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:39:44.846 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:39:44.846 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:39:44.846 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:39:44.846 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:44.846 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:39:44.846 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:39:44.846 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:44.846 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:44.846 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:39:44.846 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:39:44.846 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:44.846 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:39:44.846 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:39:44.846 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:39:44.846 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:39:44.846 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:44.846 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:39:44.846 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:39:44.846 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:44.846 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:44.846 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:39:44.846 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:44.846 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:44.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:44.846 --rc genhtml_branch_coverage=1 00:39:44.846 --rc genhtml_function_coverage=1 00:39:44.846 --rc genhtml_legend=1 00:39:44.846 --rc geninfo_all_blocks=1 00:39:44.846 --rc geninfo_unexecuted_blocks=1 00:39:44.846 00:39:44.846 ' 00:39:44.846 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:44.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:44.846 --rc genhtml_branch_coverage=1 00:39:44.846 --rc genhtml_function_coverage=1 00:39:44.846 --rc genhtml_legend=1 00:39:44.846 --rc geninfo_all_blocks=1 00:39:44.846 --rc geninfo_unexecuted_blocks=1 00:39:44.846 00:39:44.846 ' 00:39:44.846 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:44.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:44.846 --rc genhtml_branch_coverage=1 00:39:44.846 --rc genhtml_function_coverage=1 00:39:44.846 --rc genhtml_legend=1 00:39:44.846 --rc geninfo_all_blocks=1 00:39:44.846 --rc geninfo_unexecuted_blocks=1 00:39:44.846 00:39:44.846 ' 00:39:44.846 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:44.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:44.846 --rc genhtml_branch_coverage=1 00:39:44.846 --rc genhtml_function_coverage=1 00:39:44.846 --rc genhtml_legend=1 00:39:44.846 --rc geninfo_all_blocks=1 00:39:44.846 --rc 
geninfo_unexecuted_blocks=1 00:39:44.846 00:39:44.846 ' 00:39:44.846 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:44.846 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:39:44.846 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:44.846 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:44.846 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:44.846 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:44.846 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:44.846 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:44.846 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:44.846 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:44.846 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:44.846 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:44.846 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:44.846 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:44.846 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:44.846 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:44.846 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:44.846 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:44.846 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:44.846 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:39:44.846 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:44.846 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:44.846 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:44.846 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:44.846 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:44.846 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:44.846 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:39:44.847 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:44.847 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:39:44.847 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:44.847 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:44.847 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:44.847 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:44.847 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:39:44.847 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:44.847 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:44.847 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:44.847 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:44.847 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:44.847 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:39:44.847 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:39:44.847 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:39:44.847 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:39:44.847 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:44.847 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:44.847 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:44.847 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:44.847 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:44.847 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:44.847 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:44.847 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:44.847 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:44.847 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:44.847 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:39:44.847 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:46.747 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:46.747 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:39:46.747 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:46.747 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:46.747 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:46.747 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
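The build_nvmf_app_args trace above grows the target command line one conditional at a time. Condensed, and with the surrounding helpers assumed (NVMF_APP is taken to already name the nvmf_tgt binary, NVMF_APP_SHM_ID is 0 and NO_HUGE is empty in this run), it amounts to roughly the following sketch:

  # condensed sketch of the argument assembly traced above, not the literal nvmf/common.sh source
  NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)   # shm id (0 here) plus the 0xFFFF tracepoint group mask
  NVMF_APP+=("${NO_HUGE[@]}")                   # empty in this run, so effectively a no-op
  NVMF_APP+=(--interrupt-mode)                  # appended because the interrupt-mode check ('[' 1 -eq 1 ']') passed

The same flags reappear verbatim when the target is actually launched further down.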
00:39:46.747 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:46.747 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:39:46.747 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:46.747 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:39:46.747 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:39:46.747 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:39:46.747 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:39:46.747 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:39:46.747 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:39:46.747 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:46.747 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:46.747 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:46.747 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:46.747 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:46.747 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:46.747 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:46.747 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:46.747 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:46.747 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:46.747 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:46.747 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:46.747 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:46.747 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:46.747 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:46.747 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:46.747 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:46.747 09:39:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:46.747 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:46.747 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:46.747 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:47.006 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:47.006 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:47.006 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:47.006 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:47.006 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:47.006 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:47.006 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:47.006 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:47.006 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:47.006 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:47.006 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:47.006 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:47.006 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:47.006 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:47.006 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:47.006 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:47.006 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:47.006 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:47.006 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:47.006 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:47.006 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:47.006 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:47.006 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:47.006 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 
00:39:47.006 Found net devices under 0000:0a:00.0: cvl_0_0 00:39:47.006 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:47.006 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:47.006 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:47.006 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:47.006 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:47.006 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:47.006 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:47.006 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:47.006 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:47.006 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:47.006 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:47.006 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:47.006 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:39:47.006 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:47.006 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:47.006 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:47.006 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:47.006 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:47.006 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:47.006 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:47.006 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:47.006 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:47.006 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:47.006 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:47.006 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:47.006 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:47.006 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:47.006 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:47.006 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:47.006 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:47.006 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:47.006 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:47.006 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:47.006 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:47.006 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:47.006 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:47.006 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:47.006 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:47.006 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:47.006 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:47.006 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:39:47.006 00:39:47.006 --- 10.0.0.2 ping statistics --- 00:39:47.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:47.006 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:39:47.006 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:47.006 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:47.006 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.069 ms 00:39:47.006 00:39:47.006 --- 10.0.0.1 ping statistics --- 00:39:47.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:47.006 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:39:47.006 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:47.006 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:39:47.007 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:47.007 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:47.007 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:47.007 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:47.007 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:47.007 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:47.007 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:47.007 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:39:47.007 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:47.007 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:47.007 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:47.007 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=3175182 00:39:47.007 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:39:47.007 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 3175182 00:39:47.007 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3175182 ']' 00:39:47.007 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:47.007 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:47.007 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:47.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
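Before the target comes up, nvmf_tcp_init splits the two E810 ports found above across a network namespace so that one port can act as the target and the other as the initiator over a real link. A minimal sketch of the same steps, with the interface names and 10.0.0.0/24 addresses taken from this run, is:

  # condensed from the ip/iptables commands traced above; assumes cvl_0_0 and cvl_0_1 exist
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-side port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                             # sanity check across the link

With the pings succeeding in both directions, nvmfappstart then launches nvmf_tgt inside cvl_0_0_ns_spdk with the interrupt-mode arguments assembled earlier.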
00:39:47.007 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:47.007 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:47.265 [2024-11-17 09:39:52.017860] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:47.265 [2024-11-17 09:39:52.020344] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:39:47.265 [2024-11-17 09:39:52.020478] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:47.265 [2024-11-17 09:39:52.174373] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:47.522 [2024-11-17 09:39:52.310420] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:47.522 [2024-11-17 09:39:52.310499] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:47.522 [2024-11-17 09:39:52.310523] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:47.522 [2024-11-17 09:39:52.310542] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:47.522 [2024-11-17 09:39:52.310561] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:47.522 [2024-11-17 09:39:52.312186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:47.781 [2024-11-17 09:39:52.682347] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:47.781 [2024-11-17 09:39:52.682789] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
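Because the target was started with -e 0xFFFF, every tracepoint group is active, and the app_setup_trace notices above spell out how the trace could be inspected while this instance (shm id 0) is still running, for example:

  # taken from the notices above; both assume the nvmf_tgt with shm id 0 is still alive
  spdk_trace -s nvmf -i 0              # snapshot of runtime events for this instance
  cp /dev/shm/nvmf_trace.0 /tmp/       # or keep the shared-memory trace file for offline analysis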
00:39:48.039 09:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:48.039 09:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:39:48.039 09:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:48.039 09:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:48.039 09:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:48.039 09:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:48.039 09:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:48.039 09:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:48.039 09:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:48.039 [2024-11-17 09:39:52.997305] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:48.039 09:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:48.039 09:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:39:48.039 09:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:48.039 09:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:48.297 Malloc0 00:39:48.298 09:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:48.298 09:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:39:48.298 09:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:48.298 09:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:48.298 09:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:48.298 09:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:48.298 09:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:48.298 09:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:48.298 09:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:48.298 09:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:48.298 09:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 
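Stripped of the xtrace noise, the rpc_cmd sequence above provisions one TCP transport, a RAM-backed bdev, and a subsystem that exposes it on the target-side address. Assuming rpc_cmd is the usual thin wrapper around scripts/rpc.py talking to the target's default /var/tmp/spdk.sock, the same steps look roughly like this:

  # equivalent rpc.py invocations for the provisioning traced above (a sketch, not the test's own helper);
  # flags are copied verbatim from the trace
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0                        # 64 MB malloc bdev, 512-byte blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # expose Malloc0 as a namespace
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420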
00:39:48.298 09:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:48.298 [2024-11-17 09:39:53.117504] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:48.298 09:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:48.298 09:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3175337 00:39:48.298 09:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:39:48.298 09:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3175337 /var/tmp/bdevperf.sock 00:39:48.298 09:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3175337 ']' 00:39:48.298 09:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:39:48.298 09:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:39:48.298 09:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:48.298 09:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:39:48.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:39:48.298 09:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:48.298 09:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:48.298 [2024-11-17 09:39:53.204923] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:39:48.298 [2024-11-17 09:39:53.205068] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3175337 ] 00:39:48.556 [2024-11-17 09:39:53.362283] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:48.556 [2024-11-17 09:39:53.490064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:49.490 09:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:49.490 09:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:39:49.490 09:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:39:49.490 09:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:49.490 09:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:49.490 NVMe0n1 00:39:49.490 09:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:49.490 09:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:39:49.490 Running I/O for 10 seconds... 00:39:51.437 5389.00 IOPS, 21.05 MiB/s [2024-11-17T08:39:57.382Z] 5641.50 IOPS, 22.04 MiB/s [2024-11-17T08:39:58.759Z] 5802.67 IOPS, 22.67 MiB/s [2024-11-17T08:39:59.693Z] 5888.00 IOPS, 23.00 MiB/s [2024-11-17T08:40:00.626Z] 5937.40 IOPS, 23.19 MiB/s [2024-11-17T08:40:01.559Z] 5973.00 IOPS, 23.33 MiB/s [2024-11-17T08:40:02.491Z] 5987.71 IOPS, 23.39 MiB/s [2024-11-17T08:40:03.425Z] 6000.38 IOPS, 23.44 MiB/s [2024-11-17T08:40:04.798Z] 6029.11 IOPS, 23.55 MiB/s [2024-11-17T08:40:04.798Z] 6036.80 IOPS, 23.58 MiB/s 00:39:59.785 Latency(us) 00:39:59.785 [2024-11-17T08:40:04.798Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:59.785 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:39:59.785 Verification LBA range: start 0x0 length 0x4000 00:39:59.785 NVMe0n1 : 10.15 6047.80 23.62 0.00 0.00 168385.47 27379.48 100973.99 00:39:59.785 [2024-11-17T08:40:04.798Z] =================================================================================================================== 00:39:59.785 [2024-11-17T08:40:04.798Z] Total : 6047.80 23.62 0.00 0.00 168385.47 27379.48 100973.99 00:39:59.785 { 00:39:59.785 "results": [ 00:39:59.785 { 00:39:59.785 "job": "NVMe0n1", 00:39:59.785 "core_mask": "0x1", 00:39:59.785 "workload": "verify", 00:39:59.785 "status": "finished", 00:39:59.785 "verify_range": { 00:39:59.785 "start": 0, 00:39:59.785 "length": 16384 00:39:59.785 }, 00:39:59.785 "queue_depth": 1024, 00:39:59.785 "io_size": 4096, 00:39:59.785 "runtime": 10.151127, 00:39:59.785 "iops": 6047.801391904564, 00:39:59.785 "mibps": 23.624224187127204, 00:39:59.785 "io_failed": 0, 00:39:59.785 "io_timeout": 0, 00:39:59.785 "avg_latency_us": 168385.46838174114, 00:39:59.785 "min_latency_us": 27379.484444444446, 00:39:59.785 "max_latency_us": 100973.98518518519 00:39:59.785 } 
00:39:59.785 ], 00:39:59.785 "core_count": 1 00:39:59.785 } 00:39:59.785 09:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3175337 00:39:59.785 09:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3175337 ']' 00:39:59.785 09:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3175337 00:39:59.785 09:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:39:59.785 09:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:59.785 09:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3175337 00:39:59.785 09:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:59.785 09:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:59.785 09:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3175337' 00:39:59.785 killing process with pid 3175337 00:39:59.785 09:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3175337 00:39:59.785 Received shutdown signal, test time was about 10.000000 seconds 00:39:59.785 00:39:59.785 Latency(us) 00:39:59.785 [2024-11-17T08:40:04.798Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:59.785 [2024-11-17T08:40:04.798Z] =================================================================================================================== 00:39:59.785 [2024-11-17T08:40:04.798Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:59.785 09:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3175337 00:40:00.719 09:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:40:00.719 09:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:40:00.719 09:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:00.719 09:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:40:00.719 09:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:00.719 09:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:40:00.719 09:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:00.719 09:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:00.719 rmmod nvme_tcp 00:40:00.719 rmmod nvme_fabrics 00:40:00.719 rmmod nvme_keyring 00:40:00.719 09:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:00.719 09:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:40:00.719 09:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 
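The throughput and latency columns in the result block above are internally consistent: MiB/s is just the reported IOPS times the 4096-byte I/O size set by bdevperf's -o 4096, and with -q 1024 outstanding I/Os, Little's law puts the average latency near queue depth divided by IOPS. A quick check, with values copied from the JSON above:

  # consistency check on the reported numbers (nothing here is new measurement data)
  awk 'BEGIN { printf "%.2f MiB/s\n", 6047.80 * 4096 / (1024*1024) }'   # -> 23.62, matches "mibps"
  awk 'BEGIN { printf "%.0f us avg\n", 1024 / 6047.80 * 1e6 }'          # -> ~169317, close to the 168385 us reported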
00:40:00.719 09:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 3175182 ']' 00:40:00.719 09:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 3175182 00:40:00.719 09:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3175182 ']' 00:40:00.719 09:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3175182 00:40:00.719 09:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:40:00.719 09:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:00.719 09:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3175182 00:40:00.719 09:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:40:00.719 09:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:40:00.719 09:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3175182' 00:40:00.719 killing process with pid 3175182 00:40:00.719 09:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3175182 00:40:00.719 09:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3175182 00:40:02.093 09:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:02.093 09:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:02.093 09:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:02.093 09:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:40:02.093 09:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:40:02.093 09:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:40:02.093 09:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:02.093 09:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:02.093 09:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:02.093 09:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:02.093 09:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:02.093 09:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:03.995 09:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:03.995 00:40:03.995 real 0m19.344s 00:40:03.995 user 0m26.677s 00:40:03.995 sys 0m3.781s 00:40:03.995 09:40:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:03.995 09:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:03.995 ************************************ 00:40:03.995 END TEST nvmf_queue_depth 00:40:03.995 ************************************ 00:40:03.995 09:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:40:03.995 09:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:03.995 09:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:03.995 09:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:04.255 ************************************ 00:40:04.255 START TEST nvmf_target_multipath 00:40:04.255 ************************************ 00:40:04.255 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:40:04.255 * Looking for test storage... 00:40:04.255 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:04.255 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:40:04.255 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:40:04.255 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:40:04.255 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:40:04.255 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:04.255 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:04.255 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:04.255 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:40:04.255 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:40:04.255 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:40:04.255 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:40:04.255 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:40:04.255 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:40:04.255 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:40:04.255 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:04.255 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
scripts/common.sh@344 -- # case "$op" in 00:40:04.255 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:40:04.255 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:04.255 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:04.255 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:40:04.255 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:40:04.255 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:04.255 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:40:04.255 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:40:04.256 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:40:04.256 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:40:04.256 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:04.256 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:40:04.256 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:40:04.256 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:04.256 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:04.256 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:40:04.256 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:04.256 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:40:04.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:04.256 --rc genhtml_branch_coverage=1 00:40:04.256 --rc genhtml_function_coverage=1 00:40:04.256 --rc genhtml_legend=1 00:40:04.256 --rc geninfo_all_blocks=1 00:40:04.256 --rc geninfo_unexecuted_blocks=1 00:40:04.256 00:40:04.256 ' 00:40:04.256 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:40:04.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:04.256 --rc genhtml_branch_coverage=1 00:40:04.256 --rc genhtml_function_coverage=1 00:40:04.256 --rc genhtml_legend=1 00:40:04.256 --rc geninfo_all_blocks=1 00:40:04.256 --rc geninfo_unexecuted_blocks=1 00:40:04.256 00:40:04.256 ' 00:40:04.256 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:40:04.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:04.256 --rc genhtml_branch_coverage=1 00:40:04.256 --rc genhtml_function_coverage=1 00:40:04.256 --rc genhtml_legend=1 
00:40:04.256 --rc geninfo_all_blocks=1 00:40:04.256 --rc geninfo_unexecuted_blocks=1 00:40:04.256 00:40:04.256 ' 00:40:04.256 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:40:04.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:04.256 --rc genhtml_branch_coverage=1 00:40:04.256 --rc genhtml_function_coverage=1 00:40:04.256 --rc genhtml_legend=1 00:40:04.256 --rc geninfo_all_blocks=1 00:40:04.256 --rc geninfo_unexecuted_blocks=1 00:40:04.256 00:40:04.256 ' 00:40:04.256 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:04.256 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:40:04.256 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:04.256 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:04.256 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:04.256 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:04.256 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:04.256 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:04.256 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:04.256 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:04.256 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:04.256 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:04.256 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:04.256 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:04.256 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:04.256 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:04.256 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:04.256 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:04.256 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:04.256 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:40:04.256 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:04.256 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:04.256 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:04.256 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:04.256 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:04.256 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:04.256 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:40:04.256 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:04.256 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:40:04.256 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export 
NVMF_APP_SHM_ID 00:40:04.256 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:04.256 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:04.256 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:04.256 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:04.256 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:04.256 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:04.256 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:04.256 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:04.256 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:04.257 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:04.257 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:04.257 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:40:04.257 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:04.257 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:40:04.257 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:04.257 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:04.257 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:04.257 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:04.257 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:04.257 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:04.257 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:04.257 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:04.257 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:04.257 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:04.257 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:40:04.257 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:40:06.154 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:06.154 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:40:06.154 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:06.154 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:06.154 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:06.154 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:06.154 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:06.154 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:40:06.154 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:06.154 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:40:06.154 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:40:06.154 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:40:06.154 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:40:06.154 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:40:06.154 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:40:06.154 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:06.154 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:06.154 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:06.154 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:06.154 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:06.154 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:06.154 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:06.154 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:06.154 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:06.154 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:06.154 09:40:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:06.154 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:06.154 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:06.154 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:06.154 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:06.154 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:06.154 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:06.154 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:06.154 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:06.154 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:40:06.154 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:40:06.154 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:06.154 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:06.154 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:06.154 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:06.154 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:06.154 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:06.154 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:40:06.154 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:40:06.154 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:06.154 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:06.154 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:06.154 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:06.154 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:06.154 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:06.154 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:06.154 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:06.154 09:40:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:06.154 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:06.154 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:06.154 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:06.154 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:06.154 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:06.154 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:06.154 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:40:06.154 Found net devices under 0000:0a:00.0: cvl_0_0 00:40:06.154 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:06.154 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:06.155 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:06.155 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:06.155 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:06.155 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:06.155 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:06.155 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:06.155 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:40:06.155 Found net devices under 0000:0a:00.1: cvl_0_1 00:40:06.155 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:06.155 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:06.155 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:40:06.155 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:06.155 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:06.155 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:06.155 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:06.155 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:06.155 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:06.155 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:06.155 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:06.155 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:06.155 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:06.155 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:06.155 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:06.155 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:06.155 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:06.155 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:06.155 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:06.155 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:06.155 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:06.155 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:06.155 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:06.155 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:06.155 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:06.155 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:06.413 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:06.413 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:06.413 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:06.413 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:40:06.413 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.307 ms 00:40:06.413 00:40:06.413 --- 10.0.0.2 ping statistics --- 00:40:06.413 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:06.413 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:40:06.413 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:06.413 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:06.413 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:40:06.413 00:40:06.413 --- 10.0.0.1 ping statistics --- 00:40:06.413 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:06.413 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:40:06.413 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:06.413 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:40:06.413 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:06.413 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:06.413 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:06.413 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:06.413 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:06.413 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:06.413 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:06.413 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:40:06.413 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:40:06.413 only one NIC for nvmf test 00:40:06.413 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:40:06.413 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:06.413 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:40:06.413 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:06.413 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:40:06.413 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:06.413 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:06.413 rmmod nvme_tcp 00:40:06.413 rmmod nvme_fabrics 00:40:06.413 rmmod nvme_keyring 00:40:06.413 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:06.413 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:40:06.413 09:40:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:40:06.413 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:40:06.413 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:06.413 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:06.413 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:06.413 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:40:06.413 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:40:06.413 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:06.414 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:40:06.414 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:06.414 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:06.414 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:06.414 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:06.414 09:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:08.317 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:08.317 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:40:08.317 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:40:08.317 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:08.317 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:40:08.317 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:08.317 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:40:08.317 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:08.317 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:08.317 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:08.577 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:40:08.577 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:40:08.577 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:40:08.577 09:40:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:08.577 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:08.577 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:08.577 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:40:08.577 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:40:08.577 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:40:08.577 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:08.577 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:08.577 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:08.577 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:08.577 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:08.577 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:08.577 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:08.577 00:40:08.577 real 0m4.327s 00:40:08.577 user 0m0.852s 00:40:08.577 sys 0m1.462s 00:40:08.577 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:08.577 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:40:08.577 ************************************ 00:40:08.577 END TEST nvmf_target_multipath 00:40:08.577 ************************************ 00:40:08.577 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:40:08.577 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:08.577 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:08.577 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:08.577 ************************************ 00:40:08.577 START TEST nvmf_zcopy 00:40:08.577 ************************************ 00:40:08.577 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:40:08.577 * Looking for test storage... 
00:40:08.577 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:08.577 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:40:08.577 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:40:08.577 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:40:08.577 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:40:08.577 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:08.577 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:08.577 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:08.577 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:40:08.577 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:40:08.577 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:40:08.577 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:40:08.577 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:40:08.577 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:40:08.577 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:40:08.577 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:08.577 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:40:08.577 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:40:08.577 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:08.577 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:08.577 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:40:08.577 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:40:08.577 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:08.577 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:40:08.577 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:40:08.577 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:40:08.577 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:40:08.577 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:08.577 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:40:08.577 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:40:08.577 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:08.577 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:08.577 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:40:08.577 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:08.578 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:40:08.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:08.578 --rc genhtml_branch_coverage=1 00:40:08.578 --rc genhtml_function_coverage=1 00:40:08.578 --rc genhtml_legend=1 00:40:08.578 --rc geninfo_all_blocks=1 00:40:08.578 --rc geninfo_unexecuted_blocks=1 00:40:08.578 00:40:08.578 ' 00:40:08.578 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:40:08.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:08.578 --rc genhtml_branch_coverage=1 00:40:08.578 --rc genhtml_function_coverage=1 00:40:08.578 --rc genhtml_legend=1 00:40:08.578 --rc geninfo_all_blocks=1 00:40:08.578 --rc geninfo_unexecuted_blocks=1 00:40:08.578 00:40:08.578 ' 00:40:08.578 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:40:08.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:08.578 --rc genhtml_branch_coverage=1 00:40:08.578 --rc genhtml_function_coverage=1 00:40:08.578 --rc genhtml_legend=1 00:40:08.578 --rc geninfo_all_blocks=1 00:40:08.578 --rc geninfo_unexecuted_blocks=1 00:40:08.578 00:40:08.578 ' 00:40:08.578 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:40:08.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:08.578 --rc genhtml_branch_coverage=1 00:40:08.578 --rc genhtml_function_coverage=1 00:40:08.578 --rc genhtml_legend=1 00:40:08.578 --rc geninfo_all_blocks=1 00:40:08.578 --rc geninfo_unexecuted_blocks=1 00:40:08.578 00:40:08.578 ' 00:40:08.578 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:08.578 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:40:08.578 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:08.578 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:08.578 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:08.578 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:08.578 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:08.578 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:08.578 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:08.578 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:08.578 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:08.578 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:08.578 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:08.578 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:08.578 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:08.578 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:08.578 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:08.578 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:08.578 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:08.578 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:40:08.578 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:08.578 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:08.578 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:08.578 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:08.578 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:08.578 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:08.578 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:40:08.578 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:08.578 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:40:08.578 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:08.578 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:08.578 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:08.578 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:08.578 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:08.578 09:40:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:08.578 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:08.578 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:08.578 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:08.578 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:08.578 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:40:08.578 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:08.578 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:08.578 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:08.578 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:08.578 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:08.578 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:08.578 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:08.578 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:08.578 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:08.578 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:08.578 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:40:08.578 09:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:11.110 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:11.110 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:40:11.110 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:11.110 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:11.110 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:11.110 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:11.110 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:11.110 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:40:11.110 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:11.110 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:40:11.110 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:40:11.110 09:40:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:40:11.110 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:40:11.110 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:40:11.110 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:40:11.111 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:11.111 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:11.111 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:11.111 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:11.111 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:11.111 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:11.111 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:11.111 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:11.111 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:11.111 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:11.111 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:11.111 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:11.111 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:11.111 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:11.111 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:11.111 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:11.111 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:11.111 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:11.111 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:11.111 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:40:11.111 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:40:11.111 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:11.111 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:11.111 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:40:11.111 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:11.111 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:11.111 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:11.111 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:40:11.111 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:40:11.111 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:11.111 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:11.111 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:11.111 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:11.111 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:11.111 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:11.111 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:11.111 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:11.111 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:11.111 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:11.111 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:11.111 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:11.111 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:11.111 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:11.111 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:11.111 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:40:11.111 Found net devices under 0000:0a:00.0: cvl_0_0 00:40:11.111 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:11.111 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:11.111 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:11.111 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:11.111 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:11.111 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:11.111 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:11.111 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:11.111 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:40:11.111 Found net devices under 0000:0a:00.1: cvl_0_1 00:40:11.111 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:11.111 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:11.111 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:40:11.111 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:11.111 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:11.111 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:11.111 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:11.111 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:11.111 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:11.111 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:11.111 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:11.111 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:11.111 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:11.111 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:11.111 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:11.111 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:11.111 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:11.111 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:11.111 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:11.111 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:11.111 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:11.111 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:11.111 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:11.111 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:11.111 09:40:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:11.111 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:11.111 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:11.111 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:11.111 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:11.111 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:11.111 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.262 ms 00:40:11.111 00:40:11.111 --- 10.0.0.2 ping statistics --- 00:40:11.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:11.111 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:40:11.112 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:11.112 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:11.112 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:40:11.112 00:40:11.112 --- 10.0.0.1 ping statistics --- 00:40:11.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:11.112 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:40:11.112 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:11.112 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:40:11.112 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:11.112 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:11.112 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:11.112 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:11.112 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:11.112 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:11.112 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:11.112 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:40:11.112 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:11.112 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:11.112 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:11.112 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=3180652 00:40:11.112 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 
0xFFFF --interrupt-mode -m 0x2 00:40:11.112 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 3180652 00:40:11.112 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 3180652 ']' 00:40:11.112 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:11.112 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:11.112 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:11.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:11.112 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:11.112 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:11.112 [2024-11-17 09:40:15.843708] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:11.112 [2024-11-17 09:40:15.846041] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:40:11.112 [2024-11-17 09:40:15.846136] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:11.112 [2024-11-17 09:40:16.003673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:11.371 [2024-11-17 09:40:16.140726] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:11.371 [2024-11-17 09:40:16.140813] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:11.371 [2024-11-17 09:40:16.140844] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:11.371 [2024-11-17 09:40:16.140866] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:11.371 [2024-11-17 09:40:16.140890] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:11.371 [2024-11-17 09:40:16.142568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:11.630 [2024-11-17 09:40:16.516099] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:11.630 [2024-11-17 09:40:16.516577] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
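For reference, the nvmftestinit/nvmf_tcp_init trace above reduces to the following sequence. This is a condensed sketch of what the common.sh helpers just executed on this host; the cvl_0_0/cvl_0_1 interface names, the 10.0.0.0/24 addresses and the nvmf_tgt path are taken from this run, and the iptables comment string is abbreviated here.

  # put the target-side port into its own network namespace so initiator and target
  # traffic crosses the physical NIC pair (cvl_0_0 = target side, cvl_0_1 = initiator side)
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the NVMe/TCP port on the initiator-side interface; the comment tag lets the
  # cleanup path (iptables-save | grep -v SPDK_NVMF | iptables-restore) strip the rule later
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF'
  ping -c 1 10.0.0.2                                   # initiator -> target reachability
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator reachability
  modprobe nvme-tcp
  # the target app is then launched inside the namespace, core mask 0x2, interrupt mode
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &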
00:40:11.889 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:11.889 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:40:11.889 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:11.889 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:11.889 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:11.889 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:11.889 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:40:11.889 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:40:11.889 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:11.889 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:11.889 [2024-11-17 09:40:16.875683] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:11.889 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:11.889 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:40:11.889 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:11.889 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:11.889 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:11.889 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:11.889 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:11.889 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:11.889 [2024-11-17 09:40:16.891842] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:11.889 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:11.889 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:40:11.889 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:11.889 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:12.147 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:12.147 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:40:12.147 09:40:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:12.147 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:12.147 malloc0 00:40:12.147 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:12.147 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:40:12.147 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:12.147 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:12.147 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:12.147 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:40:12.147 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:40:12.147 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:40:12.147 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:40:12.147 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:12.147 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:12.147 { 00:40:12.147 "params": { 00:40:12.147 "name": "Nvme$subsystem", 00:40:12.147 "trtype": "$TEST_TRANSPORT", 00:40:12.147 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:12.147 "adrfam": "ipv4", 00:40:12.147 "trsvcid": "$NVMF_PORT", 00:40:12.147 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:12.147 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:12.147 "hdgst": ${hdgst:-false}, 00:40:12.147 "ddgst": ${ddgst:-false} 00:40:12.147 }, 00:40:12.147 "method": "bdev_nvme_attach_controller" 00:40:12.147 } 00:40:12.147 EOF 00:40:12.147 )") 00:40:12.147 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:40:12.147 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:40:12.147 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:40:12.147 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:12.147 "params": { 00:40:12.148 "name": "Nvme1", 00:40:12.148 "trtype": "tcp", 00:40:12.148 "traddr": "10.0.0.2", 00:40:12.148 "adrfam": "ipv4", 00:40:12.148 "trsvcid": "4420", 00:40:12.148 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:12.148 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:12.148 "hdgst": false, 00:40:12.148 "ddgst": false 00:40:12.148 }, 00:40:12.148 "method": "bdev_nvme_attach_controller" 00:40:12.148 }' 00:40:12.148 [2024-11-17 09:40:17.036526] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
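
Before the first bdevperf pass, the zcopy.sh trace above provisions the target through rpc_cmd: a TCP transport created with --zcopy (plus the -t tcp -o -c 0 options carried in NVMF_TRANSPORT_OPTS), a subsystem cnode1 that allows any host (-a), carries serial SPDK00000000000001 and caps namespaces at 10 (-m 10), listeners on 10.0.0.2:4420 for both cnode1 and discovery, a 32 MiB malloc bdev with 4096-byte blocks, and that bdev attached as namespace 1. Expressed as plain rpc.py calls this would look roughly like the sketch below; rpc_cmd in the test forwards the same arguments, and the RPC/NQN variables are shorthand introduced here.

# Equivalent provisioning via rpc.py (a sketch, not a verbatim extract of zcopy.sh).
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk.sock"
NQN=nqn.2016-06.io.spdk:cnode1

$RPC nvmf_create_transport -t tcp -o -c 0 --zcopy                    # TCP transport with zero-copy enabled
$RPC nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10     # any host, max 10 namespaces
$RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_malloc_create 32 4096 -b malloc0                           # 32 MiB bdev, 4096-byte blocks
$RPC nvmf_subsystem_add_ns "$NQN" malloc0 -n 1                       # attach malloc0 as NSID 1
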
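The gen_nvmf_target_json fragment printed above is what the first bdevperf pass consumes through --json /dev/fd/62, i.e. a process substitution in zcopy.sh. A rough equivalent using a temporary file is sketched below; the outer subsystems/bdev wrapper is reconstructed from how the generator and bdevperf are normally wired together, so treat it as an assumption rather than a verbatim dump, while the inner entry matches the printed config.

# Approximate config handed to the first bdevperf run (10 s verify, queue depth 128, 8 KiB I/O).
# The wrapper object is assumed; the bdev_nvme_attach_controller entry is the one printed above.
cat > /tmp/nvmf_zcopy_bdev.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
JSON

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    --json /tmp/nvmf_zcopy_bdev.json -t 10 -q 128 -w verify -o 8192
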
00:40:12.148 [2024-11-17 09:40:17.036653] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3180805 ] 00:40:12.405 [2024-11-17 09:40:17.184292] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:12.405 [2024-11-17 09:40:17.319471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:12.971 Running I/O for 10 seconds... 00:40:14.839 3945.00 IOPS, 30.82 MiB/s [2024-11-17T08:40:20.786Z] 4049.00 IOPS, 31.63 MiB/s [2024-11-17T08:40:22.161Z] 4056.00 IOPS, 31.69 MiB/s [2024-11-17T08:40:23.095Z] 4048.25 IOPS, 31.63 MiB/s [2024-11-17T08:40:24.030Z] 4054.20 IOPS, 31.67 MiB/s [2024-11-17T08:40:25.053Z] 4050.83 IOPS, 31.65 MiB/s [2024-11-17T08:40:25.988Z] 4052.86 IOPS, 31.66 MiB/s [2024-11-17T08:40:26.922Z] 4054.12 IOPS, 31.67 MiB/s [2024-11-17T08:40:27.856Z] 4051.78 IOPS, 31.65 MiB/s [2024-11-17T08:40:27.856Z] 4054.60 IOPS, 31.68 MiB/s 00:40:22.843 Latency(us) 00:40:22.843 [2024-11-17T08:40:27.856Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:22.843 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:40:22.843 Verification LBA range: start 0x0 length 0x1000 00:40:22.843 Nvme1n1 : 10.02 4055.47 31.68 0.00 0.00 31471.79 1322.86 42137.22 00:40:22.843 [2024-11-17T08:40:27.856Z] =================================================================================================================== 00:40:22.843 [2024-11-17T08:40:27.856Z] Total : 4055.47 31.68 0.00 0.00 31471.79 1322.86 42137.22 00:40:23.776 09:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3182231 00:40:23.776 09:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:40:23.776 09:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:23.776 09:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:40:23.776 09:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:40:23.776 09:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:40:23.776 09:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:40:23.776 09:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:23.776 09:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:23.776 { 00:40:23.776 "params": { 00:40:23.776 "name": "Nvme$subsystem", 00:40:23.776 "trtype": "$TEST_TRANSPORT", 00:40:23.776 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:23.776 "adrfam": "ipv4", 00:40:23.776 "trsvcid": "$NVMF_PORT", 00:40:23.776 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:23.776 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:23.776 "hdgst": ${hdgst:-false}, 00:40:23.776 "ddgst": ${ddgst:-false} 00:40:23.776 }, 00:40:23.776 "method": "bdev_nvme_attach_controller" 00:40:23.776 } 00:40:23.776 EOF 00:40:23.776 )") 00:40:23.776 09:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:40:23.776 
[2024-11-17 09:40:28.723604] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.776 [2024-11-17 09:40:28.723669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.776 09:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:40:23.776 09:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:40:23.776 09:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:23.776 "params": { 00:40:23.776 "name": "Nvme1", 00:40:23.776 "trtype": "tcp", 00:40:23.776 "traddr": "10.0.0.2", 00:40:23.776 "adrfam": "ipv4", 00:40:23.776 "trsvcid": "4420", 00:40:23.776 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:23.776 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:23.776 "hdgst": false, 00:40:23.776 "ddgst": false 00:40:23.776 }, 00:40:23.776 "method": "bdev_nvme_attach_controller" 00:40:23.776 }' 00:40:23.776 [2024-11-17 09:40:28.731484] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.776 [2024-11-17 09:40:28.731520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.776 [2024-11-17 09:40:28.739453] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.776 [2024-11-17 09:40:28.739483] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.776 [2024-11-17 09:40:28.747466] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.776 [2024-11-17 09:40:28.747497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.776 [2024-11-17 09:40:28.755483] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.776 [2024-11-17 09:40:28.755514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.776 [2024-11-17 09:40:28.763462] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.776 [2024-11-17 09:40:28.763498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.776 [2024-11-17 09:40:28.771446] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.776 [2024-11-17 09:40:28.771474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.776 [2024-11-17 09:40:28.779445] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.777 [2024-11-17 09:40:28.779474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.035 [2024-11-17 09:40:28.787447] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.035 [2024-11-17 09:40:28.787477] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.035 [2024-11-17 09:40:28.795475] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.035 [2024-11-17 09:40:28.795506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.035 [2024-11-17 09:40:28.803452] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.035 [2024-11-17 09:40:28.803481] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.035 [2024-11-17 09:40:28.803962] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
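
From here to the end of this excerpt the log is dominated by paired spdk_nvmf_subsystem_add_ns_ext / nvmf_rpc_ns_paused errors. They appear to be provoked deliberately: while the second bdevperf instance (perfpid=3182231, 5 seconds of 50/50 randrw at queue depth 128) runs against cnode1, the script keeps re-issuing nvmf_subsystem_add_ns for a namespace ID that is already attached, so each attempt pauses and resumes the subsystem and is then rejected with "Requested NSID 1 already in use". The point is to confirm that in-flight zero-copy I/O survives the pause/resume churn, not to actually add a namespace. A plausible reconstruction of that phase is sketched below; the exact loop body and redirections in zcopy.sh may differ, and the JSON file is the one from the previous sketch.

# Assumed shape of the namespace-churn phase behind the repeated
# "Requested NSID 1 already in use" / "Unable to add namespace" pairs.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk.sock"
NQN=nqn.2016-06.io.spdk:cnode1

# Second bdevperf pass: 5 s of 50/50 randrw while the duplicate add_ns loop below runs.
"$SPDK/build/examples/bdevperf" --json /tmp/nvmf_zcopy_bdev.json -t 5 -q 128 -w randrw -M 50 -o 8192 &
perfpid=$!

while kill -0 "$perfpid" 2> /dev/null; do
    # Fails every time (NSID 1 is taken), but each attempt still pauses/resumes the subsystem.
    $RPC nvmf_subsystem_add_ns "$NQN" malloc0 -n 1 &> /dev/null || true
done
wait "$perfpid"   # the randrw run must still complete cleanly despite the constant churn
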
00:40:24.035 [2024-11-17 09:40:28.804082] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3182231 ] 00:40:24.035 [2024-11-17 09:40:28.811462] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.035 [2024-11-17 09:40:28.811491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.035 [2024-11-17 09:40:28.819471] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.035 [2024-11-17 09:40:28.819501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.035 [2024-11-17 09:40:28.827468] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.035 [2024-11-17 09:40:28.827507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.035 [2024-11-17 09:40:28.835488] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.035 [2024-11-17 09:40:28.835530] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.035 [2024-11-17 09:40:28.843477] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.035 [2024-11-17 09:40:28.843506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.035 [2024-11-17 09:40:28.851481] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.035 [2024-11-17 09:40:28.851510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.035 [2024-11-17 09:40:28.859465] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.035 [2024-11-17 09:40:28.859495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.035 [2024-11-17 09:40:28.867439] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.035 [2024-11-17 09:40:28.867469] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.035 [2024-11-17 09:40:28.875456] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.035 [2024-11-17 09:40:28.875485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.035 [2024-11-17 09:40:28.883461] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.035 [2024-11-17 09:40:28.883491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.035 [2024-11-17 09:40:28.891460] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.035 [2024-11-17 09:40:28.891490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.035 [2024-11-17 09:40:28.899467] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.035 [2024-11-17 09:40:28.899496] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.035 [2024-11-17 09:40:28.907450] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.035 [2024-11-17 09:40:28.907480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.035 [2024-11-17 09:40:28.915444] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:40:24.035 [2024-11-17 09:40:28.915473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.035 [2024-11-17 09:40:28.923454] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.035 [2024-11-17 09:40:28.923484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.035 [2024-11-17 09:40:28.931459] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.035 [2024-11-17 09:40:28.931488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.035 [2024-11-17 09:40:28.939463] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.035 [2024-11-17 09:40:28.939493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.035 [2024-11-17 09:40:28.947481] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.035 [2024-11-17 09:40:28.947510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.035 [2024-11-17 09:40:28.950086] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:24.035 [2024-11-17 09:40:28.959467] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.035 [2024-11-17 09:40:28.959498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.035 [2024-11-17 09:40:28.967551] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.035 [2024-11-17 09:40:28.967602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.035 [2024-11-17 09:40:28.975542] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.035 [2024-11-17 09:40:28.975596] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.035 [2024-11-17 09:40:28.983452] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.035 [2024-11-17 09:40:28.983500] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.035 [2024-11-17 09:40:28.991472] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.035 [2024-11-17 09:40:28.991503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.035 [2024-11-17 09:40:28.999436] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.035 [2024-11-17 09:40:28.999466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.035 [2024-11-17 09:40:29.007470] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.035 [2024-11-17 09:40:29.007500] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.036 [2024-11-17 09:40:29.015449] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.036 [2024-11-17 09:40:29.015479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.036 [2024-11-17 09:40:29.023438] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.036 [2024-11-17 09:40:29.023469] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.036 [2024-11-17 09:40:29.031460] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:40:24.036 [2024-11-17 09:40:29.031489] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.036 [2024-11-17 09:40:29.039451] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.036 [2024-11-17 09:40:29.039480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.294 [2024-11-17 09:40:29.047453] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.294 [2024-11-17 09:40:29.047482] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.294 [2024-11-17 09:40:29.055465] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.294 [2024-11-17 09:40:29.055495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.294 [2024-11-17 09:40:29.063459] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.294 [2024-11-17 09:40:29.063488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.294 [2024-11-17 09:40:29.071466] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.294 [2024-11-17 09:40:29.071496] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.294 [2024-11-17 09:40:29.078945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:24.294 [2024-11-17 09:40:29.079464] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.294 [2024-11-17 09:40:29.079493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.294 [2024-11-17 09:40:29.087446] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.294 [2024-11-17 09:40:29.087475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.294 [2024-11-17 09:40:29.095542] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.294 [2024-11-17 09:40:29.095591] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.294 [2024-11-17 09:40:29.103587] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.294 [2024-11-17 09:40:29.103640] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.294 [2024-11-17 09:40:29.111451] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.294 [2024-11-17 09:40:29.111481] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.294 [2024-11-17 09:40:29.119467] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.294 [2024-11-17 09:40:29.119502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.294 [2024-11-17 09:40:29.127450] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.294 [2024-11-17 09:40:29.127486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.294 [2024-11-17 09:40:29.135464] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.294 [2024-11-17 09:40:29.135493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.294 [2024-11-17 09:40:29.143465] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.294 [2024-11-17 09:40:29.143494] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.294 [2024-11-17 09:40:29.151436] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.294 [2024-11-17 09:40:29.151466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.294 [2024-11-17 09:40:29.159472] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.294 [2024-11-17 09:40:29.159505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.294 [2024-11-17 09:40:29.167565] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.294 [2024-11-17 09:40:29.167621] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.294 [2024-11-17 09:40:29.175552] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.294 [2024-11-17 09:40:29.175605] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.294 [2024-11-17 09:40:29.183583] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.294 [2024-11-17 09:40:29.183660] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.294 [2024-11-17 09:40:29.191553] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.294 [2024-11-17 09:40:29.191611] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.294 [2024-11-17 09:40:29.199467] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.294 [2024-11-17 09:40:29.199504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.294 [2024-11-17 09:40:29.207467] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.294 [2024-11-17 09:40:29.207505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.294 [2024-11-17 09:40:29.215438] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.294 [2024-11-17 09:40:29.215467] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.294 [2024-11-17 09:40:29.223455] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.294 [2024-11-17 09:40:29.223485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.294 [2024-11-17 09:40:29.231449] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.294 [2024-11-17 09:40:29.231491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.294 [2024-11-17 09:40:29.239479] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.294 [2024-11-17 09:40:29.239511] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.294 [2024-11-17 09:40:29.247470] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.294 [2024-11-17 09:40:29.247500] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.294 [2024-11-17 09:40:29.255452] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.294 [2024-11-17 09:40:29.255482] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.294 [2024-11-17 09:40:29.263453] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.294 [2024-11-17 09:40:29.263483] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.295 [2024-11-17 09:40:29.271454] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.295 [2024-11-17 09:40:29.271484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.295 [2024-11-17 09:40:29.279432] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.295 [2024-11-17 09:40:29.279468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.295 [2024-11-17 09:40:29.287451] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.295 [2024-11-17 09:40:29.287481] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.295 [2024-11-17 09:40:29.295485] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.295 [2024-11-17 09:40:29.295514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.295 [2024-11-17 09:40:29.303443] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.295 [2024-11-17 09:40:29.303475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.552 [2024-11-17 09:40:29.311556] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.552 [2024-11-17 09:40:29.311609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.552 [2024-11-17 09:40:29.319537] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.552 [2024-11-17 09:40:29.319594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.552 [2024-11-17 09:40:29.327587] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.552 [2024-11-17 09:40:29.327660] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.552 [2024-11-17 09:40:29.335485] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.552 [2024-11-17 09:40:29.335515] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.553 [2024-11-17 09:40:29.343451] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.553 [2024-11-17 09:40:29.343480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.553 [2024-11-17 09:40:29.351474] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.553 [2024-11-17 09:40:29.351504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.553 [2024-11-17 09:40:29.359460] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.553 [2024-11-17 09:40:29.359489] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.553 [2024-11-17 09:40:29.367439] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.553 [2024-11-17 09:40:29.367468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.553 [2024-11-17 09:40:29.375464] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.553 [2024-11-17 09:40:29.375493] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.553 [2024-11-17 09:40:29.383441] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.553 [2024-11-17 09:40:29.383470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.553 [2024-11-17 09:40:29.391466] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.553 [2024-11-17 09:40:29.391496] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.553 [2024-11-17 09:40:29.399465] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.553 [2024-11-17 09:40:29.399494] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.553 [2024-11-17 09:40:29.407458] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.553 [2024-11-17 09:40:29.407488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.553 [2024-11-17 09:40:29.415466] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.553 [2024-11-17 09:40:29.415496] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.553 [2024-11-17 09:40:29.423461] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.553 [2024-11-17 09:40:29.423495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.553 [2024-11-17 09:40:29.431454] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.553 [2024-11-17 09:40:29.431494] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.553 [2024-11-17 09:40:29.439470] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.553 [2024-11-17 09:40:29.439504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.553 [2024-11-17 09:40:29.447441] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.553 [2024-11-17 09:40:29.447474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.553 [2024-11-17 09:40:29.455455] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.553 [2024-11-17 09:40:29.455487] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.553 [2024-11-17 09:40:29.463459] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.553 [2024-11-17 09:40:29.463493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.553 [2024-11-17 09:40:29.471475] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.553 [2024-11-17 09:40:29.471516] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.553 [2024-11-17 09:40:29.479457] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.553 [2024-11-17 09:40:29.479489] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.553 [2024-11-17 09:40:29.487650] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.553 [2024-11-17 09:40:29.487686] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.553 [2024-11-17 09:40:29.495502] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.553 [2024-11-17 09:40:29.495545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.553 [2024-11-17 09:40:29.503475] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.553 [2024-11-17 09:40:29.503510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.553 Running I/O for 5 seconds... 00:40:24.553 [2024-11-17 09:40:29.524789] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.553 [2024-11-17 09:40:29.524830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.553 [2024-11-17 09:40:29.540574] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.553 [2024-11-17 09:40:29.540612] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.553 [2024-11-17 09:40:29.556077] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.553 [2024-11-17 09:40:29.556113] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.811 [2024-11-17 09:40:29.571264] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.811 [2024-11-17 09:40:29.571302] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.811 [2024-11-17 09:40:29.588237] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.811 [2024-11-17 09:40:29.588274] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.811 [2024-11-17 09:40:29.603661] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.811 [2024-11-17 09:40:29.603711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.811 [2024-11-17 09:40:29.619504] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.811 [2024-11-17 09:40:29.619542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.811 [2024-11-17 09:40:29.635305] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.811 [2024-11-17 09:40:29.635344] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.811 [2024-11-17 09:40:29.652021] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.811 [2024-11-17 09:40:29.652058] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.811 [2024-11-17 09:40:29.668549] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.811 [2024-11-17 09:40:29.668586] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.811 [2024-11-17 09:40:29.684822] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.811 [2024-11-17 09:40:29.684857] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.811 [2024-11-17 09:40:29.701042] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.811 [2024-11-17 09:40:29.701077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.811 [2024-11-17 09:40:29.717792] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.811 
[2024-11-17 09:40:29.717829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.811 [2024-11-17 09:40:29.733820] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.811 [2024-11-17 09:40:29.733855] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.811 [2024-11-17 09:40:29.750523] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.811 [2024-11-17 09:40:29.750562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.811 [2024-11-17 09:40:29.766385] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.811 [2024-11-17 09:40:29.766423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.811 [2024-11-17 09:40:29.782224] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.811 [2024-11-17 09:40:29.782261] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.811 [2024-11-17 09:40:29.798078] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.811 [2024-11-17 09:40:29.798114] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:24.811 [2024-11-17 09:40:29.813093] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:24.811 [2024-11-17 09:40:29.813127] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:25.070 [2024-11-17 09:40:29.829406] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:25.070 [2024-11-17 09:40:29.829460] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:25.070 [2024-11-17 09:40:29.845659] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:25.070 [2024-11-17 09:40:29.845695] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:25.070 [2024-11-17 09:40:29.860565] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:25.070 [2024-11-17 09:40:29.860602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:25.070 [2024-11-17 09:40:29.877610] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:25.070 [2024-11-17 09:40:29.877649] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:25.070 [2024-11-17 09:40:29.894194] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:25.070 [2024-11-17 09:40:29.894234] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:25.070 [2024-11-17 09:40:29.909581] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:25.070 [2024-11-17 09:40:29.909619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:25.070 [2024-11-17 09:40:29.925588] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:25.070 [2024-11-17 09:40:29.925626] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:25.070 [2024-11-17 09:40:29.941950] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:25.070 [2024-11-17 09:40:29.941986] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:25.070 [2024-11-17 09:40:29.957323] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:25.070 [2024-11-17 09:40:29.957381] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:25.070 [2024-11-17 09:40:29.973442] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:25.070 [2024-11-17 09:40:29.973479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:25.070 [2024-11-17 09:40:29.988546] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:25.070 [2024-11-17 09:40:29.988584] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:25.070 [2024-11-17 09:40:30.004052] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:25.070 [2024-11-17 09:40:30.004094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:25.070 [2024-11-17 09:40:30.020089] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:25.070 [2024-11-17 09:40:30.020134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:25.070 [2024-11-17 09:40:30.036544] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:25.070 [2024-11-17 09:40:30.036586] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:25.070 [2024-11-17 09:40:30.052437] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:25.070 [2024-11-17 09:40:30.052490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:25.070 [2024-11-17 09:40:30.069213] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:25.070 [2024-11-17 09:40:30.069264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:25.328 [2024-11-17 09:40:30.085074] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:25.328 [2024-11-17 09:40:30.085112] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:25.328 [2024-11-17 09:40:30.101577] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:25.328 [2024-11-17 09:40:30.101619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:25.328 [2024-11-17 09:40:30.117655] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:25.328 [2024-11-17 09:40:30.117708] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:25.328 [2024-11-17 09:40:30.133621] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:25.328 [2024-11-17 09:40:30.133660] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:25.328 [2024-11-17 09:40:30.149315] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:25.328 [2024-11-17 09:40:30.149378] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:25.328 [2024-11-17 09:40:30.164891] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:25.328 [2024-11-17 09:40:30.164926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:25.328 [2024-11-17 09:40:30.180235] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:25.328 [2024-11-17 09:40:30.180272] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:25.329 [2024-11-17 09:40:30.195959] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:25.329 [2024-11-17 09:40:30.195996] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:25.329 [2024-11-17 09:40:30.211593] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:25.329 [2024-11-17 09:40:30.211632] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:25.329 [2024-11-17 09:40:30.226665] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:25.329 [2024-11-17 09:40:30.226702] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:25.329 [2024-11-17 09:40:30.242467] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:25.329 [2024-11-17 09:40:30.242505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:25.329 [2024-11-17 09:40:30.258150] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:25.329 [2024-11-17 09:40:30.258187] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:25.329 [2024-11-17 09:40:30.274318] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:25.329 [2024-11-17 09:40:30.274363] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:25.329 [2024-11-17 09:40:30.291497] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:25.329 [2024-11-17 09:40:30.291532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:25.329 [2024-11-17 09:40:30.308760] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:25.329 [2024-11-17 09:40:30.308794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:25.329 [2024-11-17 09:40:30.325961] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:25.329 [2024-11-17 09:40:30.326001] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:25.587 [2024-11-17 09:40:30.343302] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:25.587 [2024-11-17 09:40:30.343343] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:25.587 [2024-11-17 09:40:30.359930] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:25.587 [2024-11-17 09:40:30.359970] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:25.587 [2024-11-17 09:40:30.377025] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:25.587 [2024-11-17 09:40:30.377064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:25.587 [2024-11-17 09:40:30.394184] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:25.587 [2024-11-17 09:40:30.394223] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:25.587 [2024-11-17 09:40:30.411409] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:25.587 [2024-11-17 09:40:30.411444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:25.587 [2024-11-17 09:40:30.427460] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:25.587 [2024-11-17 09:40:30.427495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:25.587 [2024-11-17 09:40:30.444247] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:25.587 [2024-11-17 09:40:30.444286] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:25.587 [2024-11-17 09:40:30.461327] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:25.587 [2024-11-17 09:40:30.461379] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:25.587 [2024-11-17 09:40:30.478545] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:25.587 [2024-11-17 09:40:30.478578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:25.587 [2024-11-17 09:40:30.495247] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:25.587 [2024-11-17 09:40:30.495290] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:25.587 7838.00 IOPS, 61.23 MiB/s [2024-11-17T08:40:30.600Z] [2024-11-17 09:40:30.511673] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:25.587 [2024-11-17 09:40:30.511713] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:25.587 [2024-11-17 09:40:30.528969] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:25.587 [2024-11-17 09:40:30.529010] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:25.587 [2024-11-17 09:40:30.544974] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:25.587 [2024-11-17 09:40:30.545013] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:25.587 [2024-11-17 09:40:30.563617] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:25.587 [2024-11-17 09:40:30.563671] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:25.587 [2024-11-17 09:40:30.581210] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:25.587 [2024-11-17 09:40:30.581259] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:25.846 [2024-11-17 09:40:30.598327] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:25.846 [2024-11-17 09:40:30.598380] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:25.846 [2024-11-17 09:40:30.614707] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:25.846 [2024-11-17 09:40:30.614762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:25.846 [2024-11-17 09:40:30.631751] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:25.846 [2024-11-17 09:40:30.631790] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:25.846 [2024-11-17 09:40:30.649579] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:25.846 [2024-11-17 09:40:30.649613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:25.846 [2024-11-17 09:40:30.667352] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:40:25.846 [2024-11-17 09:40:30.667427] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:25.846 [2024-11-17 09:40:30.684520] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:25.846 [2024-11-17 09:40:30.684556] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:25.846 [2024-11-17 09:40:30.700889] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:25.846 [2024-11-17 09:40:30.700931] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:25.846 [2024-11-17 09:40:30.717288] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:25.846 [2024-11-17 09:40:30.717327] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:25.846 [2024-11-17 09:40:30.735045] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:25.846 [2024-11-17 09:40:30.735085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:25.846 [2024-11-17 09:40:30.752178] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:25.846 [2024-11-17 09:40:30.752219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:25.846 [2024-11-17 09:40:30.769994] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:25.846 [2024-11-17 09:40:30.770036] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:25.846 [2024-11-17 09:40:30.786151] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:25.846 [2024-11-17 09:40:30.786191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:25.846 [2024-11-17 09:40:30.803870] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:25.846 [2024-11-17 09:40:30.803911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:25.846 [2024-11-17 09:40:30.821379] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:25.846 [2024-11-17 09:40:30.821436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:25.846 [2024-11-17 09:40:30.836934] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:25.846 [2024-11-17 09:40:30.836973] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:25.846 [2024-11-17 09:40:30.855538] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:25.846 [2024-11-17 09:40:30.855573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:26.104 [2024-11-17 09:40:30.872040] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:26.104 [2024-11-17 09:40:30.872079] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:26.104 [2024-11-17 09:40:30.888328] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:26.104 [2024-11-17 09:40:30.888376] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:26.104 [2024-11-17 09:40:30.904813] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:26.104 [2024-11-17 09:40:30.904861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:26.104 [2024-11-17 09:40:30.920141] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:26.104 [2024-11-17 09:40:30.920181] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:26.104 [2024-11-17 09:40:30.937760] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:26.104 [2024-11-17 09:40:30.937799] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:26.104 [2024-11-17 09:40:30.954962] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:26.104 [2024-11-17 09:40:30.955002] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:26.104 [2024-11-17 09:40:30.971512] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:26.104 [2024-11-17 09:40:30.971546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:26.104 [2024-11-17 09:40:30.988042] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:26.104 [2024-11-17 09:40:30.988082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:26.104 [2024-11-17 09:40:31.004009] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:26.104 [2024-11-17 09:40:31.004048] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:26.104 [2024-11-17 09:40:31.020777] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:26.104 [2024-11-17 09:40:31.020817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:26.104 [2024-11-17 09:40:31.038281] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:26.104 [2024-11-17 09:40:31.038321] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:26.104 [2024-11-17 09:40:31.055129] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:26.104 [2024-11-17 09:40:31.055169] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:26.104 [2024-11-17 09:40:31.072277] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:26.104 [2024-11-17 09:40:31.072317] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:26.104 [2024-11-17 09:40:31.088451] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:26.104 [2024-11-17 09:40:31.088486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:26.104 [2024-11-17 09:40:31.105079] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:26.104 [2024-11-17 09:40:31.105119] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:26.363 [2024-11-17 09:40:31.121537] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:26.363 [2024-11-17 09:40:31.121574] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:26.363 [2024-11-17 09:40:31.137325] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:26.363 [2024-11-17 09:40:31.137365] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:26.363 [2024-11-17 09:40:31.154125] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:26.363 [2024-11-17 09:40:31.154165] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:26.363 [2024-11-17 09:40:31.170180] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:26.363 [2024-11-17 09:40:31.170219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:26.363 [2024-11-17 09:40:31.186002] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:26.363 [2024-11-17 09:40:31.186042] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:26.363 [2024-11-17 09:40:31.200866] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:26.363 [2024-11-17 09:40:31.200906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:26.363 [2024-11-17 09:40:31.219558] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:26.363 [2024-11-17 09:40:31.219599] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:26.363 [2024-11-17 09:40:31.237429] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:26.363 [2024-11-17 09:40:31.237464] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:26.363 [2024-11-17 09:40:31.254766] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:26.363 [2024-11-17 09:40:31.254805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:26.363 [2024-11-17 09:40:31.271258] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:26.363 [2024-11-17 09:40:31.271300] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:26.363 [2024-11-17 09:40:31.288054] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:26.363 [2024-11-17 09:40:31.288093] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:26.363 [2024-11-17 09:40:31.304772] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:26.363 [2024-11-17 09:40:31.304812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:26.363 [2024-11-17 09:40:31.320853] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:26.363 [2024-11-17 09:40:31.320892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:26.363 [2024-11-17 09:40:31.336963] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:26.363 [2024-11-17 09:40:31.337003] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:26.363 [2024-11-17 09:40:31.353023] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:26.363 [2024-11-17 09:40:31.353063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:26.363 [2024-11-17 09:40:31.370424] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:26.363 [2024-11-17 09:40:31.370458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:26.621 [2024-11-17 09:40:31.386689] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:26.621 [2024-11-17 09:40:31.386729] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:26.621 [2024-11-17 09:40:31.404303] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:26.621 [2024-11-17 09:40:31.404343] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:26.621 [2024-11-17 09:40:31.421012] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:26.621 [2024-11-17 09:40:31.421052] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:26.621 [2024-11-17 09:40:31.437425] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:26.621 [2024-11-17 09:40:31.437460] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:26.621 [2024-11-17 09:40:31.453489] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:26.621 [2024-11-17 09:40:31.453522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:26.621 [2024-11-17 09:40:31.469866] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:26.621 [2024-11-17 09:40:31.469905] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:26.621 [2024-11-17 09:40:31.487331] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:26.621 [2024-11-17 09:40:31.487383] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:26.621 [2024-11-17 09:40:31.504183] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:26.621 [2024-11-17 09:40:31.504222] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:26.621 7693.50 IOPS, 60.11 MiB/s [2024-11-17T08:40:31.634Z] [2024-11-17 09:40:31.520064] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:26.621 [2024-11-17 09:40:31.520103] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:26.621 [2024-11-17 09:40:31.537212] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:26.621 [2024-11-17 09:40:31.537252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:26.621 [2024-11-17 09:40:31.554886] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:26.621 [2024-11-17 09:40:31.554931] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:26.621 [2024-11-17 09:40:31.573295] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:26.621 [2024-11-17 09:40:31.573336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:26.621 [2024-11-17 09:40:31.590643] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:26.621 [2024-11-17 09:40:31.590699] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:26.621 [2024-11-17 09:40:31.607059] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:26.621 [2024-11-17 09:40:31.607096] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:26.621 [2024-11-17 09:40:31.623236] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:26.621 [2024-11-17 09:40:31.623277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:26.880 [2024-11-17 09:40:31.640105] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:40:26.880 [2024-11-17 09:40:31.640146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:26.880 [2024-11-17 09:40:31.657203] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:26.880 [2024-11-17 09:40:31.657243] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:26.880 [2024-11-17 09:40:31.674080] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:26.880 [2024-11-17 09:40:31.674120] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:26.880 [2024-11-17 09:40:31.690562] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:26.880 [2024-11-17 09:40:31.690596] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:26.880 [2024-11-17 09:40:31.707401] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:26.880 [2024-11-17 09:40:31.707454] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:26.880 [2024-11-17 09:40:31.723932] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:26.880 [2024-11-17 09:40:31.723972] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:26.880 [2024-11-17 09:40:31.740424] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:26.880 [2024-11-17 09:40:31.740456] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:26.880 [2024-11-17 09:40:31.756439] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:26.880 [2024-11-17 09:40:31.756473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:26.880 [2024-11-17 09:40:31.773471] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:26.880 [2024-11-17 09:40:31.773505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:26.880 [2024-11-17 09:40:31.790260] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:26.880 [2024-11-17 09:40:31.790300] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:26.880 [2024-11-17 09:40:31.807850] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:26.880 [2024-11-17 09:40:31.807890] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:26.880 [2024-11-17 09:40:31.825307] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:26.880 [2024-11-17 09:40:31.825348] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:26.880 [2024-11-17 09:40:31.842378] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:26.880 [2024-11-17 09:40:31.842434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:26.880 [2024-11-17 09:40:31.859156] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:26.880 [2024-11-17 09:40:31.859196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:26.880 [2024-11-17 09:40:31.876506] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:26.880 [2024-11-17 09:40:31.876543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.139 [2024-11-17 09:40:31.893941] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.139 [2024-11-17 09:40:31.893996] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.139 [2024-11-17 09:40:31.911562] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.139 [2024-11-17 09:40:31.911599] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.139 [2024-11-17 09:40:31.929843] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.139 [2024-11-17 09:40:31.929885] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.139 [2024-11-17 09:40:31.946303] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.139 [2024-11-17 09:40:31.946344] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.139 [2024-11-17 09:40:31.963993] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.139 [2024-11-17 09:40:31.964032] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.139 [2024-11-17 09:40:31.981321] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.139 [2024-11-17 09:40:31.981360] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.139 [2024-11-17 09:40:31.998378] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.139 [2024-11-17 09:40:31.998430] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.139 [2024-11-17 09:40:32.013880] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.139 [2024-11-17 09:40:32.013912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.139 [2024-11-17 09:40:32.030428] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.139 [2024-11-17 09:40:32.030463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.139 [2024-11-17 09:40:32.047338] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.139 [2024-11-17 09:40:32.047388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.139 [2024-11-17 09:40:32.063504] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.139 [2024-11-17 09:40:32.063539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.139 [2024-11-17 09:40:32.080242] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.139 [2024-11-17 09:40:32.080282] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.139 [2024-11-17 09:40:32.096424] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.139 [2024-11-17 09:40:32.096458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.139 [2024-11-17 09:40:32.111774] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.139 [2024-11-17 09:40:32.111814] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.139 [2024-11-17 09:40:32.129559] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.139 [2024-11-17 09:40:32.129592] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.139 [2024-11-17 09:40:32.146270] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.139 [2024-11-17 09:40:32.146309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.398 [2024-11-17 09:40:32.163120] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.398 [2024-11-17 09:40:32.163160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.398 [2024-11-17 09:40:32.180138] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.398 [2024-11-17 09:40:32.180177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.398 [2024-11-17 09:40:32.197735] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.398 [2024-11-17 09:40:32.197775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.398 [2024-11-17 09:40:32.215261] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.398 [2024-11-17 09:40:32.215305] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.398 [2024-11-17 09:40:32.232377] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.398 [2024-11-17 09:40:32.232429] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.398 [2024-11-17 09:40:32.248755] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.398 [2024-11-17 09:40:32.248795] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.398 [2024-11-17 09:40:32.265051] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.398 [2024-11-17 09:40:32.265090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.398 [2024-11-17 09:40:32.281560] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.398 [2024-11-17 09:40:32.281595] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.398 [2024-11-17 09:40:32.298578] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.398 [2024-11-17 09:40:32.298611] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.398 [2024-11-17 09:40:32.314712] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.398 [2024-11-17 09:40:32.314752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.398 [2024-11-17 09:40:32.331861] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.398 [2024-11-17 09:40:32.331900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.398 [2024-11-17 09:40:32.349047] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.398 [2024-11-17 09:40:32.349087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.398 [2024-11-17 09:40:32.365882] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.398 [2024-11-17 09:40:32.365921] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.398 [2024-11-17 09:40:32.382527] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.398 [2024-11-17 09:40:32.382560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.398 [2024-11-17 09:40:32.398551] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.398 [2024-11-17 09:40:32.398587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.656 [2024-11-17 09:40:32.414858] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.656 [2024-11-17 09:40:32.414898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.656 [2024-11-17 09:40:32.432746] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.656 [2024-11-17 09:40:32.432785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.656 [2024-11-17 09:40:32.448763] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.656 [2024-11-17 09:40:32.448803] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.656 [2024-11-17 09:40:32.464986] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.656 [2024-11-17 09:40:32.465025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.656 [2024-11-17 09:40:32.482940] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.656 [2024-11-17 09:40:32.482988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.656 [2024-11-17 09:40:32.500588] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.656 [2024-11-17 09:40:32.500621] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.656 7649.67 IOPS, 59.76 MiB/s [2024-11-17T08:40:32.669Z] [2024-11-17 09:40:32.517497] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.656 [2024-11-17 09:40:32.517530] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.656 [2024-11-17 09:40:32.533868] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.656 [2024-11-17 09:40:32.533908] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.656 [2024-11-17 09:40:32.550362] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.656 [2024-11-17 09:40:32.550423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.656 [2024-11-17 09:40:32.566386] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.656 [2024-11-17 09:40:32.566439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.656 [2024-11-17 09:40:32.584343] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.656 [2024-11-17 09:40:32.584394] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.656 [2024-11-17 09:40:32.601443] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.656 [2024-11-17 09:40:32.601476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.656 [2024-11-17 09:40:32.618315] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
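The throughput markers interleaved with these errors ("7693.50 IOPS, 60.11 MiB/s", "7649.67 IOPS, 59.76 MiB/s", and so on) are bdevperf's periodic progress updates for the Nvme1n1 job, and the two figures are consistent with the 8192-byte IO size reported in the job summary at the end of the run: MiB/s is approximately IOPS x 8192 / 2^20. A minimal standalone sketch of that arithmetic, using only the sample values visible in this log (it is not part of the test scripts):

```python
# Quick sanity check (not part of the SPDK test scripts): the MiB/s figure in
# each bdevperf progress line should equal IOPS * IO size / 2**20 for the
# 8192-byte I/Os used by this job.
IO_SIZE_BYTES = 8192  # "IO size: 8192" from the job summary later in the log

samples = [  # (IOPS, MiB/s) pairs copied from the progress lines in this log
    (7693.50, 60.11),
    (7649.67, 59.76),
    (7616.50, 59.50),
    (7608.40, 59.44),
]

for iops, reported in samples:
    computed = iops * IO_SIZE_BYTES / (1 << 20)
    print(f"{iops:8.2f} IOPS -> {computed:5.2f} MiB/s (log reports {reported:.2f})")
```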
00:40:27.656 [2024-11-17 09:40:32.618355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.656 [2024-11-17 09:40:32.634673] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.656 [2024-11-17 09:40:32.634713] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.656 [2024-11-17 09:40:32.651275] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.656 [2024-11-17 09:40:32.651318] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.914 [2024-11-17 09:40:32.668442] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.914 [2024-11-17 09:40:32.668476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.914 [2024-11-17 09:40:32.685766] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.914 [2024-11-17 09:40:32.685806] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.914 [2024-11-17 09:40:32.703425] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.914 [2024-11-17 09:40:32.703476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.915 [2024-11-17 09:40:32.721069] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.915 [2024-11-17 09:40:32.721109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.915 [2024-11-17 09:40:32.737663] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.915 [2024-11-17 09:40:32.737696] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.915 [2024-11-17 09:40:32.754524] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.915 [2024-11-17 09:40:32.754557] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.915 [2024-11-17 09:40:32.771361] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.915 [2024-11-17 09:40:32.771409] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.915 [2024-11-17 09:40:32.787570] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.915 [2024-11-17 09:40:32.787604] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.915 [2024-11-17 09:40:32.804995] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.915 [2024-11-17 09:40:32.805043] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.915 [2024-11-17 09:40:32.822362] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.915 [2024-11-17 09:40:32.822430] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.915 [2024-11-17 09:40:32.839228] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.915 [2024-11-17 09:40:32.839268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.915 [2024-11-17 09:40:32.856314] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.915 [2024-11-17 09:40:32.856354] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.915 [2024-11-17 09:40:32.872882] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.915 [2024-11-17 09:40:32.872921] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.915 [2024-11-17 09:40:32.890391] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.915 [2024-11-17 09:40:32.890443] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.915 [2024-11-17 09:40:32.906822] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.915 [2024-11-17 09:40:32.906862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.915 [2024-11-17 09:40:32.924452] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.915 [2024-11-17 09:40:32.924486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.173 [2024-11-17 09:40:32.940576] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.173 [2024-11-17 09:40:32.940610] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.173 [2024-11-17 09:40:32.956526] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.173 [2024-11-17 09:40:32.956561] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.173 [2024-11-17 09:40:32.973442] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.173 [2024-11-17 09:40:32.973476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.173 [2024-11-17 09:40:32.990154] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.173 [2024-11-17 09:40:32.990194] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.173 [2024-11-17 09:40:33.006703] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.173 [2024-11-17 09:40:33.006756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.174 [2024-11-17 09:40:33.023219] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.174 [2024-11-17 09:40:33.023259] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.174 [2024-11-17 09:40:33.040422] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.174 [2024-11-17 09:40:33.040458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.174 [2024-11-17 09:40:33.054900] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.174 [2024-11-17 09:40:33.054941] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.174 [2024-11-17 09:40:33.073275] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.174 [2024-11-17 09:40:33.073315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.174 [2024-11-17 09:40:33.090173] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.174 [2024-11-17 09:40:33.090214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.174 [2024-11-17 09:40:33.106945] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.174 [2024-11-17 09:40:33.106984] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.174 [2024-11-17 09:40:33.124351] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.174 [2024-11-17 09:40:33.124423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.174 [2024-11-17 09:40:33.140292] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.174 [2024-11-17 09:40:33.140332] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.174 [2024-11-17 09:40:33.157837] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.174 [2024-11-17 09:40:33.157877] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.174 [2024-11-17 09:40:33.173597] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.174 [2024-11-17 09:40:33.173631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.432 [2024-11-17 09:40:33.188840] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.432 [2024-11-17 09:40:33.188879] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.432 [2024-11-17 09:40:33.206863] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.432 [2024-11-17 09:40:33.206903] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.432 [2024-11-17 09:40:33.224125] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.432 [2024-11-17 09:40:33.224166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.432 [2024-11-17 09:40:33.241209] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.432 [2024-11-17 09:40:33.241249] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.432 [2024-11-17 09:40:33.257950] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.432 [2024-11-17 09:40:33.257989] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.432 [2024-11-17 09:40:33.275299] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.432 [2024-11-17 09:40:33.275338] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.432 [2024-11-17 09:40:33.293170] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.432 [2024-11-17 09:40:33.293211] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.432 [2024-11-17 09:40:33.311221] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.432 [2024-11-17 09:40:33.311261] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.432 [2024-11-17 09:40:33.329246] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.432 [2024-11-17 09:40:33.329286] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.432 [2024-11-17 09:40:33.346296] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.432 [2024-11-17 09:40:33.346336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.432 [2024-11-17 09:40:33.362940] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.432 [2024-11-17 09:40:33.362980] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.432 [2024-11-17 09:40:33.379798] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.432 [2024-11-17 09:40:33.379839] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.432 [2024-11-17 09:40:33.396913] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.432 [2024-11-17 09:40:33.396953] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.432 [2024-11-17 09:40:33.414280] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.432 [2024-11-17 09:40:33.414321] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.432 [2024-11-17 09:40:33.430624] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.432 [2024-11-17 09:40:33.430657] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.691 [2024-11-17 09:40:33.447520] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.691 [2024-11-17 09:40:33.447565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.691 [2024-11-17 09:40:33.465057] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.691 [2024-11-17 09:40:33.465097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.691 [2024-11-17 09:40:33.482015] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.691 [2024-11-17 09:40:33.482056] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.691 [2024-11-17 09:40:33.498685] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.691 [2024-11-17 09:40:33.498735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.691 [2024-11-17 09:40:33.515910] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.691 [2024-11-17 09:40:33.515950] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.691 7616.50 IOPS, 59.50 MiB/s [2024-11-17T08:40:33.704Z] [2024-11-17 09:40:33.532837] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.691 [2024-11-17 09:40:33.532877] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.691 [2024-11-17 09:40:33.550204] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.691 [2024-11-17 09:40:33.550245] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.691 [2024-11-17 09:40:33.567794] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.691 [2024-11-17 09:40:33.567835] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.691 [2024-11-17 09:40:33.584796] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.691 [2024-11-17 09:40:33.584836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.691 [2024-11-17 09:40:33.601426] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
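Every failed iteration in this stretch logs the same two messages back to back: subsystem.c rejects the add because NSID 1 is already attached to the subsystem, and nvmf_rpc.c then fails the RPC with "Unable to add namespace". When skimming a run this repetitive it can help to reduce the stream to counts; the sketch below is a hypothetical post-processing helper (the script and file names are illustrative, not part of the SPDK tree) that tallies those pairs and collects the bdevperf progress samples from a saved console log:

```python
#!/usr/bin/env python3
"""Hypothetical helper: condense the repeated add-namespace failures and the
bdevperf progress samples from a saved nvmf autotest console log."""
import re
import sys

# Patterns copied from the messages that repeat throughout this log.
NSID_IN_USE = re.compile(
    r"spdk_nvmf_subsystem_add_ns_ext: \*ERROR\*: Requested NSID (\d+) already in use")
ADD_NS_FAILED = re.compile(
    r"nvmf_rpc_ns_paused: \*ERROR\*: Unable to add namespace")
IOPS_SAMPLE = re.compile(r"(\d+\.\d+) IOPS, (\d+\.\d+) MiB/s")

def summarize(path: str) -> None:
    text = open(path, encoding="utf-8", errors="replace").read()
    nsids = NSID_IN_USE.findall(text)
    failures = len(ADD_NS_FAILED.findall(text))
    samples = [(float(i), float(m)) for i, m in IOPS_SAMPLE.findall(text)]

    print(f"'Requested NSID ... already in use' : {len(nsids)}")
    print(f"'Unable to add namespace'           : {failures}")
    if samples:
        iops = [s[0] for s in samples]
        print(f"bdevperf progress samples           : {len(samples)} "
              f"(IOPS {min(iops):.2f}..{max(iops):.2f})")

if __name__ == "__main__":
    # Usage: python3 summarize_nvmf_log.py console.log  (file name is illustrative)
    summarize(sys.argv[1] if len(sys.argv) > 1 else "console.log")
```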
00:40:28.691 [2024-11-17 09:40:33.601462] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.691 [2024-11-17 09:40:33.616694] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.691 [2024-11-17 09:40:33.616741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.691 [2024-11-17 09:40:33.632078] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.691 [2024-11-17 09:40:33.632118] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.691 [2024-11-17 09:40:33.649276] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.691 [2024-11-17 09:40:33.649316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.691 [2024-11-17 09:40:33.666993] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.691 [2024-11-17 09:40:33.667033] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.691 [2024-11-17 09:40:33.683502] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.691 [2024-11-17 09:40:33.683537] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.691 [2024-11-17 09:40:33.699899] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.691 [2024-11-17 09:40:33.699934] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.950 [2024-11-17 09:40:33.716879] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.950 [2024-11-17 09:40:33.716921] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.950 [2024-11-17 09:40:33.732509] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.950 [2024-11-17 09:40:33.732543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.950 [2024-11-17 09:40:33.749646] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.950 [2024-11-17 09:40:33.749704] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.950 [2024-11-17 09:40:33.766111] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.950 [2024-11-17 09:40:33.766151] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.950 [2024-11-17 09:40:33.783690] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.950 [2024-11-17 09:40:33.783730] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.950 [2024-11-17 09:40:33.801527] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.950 [2024-11-17 09:40:33.801560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.950 [2024-11-17 09:40:33.818688] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.950 [2024-11-17 09:40:33.818721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.950 [2024-11-17 09:40:33.834530] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.950 [2024-11-17 09:40:33.834564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.950 [2024-11-17 09:40:33.850044] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.950 [2024-11-17 09:40:33.850084] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.950 [2024-11-17 09:40:33.868293] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.950 [2024-11-17 09:40:33.868334] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.950 [2024-11-17 09:40:33.884895] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.950 [2024-11-17 09:40:33.884936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.950 [2024-11-17 09:40:33.900965] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.950 [2024-11-17 09:40:33.901004] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.950 [2024-11-17 09:40:33.916121] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.950 [2024-11-17 09:40:33.916160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.950 [2024-11-17 09:40:33.934679] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.950 [2024-11-17 09:40:33.934726] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.950 [2024-11-17 09:40:33.952315] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.950 [2024-11-17 09:40:33.952355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.208 [2024-11-17 09:40:33.968823] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.208 [2024-11-17 09:40:33.968863] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.208 [2024-11-17 09:40:33.985257] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.208 [2024-11-17 09:40:33.985296] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.208 [2024-11-17 09:40:34.001326] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.208 [2024-11-17 09:40:34.001376] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.208 [2024-11-17 09:40:34.018223] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.208 [2024-11-17 09:40:34.018263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.208 [2024-11-17 09:40:34.035082] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.208 [2024-11-17 09:40:34.035122] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.208 [2024-11-17 09:40:34.051336] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.208 [2024-11-17 09:40:34.051385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.208 [2024-11-17 09:40:34.067552] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.208 [2024-11-17 09:40:34.067587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.208 [2024-11-17 09:40:34.084639] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.208 [2024-11-17 09:40:34.084698] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.208 [2024-11-17 09:40:34.101478] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.208 [2024-11-17 09:40:34.101512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.208 [2024-11-17 09:40:34.119481] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.208 [2024-11-17 09:40:34.119518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.208 [2024-11-17 09:40:34.136916] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.208 [2024-11-17 09:40:34.136957] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.208 [2024-11-17 09:40:34.154419] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.208 [2024-11-17 09:40:34.154455] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.208 [2024-11-17 09:40:34.172014] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.208 [2024-11-17 09:40:34.172054] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.208 [2024-11-17 09:40:34.190294] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.208 [2024-11-17 09:40:34.190334] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.208 [2024-11-17 09:40:34.207589] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.208 [2024-11-17 09:40:34.207626] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.467 [2024-11-17 09:40:34.225206] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.467 [2024-11-17 09:40:34.225246] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.467 [2024-11-17 09:40:34.243226] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.467 [2024-11-17 09:40:34.243267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.467 [2024-11-17 09:40:34.260335] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.467 [2024-11-17 09:40:34.260393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.467 [2024-11-17 09:40:34.278024] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.467 [2024-11-17 09:40:34.278064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.467 [2024-11-17 09:40:34.295114] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.467 [2024-11-17 09:40:34.295153] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.467 [2024-11-17 09:40:34.311709] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.467 [2024-11-17 09:40:34.311763] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.467 [2024-11-17 09:40:34.328468] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.467 [2024-11-17 09:40:34.328501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.467 [2024-11-17 09:40:34.344816] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.467 [2024-11-17 09:40:34.344855] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.467 [2024-11-17 09:40:34.361309] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.467 [2024-11-17 09:40:34.361349] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.467 [2024-11-17 09:40:34.377984] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.467 [2024-11-17 09:40:34.378024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.467 [2024-11-17 09:40:34.394832] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.467 [2024-11-17 09:40:34.394881] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.467 [2024-11-17 09:40:34.411141] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.467 [2024-11-17 09:40:34.411180] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.467 [2024-11-17 09:40:34.427510] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.467 [2024-11-17 09:40:34.427543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.467 [2024-11-17 09:40:34.444352] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.467 [2024-11-17 09:40:34.444401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.467 [2024-11-17 09:40:34.461484] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.467 [2024-11-17 09:40:34.461518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.467 [2024-11-17 09:40:34.477446] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.467 [2024-11-17 09:40:34.477480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.725 [2024-11-17 09:40:34.493463] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.725 [2024-11-17 09:40:34.493498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.725 [2024-11-17 09:40:34.508824] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.725 [2024-11-17 09:40:34.508863] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:29.725 7608.40 IOPS, 59.44 MiB/s [2024-11-17T08:40:34.738Z]
[2024-11-17 09:40:34.524672] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.726 [2024-11-17 09:40:34.524726] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.726 [2024-11-17 09:40:34.533895] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.726 [2024-11-17 09:40:34.533930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:29.726
00:40:29.726 Latency(us)
00:40:29.726 [2024-11-17T08:40:34.739Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:40:29.726 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:40:29.726 Nvme1n1 : 5.02 7608.17 59.44 0.00 0.00 16792.10 4004.98 26991.12
00:40:29.726 [2024-11-17T08:40:34.739Z] ===================================================================================================================
00:40:29.726 [2024-11-17T08:40:34.739Z] Total : 7608.17 59.44 0.00 0.00 16792.10 4004.98 26991.12
00:40:29.726 [2024-11-17 09:40:34.539473] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.726 [2024-11-17 09:40:34.539505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.726 [2024-11-17 09:40:34.547459] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.726 [2024-11-17 09:40:34.547490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.726 [2024-11-17 09:40:34.555477] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.726 [2024-11-17 09:40:34.555513] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.726 [2024-11-17 09:40:34.563458] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.726 [2024-11-17 09:40:34.563486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.726 [2024-11-17 09:40:34.571474] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.726 [2024-11-17 09:40:34.571503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.726 [2024-11-17 09:40:34.579482] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.726 [2024-11-17 09:40:34.579511] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.726 [2024-11-17 09:40:34.587558] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.726 [2024-11-17 09:40:34.587630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.726 [2024-11-17 09:40:34.595604] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.726 [2024-11-17 09:40:34.595686] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.726 [2024-11-17 09:40:34.603478] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.726 [2024-11-17 09:40:34.603506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.726 [2024-11-17 09:40:34.611451] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.726 [2024-11-17 09:40:34.611479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.726 [2024-11-17 09:40:34.619490] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.726 [2024-11-17 09:40:34.619519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.726 [2024-11-17 09:40:34.627468] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.726 [2024-11-17 09:40:34.627497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.726 [2024-11-17 09:40:34.635471] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.726 [2024-11-17 09:40:34.635500] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.726 [2024-11-17 09:40:34.643473] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.726 [2024-11-17 09:40:34.643501]
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.726
[2024-11-17 09:40:34.651456 .. 2024-11-17 09:40:35.435502] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use / nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace [this error pair repeats at roughly 8 ms intervals across the whole interval shown; the repetitions are identical apart from their timestamps] 00:40:30.505
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3182231) - No such process 00:40:30.505 09:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3182231 00:40:30.505 09:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:30.505 09:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:30.505 09:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:30.505 09:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:30.505 09:40:35
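Each condensed error pair above corresponds to one nvmf_subsystem_add_ns RPC that asked for an NSID already attached to the subsystem; the RPC layer (nvmf_rpc_ns_paused) rejects it and the target keeps running. A minimal way to reproduce a single such rejection by hand, assuming a running nvmf target with its RPC socket at the default /var/tmp/spdk.sock, scripts/rpc.py on PATH, and an existing bdev named Malloc0 (all of these are illustrative assumptions, not taken from this run):

  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1   # first attach of NSID 1 succeeds
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1   # second attach is rejected: "Requested NSID 1 already in use"
  rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1           # detach NSID 1 again, as zcopy.sh@52 does above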
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:40:30.505 09:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:30.505 09:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:30.505 delay0 00:40:30.505 09:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:30.505 09:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:40:30.505 09:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:30.505 09:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:30.505 09:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:30.505 09:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:40:30.763 [2024-11-17 09:40:35.672005] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:40:38.872 Initializing NVMe Controllers 00:40:38.872 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:40:38.872 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:40:38.872 Initialization complete. Launching workers. 
00:40:38.872 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 224, failed: 18091 00:40:38.872 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 18163, failed to submit 152 00:40:38.872 success 18105, unsuccessful 58, failed 0 00:40:38.872 09:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:40:38.872 09:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:40:38.872 09:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:38.872 09:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:40:38.872 09:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:38.872 09:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:40:38.872 09:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:38.872 09:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:38.872 rmmod nvme_tcp 00:40:38.872 rmmod nvme_fabrics 00:40:38.872 rmmod nvme_keyring 00:40:38.872 09:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:38.872 09:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:40:38.872 09:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:40:38.872 09:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 3180652 ']' 00:40:38.872 09:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 3180652 00:40:38.872 09:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 3180652 ']' 00:40:38.872 09:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 3180652 00:40:38.872 09:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:40:38.872 09:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:38.872 09:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3180652 00:40:38.872 09:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:40:38.872 09:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:40:38.872 09:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3180652' 00:40:38.872 killing process with pid 3180652 00:40:38.872 09:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 3180652 00:40:38.872 09:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 3180652 00:40:39.131 09:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:39.131 09:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:39.131 09:40:44 
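The abort summary above (NSID 1 I/O completed: 224, failed: 18091; aborts: success 18105, unsuccessful 58) comes from build/examples/abort running against delay0, a delay bdev stacked on malloc0 with 1000000-microsecond average and tail latencies, so nearly every submitted I/O is still outstanding when its abort arrives; the large "failed" I/O count next to the large abort "success" count is consistent with that. Re-expressed as standalone commands, a sketch only: rpc.py on PATH with the default socket and the 64 MB malloc size are assumptions, while the delay and abort parameters are the ones visible in the log.

  rpc.py bdev_malloc_create -b malloc0 64 512                    # backing bdev (size assumed; its creation is not in this excerpt)
  rpc.py bdev_delay_create -b malloc0 -d delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000                # avg/p99 read and write latency, in microseconds
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  ./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'   # run from the SPDK repo root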
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:39.131 09:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:40:39.131 09:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:40:39.131 09:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:39.131 09:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:40:39.131 09:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:39.131 09:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:39.131 09:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:39.131 09:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:39.131 09:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:41.665 09:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:41.665 00:40:41.665 real 0m32.747s 00:40:41.665 user 0m46.685s 00:40:41.665 sys 0m10.137s 00:40:41.665 09:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:41.665 09:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:41.665 ************************************ 00:40:41.665 END TEST nvmf_zcopy 00:40:41.665 ************************************ 00:40:41.665 09:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:40:41.665 09:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:41.665 09:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:41.665 09:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:41.665 ************************************ 00:40:41.665 START TEST nvmf_nmic 00:40:41.665 ************************************ 00:40:41.665 09:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:40:41.665 * Looking for test storage... 
00:40:41.665 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:41.665 09:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:40:41.665 09:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:40:41.665 09:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:40:41.665 09:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:40:41.665 09:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:41.665 09:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:41.665 09:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:41.665 09:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:40:41.665 09:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:40:41.665 09:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:40:41.665 09:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:40:41.665 09:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:40:41.665 09:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:40:41.665 09:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:40:41.665 09:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:41.665 09:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:40:41.665 09:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:40:41.665 09:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:41.665 09:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:41.666 09:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:40:41.666 09:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:40:41.666 09:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:41.666 09:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:40:41.666 09:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:40:41.666 09:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:40:41.666 09:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:40:41.666 09:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:41.666 09:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:40:41.666 09:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:40:41.666 09:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:41.666 09:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:41.666 09:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:40:41.666 09:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:41.666 09:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:40:41.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:41.666 --rc genhtml_branch_coverage=1 00:40:41.666 --rc genhtml_function_coverage=1 00:40:41.666 --rc genhtml_legend=1 00:40:41.666 --rc geninfo_all_blocks=1 00:40:41.666 --rc geninfo_unexecuted_blocks=1 00:40:41.666 00:40:41.666 ' 00:40:41.666 09:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:40:41.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:41.666 --rc genhtml_branch_coverage=1 00:40:41.666 --rc genhtml_function_coverage=1 00:40:41.666 --rc genhtml_legend=1 00:40:41.666 --rc geninfo_all_blocks=1 00:40:41.666 --rc geninfo_unexecuted_blocks=1 00:40:41.666 00:40:41.666 ' 00:40:41.666 09:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:40:41.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:41.666 --rc genhtml_branch_coverage=1 00:40:41.666 --rc genhtml_function_coverage=1 00:40:41.666 --rc genhtml_legend=1 00:40:41.666 --rc geninfo_all_blocks=1 00:40:41.666 --rc geninfo_unexecuted_blocks=1 00:40:41.666 00:40:41.666 ' 00:40:41.666 09:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:40:41.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:41.666 --rc genhtml_branch_coverage=1 00:40:41.666 --rc genhtml_function_coverage=1 00:40:41.666 --rc genhtml_legend=1 00:40:41.666 --rc geninfo_all_blocks=1 00:40:41.666 --rc geninfo_unexecuted_blocks=1 00:40:41.666 00:40:41.666 ' 00:40:41.666 09:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:41.666 09:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:40:41.666 09:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:41.666 09:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:41.666 09:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:41.666 09:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:41.666 09:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:41.666 09:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:41.666 09:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:41.666 09:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:41.666 09:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:41.666 09:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:41.666 09:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:41.666 09:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:41.666 09:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:41.666 09:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:41.666 09:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:41.666 09:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:41.666 09:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:41.666 09:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:40:41.666 09:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:41.666 09:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:41.666 09:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:41.666 09:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:41.666 09:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:41.666 09:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:41.666 09:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:40:41.666 09:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:41.666 09:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:40:41.666 09:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:41.666 09:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:41.666 09:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:41.666 09:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:41.666 09:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:41.666 09:40:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:41.666 09:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:41.666 09:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:41.666 09:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:41.666 09:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:41.666 09:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:41.666 09:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:41.666 09:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:40:41.666 09:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:41.666 09:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:41.666 09:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:41.666 09:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:41.666 09:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:41.666 09:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:41.666 09:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:41.666 09:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:41.666 09:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:41.666 09:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:41.666 09:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:40:41.666 09:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:43.570 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:43.570 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:40:43.570 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:43.570 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:43.570 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:43.570 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:43.570 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:43.570 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:40:43.570 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:43.570 09:40:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:40:43.570 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:40:43.570 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:40:43.570 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:40:43.570 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:40:43.570 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:40:43.570 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:43.570 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:43.570 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:43.570 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:43.570 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:43.570 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:43.570 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:43.570 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:43.570 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:43.570 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:43.570 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:43.570 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:43.570 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:43.570 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:43.570 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:43.570 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:43.570 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:43.570 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:43.570 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:43.570 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:40:43.570 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:40:43.570 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:43.570 09:40:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:43.570 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:43.570 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:43.570 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:43.570 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:43.570 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:40:43.570 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:40:43.570 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:43.570 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:43.570 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:43.570 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:43.570 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:43.570 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:43.570 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:43.570 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:43.570 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:43.570 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:43.570 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:43.570 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:43.570 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:43.570 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:43.570 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:43.570 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:40:43.570 Found net devices under 0000:0a:00.0: cvl_0_0 00:40:43.570 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:43.570 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:43.570 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:43.570 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:43.570 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:43.570 
09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:43.570 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:43.570 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:43.571 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:40:43.571 Found net devices under 0000:0a:00.1: cvl_0_1 00:40:43.571 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:43.571 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:43.571 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:40:43.571 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:43.571 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:43.571 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:43.571 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:43.571 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:43.571 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:43.571 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:43.571 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:43.571 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:43.571 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:43.571 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:43.571 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:43.571 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:43.571 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:43.571 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:43.571 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:43.571 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:43.571 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:43.571 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:43.571 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
00:40:43.571 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:43.571 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:43.571 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:43.571 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:43.571 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:43.571 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:43.571 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:43.571 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.225 ms 00:40:43.571 00:40:43.571 --- 10.0.0.2 ping statistics --- 00:40:43.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:43.571 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:40:43.571 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:43.571 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:43.571 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.071 ms 00:40:43.571 00:40:43.571 --- 10.0.0.1 ping statistics --- 00:40:43.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:43.571 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:40:43.571 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:43.571 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:40:43.571 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:43.571 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:43.571 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:43.571 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:43.571 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:43.571 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:43.571 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:43.571 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:40:43.571 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:43.571 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:43.571 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:43.571 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=3185870 00:40:43.571 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
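For anyone rebuilding this test bed by hand, the nvmf_tcp_init sequence above boils down to: move the target-side port into its own network namespace, address both sides on 10.0.0.0/24, open TCP/4420 through iptables, verify reachability with ping in both directions, and load the host nvme-tcp driver. A condensed recap of the commands the log just executed (the interface names cvl_0_0/cvl_0_1 and the namespace name are specific to this machine):

  ip netns add cvl_0_0_ns_spdk                                        # target runs inside this namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator
  modprobe nvme-tcp                                                   # initiator-side kernel driver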
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:40:43.571 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 3185870 00:40:43.571 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 3185870 ']' 00:40:43.571 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:43.571 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:43.571 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:43.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:43.571 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:43.571 09:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:43.830 [2024-11-17 09:40:48.604869] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:43.830 [2024-11-17 09:40:48.607443] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:40:43.830 [2024-11-17 09:40:48.607537] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:43.830 [2024-11-17 09:40:48.760979] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:44.089 [2024-11-17 09:40:48.903178] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:44.089 [2024-11-17 09:40:48.903256] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:44.089 [2024-11-17 09:40:48.903285] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:44.089 [2024-11-17 09:40:48.903307] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:44.089 [2024-11-17 09:40:48.903329] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:44.089 [2024-11-17 09:40:48.906053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:44.089 [2024-11-17 09:40:48.906121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:40:44.089 [2024-11-17 09:40:48.906204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:44.089 [2024-11-17 09:40:48.906214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:40:44.350 [2024-11-17 09:40:49.272155] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:40:44.350 [2024-11-17 09:40:49.286684] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:40:44.350 [2024-11-17 09:40:49.286862] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:40:44.350 [2024-11-17 09:40:49.287701] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:44.350 [2024-11-17 09:40:49.288047] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:40:44.609 09:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:44.609 09:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:40:44.609 09:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:44.609 09:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:44.609 09:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:44.609 09:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:44.609 09:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:44.609 09:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:44.609 09:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:44.609 [2024-11-17 09:40:49.583277] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:44.609 09:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:44.609 09:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:40:44.609 09:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:44.609 09:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:44.868 Malloc0 00:40:44.868 09:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:44.868 09:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:40:44.868 09:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:44.868 09:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:44.868 09:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:44.868 09:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:44.868 09:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:44.868 09:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:44.868 09:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:44.868 09:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:44.868 
09:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:44.868 09:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:44.868 [2024-11-17 09:40:49.699453] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:44.868 09:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:44.868 09:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:40:44.868 test case1: single bdev can't be used in multiple subsystems 00:40:44.868 09:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:40:44.868 09:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:44.868 09:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:44.868 09:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:44.868 09:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:40:44.868 09:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:44.868 09:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:44.868 09:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:44.868 09:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:40:44.868 09:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:40:44.868 09:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:44.868 09:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:44.868 [2024-11-17 09:40:49.723098] bdev.c:8198:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:40:44.868 [2024-11-17 09:40:49.723160] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:40:44.868 [2024-11-17 09:40:49.723185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:44.868 request: 00:40:44.868 { 00:40:44.868 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:40:44.868 "namespace": { 00:40:44.868 "bdev_name": "Malloc0", 00:40:44.868 "no_auto_visible": false 00:40:44.868 }, 00:40:44.868 "method": "nvmf_subsystem_add_ns", 00:40:44.868 "req_id": 1 00:40:44.868 } 00:40:44.868 Got JSON-RPC error response 00:40:44.868 response: 00:40:44.868 { 00:40:44.868 "code": -32602, 00:40:44.868 "message": "Invalid parameters" 00:40:44.868 } 00:40:44.868 09:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:40:44.868 09:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:40:44.868 09:40:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:40:44.868 09:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:40:44.868 Adding namespace failed - expected result. 00:40:44.868 09:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:40:44.868 test case2: host connect to nvmf target in multiple paths 00:40:44.868 09:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:40:44.868 09:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:44.868 09:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:44.868 [2024-11-17 09:40:49.731201] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:40:44.868 09:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:44.868 09:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:40:45.126 09:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:40:45.383 09:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:40:45.383 09:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:40:45.383 09:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:40:45.383 09:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:40:45.383 09:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:40:47.910 09:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:40:47.910 09:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:40:47.910 09:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:40:47.910 09:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:40:47.910 09:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:40:47.910 09:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:40:47.910 09:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:40:47.910 [global] 00:40:47.910 thread=1 00:40:47.910 invalidate=1 
00:40:47.910 rw=write 00:40:47.910 time_based=1 00:40:47.910 runtime=1 00:40:47.910 ioengine=libaio 00:40:47.910 direct=1 00:40:47.910 bs=4096 00:40:47.910 iodepth=1 00:40:47.910 norandommap=0 00:40:47.910 numjobs=1 00:40:47.910 00:40:47.910 verify_dump=1 00:40:47.910 verify_backlog=512 00:40:47.910 verify_state_save=0 00:40:47.910 do_verify=1 00:40:47.910 verify=crc32c-intel 00:40:47.910 [job0] 00:40:47.910 filename=/dev/nvme0n1 00:40:47.910 Could not set queue depth (nvme0n1) 00:40:47.910 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:47.910 fio-3.35 00:40:47.910 Starting 1 thread 00:40:48.845 00:40:48.845 job0: (groupid=0, jobs=1): err= 0: pid=3186500: Sun Nov 17 09:40:53 2024 00:40:48.845 read: IOPS=21, BW=85.4KiB/s (87.4kB/s)(88.0KiB/1031msec) 00:40:48.845 slat (nsec): min=7182, max=34785, avg=26133.00, stdev=9683.12 00:40:48.845 clat (usec): min=40871, max=41072, avg=40961.84, stdev=61.16 00:40:48.845 lat (usec): min=40906, max=41100, avg=40987.97, stdev=58.46 00:40:48.845 clat percentiles (usec): 00:40:48.845 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:40:48.845 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:40:48.845 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:40:48.845 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:40:48.845 | 99.99th=[41157] 00:40:48.845 write: IOPS=496, BW=1986KiB/s (2034kB/s)(2048KiB/1031msec); 0 zone resets 00:40:48.845 slat (nsec): min=7104, max=70259, avg=19067.58, stdev=9673.06 00:40:48.845 clat (usec): min=182, max=444, avg=228.38, stdev=27.95 00:40:48.845 lat (usec): min=191, max=470, avg=247.45, stdev=31.36 00:40:48.845 clat percentiles (usec): 00:40:48.845 | 1.00th=[ 188], 5.00th=[ 194], 10.00th=[ 204], 20.00th=[ 217], 00:40:48.845 | 30.00th=[ 221], 40.00th=[ 223], 50.00th=[ 225], 60.00th=[ 227], 00:40:48.845 | 70.00th=[ 233], 80.00th=[ 237], 90.00th=[ 249], 95.00th=[ 262], 00:40:48.845 | 99.00th=[ 347], 99.50th=[ 416], 99.90th=[ 445], 99.95th=[ 445], 00:40:48.845 | 99.99th=[ 445] 00:40:48.845 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:40:48.845 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:40:48.845 lat (usec) : 250=86.70%, 500=9.18% 00:40:48.845 lat (msec) : 50=4.12% 00:40:48.845 cpu : usr=0.78%, sys=1.07%, ctx=534, majf=0, minf=1 00:40:48.845 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:48.845 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:48.845 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:48.845 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:48.845 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:48.845 00:40:48.845 Run status group 0 (all jobs): 00:40:48.845 READ: bw=85.4KiB/s (87.4kB/s), 85.4KiB/s-85.4KiB/s (87.4kB/s-87.4kB/s), io=88.0KiB (90.1kB), run=1031-1031msec 00:40:48.845 WRITE: bw=1986KiB/s (2034kB/s), 1986KiB/s-1986KiB/s (2034kB/s-2034kB/s), io=2048KiB (2097kB), run=1031-1031msec 00:40:48.845 00:40:48.845 Disk stats (read/write): 00:40:48.845 nvme0n1: ios=68/512, merge=0/0, ticks=943/87, in_queue=1030, util=96.49% 00:40:48.845 09:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:40:49.104 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:40:49.104 09:40:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:40:49.104 09:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:40:49.104 09:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:40:49.104 09:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:49.104 09:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:40:49.104 09:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:49.104 09:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:40:49.104 09:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:40:49.104 09:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:40:49.104 09:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:49.104 09:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:40:49.104 09:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:49.104 09:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:40:49.104 09:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:49.104 09:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:49.104 rmmod nvme_tcp 00:40:49.104 rmmod nvme_fabrics 00:40:49.104 rmmod nvme_keyring 00:40:49.104 09:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:49.104 09:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:40:49.104 09:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:40:49.104 09:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 3185870 ']' 00:40:49.104 09:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 3185870 00:40:49.104 09:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 3185870 ']' 00:40:49.104 09:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 3185870 00:40:49.104 09:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:40:49.104 09:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:49.104 09:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3185870 00:40:49.104 09:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:49.104 09:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:49.104 09:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # 
echo 'killing process with pid 3185870' 00:40:49.104 killing process with pid 3185870 00:40:49.104 09:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 3185870 00:40:49.104 09:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 3185870 00:40:50.481 09:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:50.481 09:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:50.481 09:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:50.481 09:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:40:50.481 09:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:40:50.481 09:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:50.481 09:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:40:50.481 09:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:50.481 09:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:50.481 09:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:50.481 09:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:50.481 09:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:53.041 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:53.041 00:40:53.041 real 0m11.287s 00:40:53.041 user 0m19.833s 00:40:53.041 sys 0m3.514s 00:40:53.041 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:53.041 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:53.041 ************************************ 00:40:53.041 END TEST nvmf_nmic 00:40:53.041 ************************************ 00:40:53.041 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:40:53.041 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:53.041 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:53.041 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:53.041 ************************************ 00:40:53.041 START TEST nvmf_fio_target 00:40:53.041 ************************************ 00:40:53.041 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:40:53.041 * Looking for test storage... 
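To make the nmic run above easier to follow, its two test cases boil down to the RPC sequence below (NQNs, sizes and serials copied from the trace; condensed and slightly reordered for readability, and the nvme connect calls drop the --hostnqn/--hostid options shown in the trace):

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # test case1: a bdev already claimed by one subsystem cannot be added to another
  $rpc_py nvmf_create_transport -t tcp -o -u 8192
  $rpc_py bdev_malloc_create 64 512 -b Malloc0
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 \
      && echo 'unexpected success' || echo 'expected failure (-32602, bdev already claimed)'

  # test case2: one subsystem, two listeners, so the host connects over two paths
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421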
00:40:53.041 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:53.041 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:40:53.041 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:40:53.041 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:40:53.041 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:40:53.041 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:53.041 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:53.041 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:53.041 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:40:53.041 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:40:53.041 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:40:53.041 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:40:53.041 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:40:53.041 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:40:53.041 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:40:53.041 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:53.041 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:40:53.041 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:40:53.041 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:53.041 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:53.041 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:40:53.041 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:40:53.041 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:53.041 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:40:53.041 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:40:53.041 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:40:53.041 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:40:53.041 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:53.041 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:40:53.041 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:40:53.041 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:53.041 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:53.041 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:40:53.041 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:53.041 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:40:53.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:53.041 --rc genhtml_branch_coverage=1 00:40:53.041 --rc genhtml_function_coverage=1 00:40:53.041 --rc genhtml_legend=1 00:40:53.041 --rc geninfo_all_blocks=1 00:40:53.041 --rc geninfo_unexecuted_blocks=1 00:40:53.041 00:40:53.041 ' 00:40:53.041 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:40:53.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:53.041 --rc genhtml_branch_coverage=1 00:40:53.041 --rc genhtml_function_coverage=1 00:40:53.041 --rc genhtml_legend=1 00:40:53.041 --rc geninfo_all_blocks=1 00:40:53.041 --rc geninfo_unexecuted_blocks=1 00:40:53.041 00:40:53.041 ' 00:40:53.041 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:40:53.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:53.041 --rc genhtml_branch_coverage=1 00:40:53.041 --rc genhtml_function_coverage=1 00:40:53.041 --rc genhtml_legend=1 00:40:53.041 --rc geninfo_all_blocks=1 00:40:53.041 --rc geninfo_unexecuted_blocks=1 00:40:53.041 00:40:53.041 ' 00:40:53.041 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:40:53.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:53.041 --rc genhtml_branch_coverage=1 00:40:53.041 --rc genhtml_function_coverage=1 00:40:53.041 --rc genhtml_legend=1 00:40:53.041 --rc geninfo_all_blocks=1 00:40:53.041 --rc geninfo_unexecuted_blocks=1 00:40:53.041 
00:40:53.041 ' 00:40:53.041 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:53.041 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:40:53.041 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:53.041 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:53.041 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:53.041 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:53.041 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:53.041 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:53.041 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:53.041 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:53.041 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:53.041 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:53.041 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:53.041 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:53.041 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:53.041 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:53.041 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:53.041 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:53.041 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:53.041 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:40:53.041 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:53.041 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:53.042 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:53.042 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:53.042 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:53.042 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:53.042 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:40:53.042 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:53.042 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:40:53.042 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:53.042 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:53.042 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:53.042 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:53.042 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:40:53.042 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:53.042 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:53.042 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:53.042 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:53.042 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:53.042 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:53.042 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:53.042 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:53.042 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:40:53.042 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:53.042 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:53.042 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:53.042 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:53.042 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:53.042 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:53.042 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:53.042 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:53.042 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:53.042 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:53.042 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:40:53.042 09:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:40:54.943 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:54.943 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:40:54.943 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:54.943 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:54.943 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:54.944 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:54.944 09:40:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:54.944 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:40:54.944 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:54.944 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:40:54.944 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:40:54.944 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:40:54.944 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:40:54.944 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:40:54.944 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:40:54.944 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:54.944 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:54.944 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:54.944 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:54.944 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:54.944 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:54.944 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:54.944 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:54.944 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:54.944 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:54.944 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:54.944 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:54.944 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:54.944 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:54.944 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:54.944 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:54.944 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:54.944 09:40:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:54.944 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:54.944 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:40:54.944 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:40:54.944 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:54.944 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:54.944 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:54.944 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:54.944 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:54.944 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:54.944 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:40:54.944 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:40:54.944 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:54.944 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:54.944 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:54.944 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:54.944 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:54.944 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:54.944 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:54.944 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:54.944 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:54.944 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:54.944 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:54.944 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:54.944 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:54.944 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:54.944 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:54.944 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:40:54.944 Found net 
devices under 0000:0a:00.0: cvl_0_0 00:40:54.944 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:54.944 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:54.944 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:54.944 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:54.944 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:54.944 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:54.944 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:54.944 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:54.944 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:40:54.944 Found net devices under 0000:0a:00.1: cvl_0_1 00:40:54.944 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:54.944 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:54.944 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:40:54.944 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:54.944 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:54.944 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:54.944 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:54.944 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:54.944 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:54.944 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:54.944 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:54.944 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:54.944 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:54.944 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:54.944 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:54.944 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:54.944 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:40:54.944 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:54.944 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:54.944 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:54.944 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:54.944 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:54.944 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:54.944 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:54.944 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:54.944 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:54.944 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:54.944 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:54.944 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:54.944 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:54.944 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.350 ms 00:40:54.944 00:40:54.944 --- 10.0.0.2 ping statistics --- 00:40:54.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:54.945 rtt min/avg/max/mdev = 0.350/0.350/0.350/0.000 ms 00:40:54.945 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:54.945 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:54.945 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:40:54.945 00:40:54.945 --- 10.0.0.1 ping statistics --- 00:40:54.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:54.945 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:40:54.945 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:54.945 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:40:54.945 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:54.945 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:54.945 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:54.945 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:54.945 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:54.945 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:54.945 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:54.945 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:40:54.945 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:54.945 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:54.945 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:40:54.945 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=3188710 00:40:54.945 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:40:54.945 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 3188710 00:40:54.945 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 3188710 ']' 00:40:54.945 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:54.945 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:54.945 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:54.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
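The network prep traced just above (the same sequence the nmic test used) gives a simple two-port layout: physical port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2 for the target, cvl_0_1 stays in the root namespace as 10.0.0.1 for the initiator, TCP/4420 is opened in iptables, and a single ping in each direction confirms reachability. Condensed from the commands in the trace:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                     # root namespace -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target namespace -> initiator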
00:40:54.945 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:54.945 09:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:40:55.203 [2024-11-17 09:41:00.016037] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:55.203 [2024-11-17 09:41:00.019019] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:40:55.203 [2024-11-17 09:41:00.019131] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:55.203 [2024-11-17 09:41:00.167942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:55.461 [2024-11-17 09:41:00.307229] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:55.461 [2024-11-17 09:41:00.307318] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:55.462 [2024-11-17 09:41:00.307348] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:55.462 [2024-11-17 09:41:00.307379] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:55.462 [2024-11-17 09:41:00.307408] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:55.462 [2024-11-17 09:41:00.310255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:55.462 [2024-11-17 09:41:00.310329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:40:55.462 [2024-11-17 09:41:00.310421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:55.462 [2024-11-17 09:41:00.310430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:40:55.720 [2024-11-17 09:41:00.687255] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:40:55.720 [2024-11-17 09:41:00.698724] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:40:55.720 [2024-11-17 09:41:00.698956] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:40:55.720 [2024-11-17 09:41:00.699803] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:55.720 [2024-11-17 09:41:00.700155] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
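The NOTICE lines above show that, as in the nmic run, every reactor and every nvmf poll-group thread of this second target comes up in interrupt mode (the app is started with --interrupt-mode, so threads wait for events rather than busy-poll). If one wanted to inspect that state on a live target, SPDK exposes it over RPC; a hedged example, since the exact fields returned vary by SPDK version:

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # List reactors and their lightweight threads (the poll groups seen in the log)
  $rpc_py framework_get_reactors
  # Per-thread poller/busy statistics, useful to confirm threads are idle-waiting
  $rpc_py thread_get_stats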
00:40:56.287 09:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:56.287 09:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:40:56.287 09:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:56.287 09:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:56.287 09:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:40:56.287 09:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:56.287 09:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:40:56.545 [2024-11-17 09:41:01.315622] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:56.545 09:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:56.804 09:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:40:56.804 09:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:57.371 09:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:40:57.371 09:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:57.629 09:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:40:57.629 09:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:57.887 09:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:40:57.887 09:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:40:58.454 09:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:58.713 09:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:40:58.713 09:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:58.970 09:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:40:58.970 09:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:59.535 09:41:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:40:59.535 09:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:40:59.792 09:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:41:00.050 09:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:41:00.050 09:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:41:00.308 09:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:41:00.308 09:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:41:00.565 09:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:01.130 [2024-11-17 09:41:05.851736] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:01.130 09:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:41:01.388 09:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:41:01.645 09:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:41:01.903 09:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:41:01.903 09:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:41:01.903 09:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:41:01.903 09:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:41:01.903 09:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:41:01.903 09:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:41:03.800 09:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:41:03.800 09:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o 
NAME,SERIAL 00:41:03.800 09:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:41:03.800 09:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:41:03.800 09:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:41:03.800 09:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:41:03.800 09:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:41:03.800 [global] 00:41:03.800 thread=1 00:41:03.800 invalidate=1 00:41:03.800 rw=write 00:41:03.800 time_based=1 00:41:03.800 runtime=1 00:41:03.800 ioengine=libaio 00:41:03.800 direct=1 00:41:03.800 bs=4096 00:41:03.800 iodepth=1 00:41:03.800 norandommap=0 00:41:03.800 numjobs=1 00:41:03.800 00:41:03.800 verify_dump=1 00:41:03.800 verify_backlog=512 00:41:03.800 verify_state_save=0 00:41:03.800 do_verify=1 00:41:03.800 verify=crc32c-intel 00:41:03.800 [job0] 00:41:03.800 filename=/dev/nvme0n1 00:41:03.800 [job1] 00:41:03.800 filename=/dev/nvme0n2 00:41:03.800 [job2] 00:41:03.800 filename=/dev/nvme0n3 00:41:03.800 [job3] 00:41:03.800 filename=/dev/nvme0n4 00:41:03.800 Could not set queue depth (nvme0n1) 00:41:03.800 Could not set queue depth (nvme0n2) 00:41:03.800 Could not set queue depth (nvme0n3) 00:41:03.800 Could not set queue depth (nvme0n4) 00:41:04.058 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:04.058 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:04.058 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:04.058 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:04.058 fio-3.35 00:41:04.058 Starting 4 threads 00:41:05.432 00:41:05.432 job0: (groupid=0, jobs=1): err= 0: pid=3189909: Sun Nov 17 09:41:10 2024 00:41:05.432 read: IOPS=21, BW=86.4KiB/s (88.4kB/s)(88.0KiB/1019msec) 00:41:05.432 slat (nsec): min=7386, max=49899, avg=18497.05, stdev=10206.37 00:41:05.432 clat (usec): min=40905, max=41368, avg=40992.31, stdev=89.17 00:41:05.432 lat (usec): min=40941, max=41375, avg=41010.81, stdev=86.09 00:41:05.432 clat percentiles (usec): 00:41:05.432 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:41:05.432 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:41:05.432 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:41:05.432 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:41:05.432 | 99.99th=[41157] 00:41:05.432 write: IOPS=502, BW=2010KiB/s (2058kB/s)(2048KiB/1019msec); 0 zone resets 00:41:05.432 slat (nsec): min=7985, max=24483, avg=9183.01, stdev=2028.61 00:41:05.432 clat (usec): min=179, max=434, avg=214.76, stdev=21.40 00:41:05.432 lat (usec): min=188, max=442, avg=223.95, stdev=21.76 00:41:05.432 clat percentiles (usec): 00:41:05.432 | 1.00th=[ 190], 5.00th=[ 194], 10.00th=[ 196], 20.00th=[ 202], 00:41:05.432 | 30.00th=[ 206], 40.00th=[ 208], 50.00th=[ 212], 60.00th=[ 215], 00:41:05.432 | 70.00th=[ 219], 80.00th=[ 225], 90.00th=[ 233], 95.00th=[ 241], 00:41:05.432 | 
99.00th=[ 310], 99.50th=[ 363], 99.90th=[ 433], 99.95th=[ 433], 00:41:05.432 | 99.99th=[ 433] 00:41:05.432 bw ( KiB/s): min= 4096, max= 4096, per=27.88%, avg=4096.00, stdev= 0.00, samples=1 00:41:05.433 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:41:05.433 lat (usec) : 250=92.51%, 500=3.37% 00:41:05.433 lat (msec) : 50=4.12% 00:41:05.433 cpu : usr=0.20%, sys=0.69%, ctx=535, majf=0, minf=1 00:41:05.433 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:05.433 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:05.433 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:05.433 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:05.433 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:05.433 job1: (groupid=0, jobs=1): err= 0: pid=3189910: Sun Nov 17 09:41:10 2024 00:41:05.433 read: IOPS=938, BW=3755KiB/s (3845kB/s)(3804KiB/1013msec) 00:41:05.433 slat (nsec): min=4567, max=36909, avg=10581.08, stdev=4582.60 00:41:05.433 clat (usec): min=273, max=41943, avg=814.38, stdev=4353.79 00:41:05.433 lat (usec): min=278, max=41979, avg=824.96, stdev=4354.96 00:41:05.433 clat percentiles (usec): 00:41:05.433 | 1.00th=[ 277], 5.00th=[ 285], 10.00th=[ 289], 20.00th=[ 297], 00:41:05.433 | 30.00th=[ 306], 40.00th=[ 322], 50.00th=[ 343], 60.00th=[ 375], 00:41:05.433 | 70.00th=[ 379], 80.00th=[ 383], 90.00th=[ 388], 95.00th=[ 396], 00:41:05.433 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:41:05.433 | 99.99th=[42206] 00:41:05.433 write: IOPS=1010, BW=4043KiB/s (4140kB/s)(4096KiB/1013msec); 0 zone resets 00:41:05.433 slat (nsec): min=5789, max=34598, avg=7161.56, stdev=2059.10 00:41:05.433 clat (usec): min=184, max=518, avg=210.27, stdev=21.86 00:41:05.433 lat (usec): min=190, max=525, avg=217.43, stdev=22.27 00:41:05.433 clat percentiles (usec): 00:41:05.433 | 1.00th=[ 188], 5.00th=[ 192], 10.00th=[ 194], 20.00th=[ 198], 00:41:05.433 | 30.00th=[ 202], 40.00th=[ 204], 50.00th=[ 206], 60.00th=[ 210], 00:41:05.433 | 70.00th=[ 215], 80.00th=[ 219], 90.00th=[ 225], 95.00th=[ 233], 00:41:05.433 | 99.00th=[ 277], 99.50th=[ 375], 99.90th=[ 412], 99.95th=[ 519], 00:41:05.433 | 99.99th=[ 519] 00:41:05.433 bw ( KiB/s): min= 8192, max= 8192, per=55.77%, avg=8192.00, stdev= 0.00, samples=1 00:41:05.433 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:41:05.433 lat (usec) : 250=51.09%, 500=48.15%, 750=0.10% 00:41:05.433 lat (msec) : 2=0.10%, 50=0.56% 00:41:05.433 cpu : usr=0.99%, sys=1.58%, ctx=1978, majf=0, minf=1 00:41:05.433 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:05.433 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:05.433 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:05.433 issued rwts: total=951,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:05.433 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:05.433 job2: (groupid=0, jobs=1): err= 0: pid=3189911: Sun Nov 17 09:41:10 2024 00:41:05.433 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:41:05.433 slat (nsec): min=5502, max=57368, avg=10546.13, stdev=6272.90 00:41:05.433 clat (usec): min=268, max=41417, avg=369.82, stdev=1482.04 00:41:05.433 lat (usec): min=274, max=41434, avg=380.37, stdev=1482.15 00:41:05.433 clat percentiles (usec): 00:41:05.433 | 1.00th=[ 273], 5.00th=[ 281], 10.00th=[ 285], 20.00th=[ 293], 00:41:05.433 | 30.00th=[ 297], 40.00th=[ 302], 
50.00th=[ 310], 60.00th=[ 314], 00:41:05.433 | 70.00th=[ 318], 80.00th=[ 326], 90.00th=[ 343], 95.00th=[ 396], 00:41:05.433 | 99.00th=[ 578], 99.50th=[ 594], 99.90th=[41157], 99.95th=[41157], 00:41:05.433 | 99.99th=[41157] 00:41:05.433 write: IOPS=1692, BW=6769KiB/s (6932kB/s)(6776KiB/1001msec); 0 zone resets 00:41:05.433 slat (nsec): min=6857, max=58653, avg=11597.65, stdev=6193.97 00:41:05.433 clat (usec): min=187, max=819, avg=227.40, stdev=25.94 00:41:05.433 lat (usec): min=194, max=843, avg=239.00, stdev=28.31 00:41:05.433 clat percentiles (usec): 00:41:05.433 | 1.00th=[ 192], 5.00th=[ 198], 10.00th=[ 200], 20.00th=[ 206], 00:41:05.433 | 30.00th=[ 212], 40.00th=[ 221], 50.00th=[ 229], 60.00th=[ 235], 00:41:05.433 | 70.00th=[ 241], 80.00th=[ 245], 90.00th=[ 249], 95.00th=[ 258], 00:41:05.433 | 99.00th=[ 281], 99.50th=[ 302], 99.90th=[ 510], 99.95th=[ 824], 00:41:05.433 | 99.99th=[ 824] 00:41:05.433 bw ( KiB/s): min= 8192, max= 8192, per=55.77%, avg=8192.00, stdev= 0.00, samples=1 00:41:05.433 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:41:05.433 lat (usec) : 250=48.45%, 500=50.71%, 750=0.71%, 1000=0.06% 00:41:05.433 lat (msec) : 50=0.06% 00:41:05.433 cpu : usr=2.20%, sys=5.30%, ctx=3231, majf=0, minf=1 00:41:05.433 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:05.433 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:05.433 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:05.433 issued rwts: total=1536,1694,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:05.433 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:05.433 job3: (groupid=0, jobs=1): err= 0: pid=3189912: Sun Nov 17 09:41:10 2024 00:41:05.433 read: IOPS=81, BW=326KiB/s (334kB/s)(328KiB/1007msec) 00:41:05.433 slat (nsec): min=7306, max=43469, avg=18809.65, stdev=8181.29 00:41:05.433 clat (usec): min=304, max=41252, avg=10748.49, stdev=17834.43 00:41:05.433 lat (usec): min=318, max=41266, avg=10767.30, stdev=17833.63 00:41:05.433 clat percentiles (usec): 00:41:05.433 | 1.00th=[ 306], 5.00th=[ 310], 10.00th=[ 314], 20.00th=[ 318], 00:41:05.433 | 30.00th=[ 326], 40.00th=[ 330], 50.00th=[ 343], 60.00th=[ 375], 00:41:05.433 | 70.00th=[ 461], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:41:05.433 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:41:05.433 | 99.99th=[41157] 00:41:05.433 write: IOPS=508, BW=2034KiB/s (2083kB/s)(2048KiB/1007msec); 0 zone resets 00:41:05.433 slat (nsec): min=8283, max=28615, avg=9874.04, stdev=2706.83 00:41:05.433 clat (usec): min=188, max=407, avg=227.49, stdev=19.05 00:41:05.433 lat (usec): min=209, max=431, avg=237.36, stdev=19.45 00:41:05.433 clat percentiles (usec): 00:41:05.433 | 1.00th=[ 204], 5.00th=[ 208], 10.00th=[ 210], 20.00th=[ 217], 00:41:05.433 | 30.00th=[ 219], 40.00th=[ 223], 50.00th=[ 225], 60.00th=[ 229], 00:41:05.433 | 70.00th=[ 233], 80.00th=[ 237], 90.00th=[ 245], 95.00th=[ 251], 00:41:05.433 | 99.00th=[ 314], 99.50th=[ 338], 99.90th=[ 408], 99.95th=[ 408], 00:41:05.433 | 99.99th=[ 408] 00:41:05.433 bw ( KiB/s): min= 4096, max= 4096, per=27.88%, avg=4096.00, stdev= 0.00, samples=1 00:41:05.433 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:41:05.433 lat (usec) : 250=81.14%, 500=14.98%, 750=0.34% 00:41:05.433 lat (msec) : 50=3.54% 00:41:05.433 cpu : usr=0.10%, sys=1.09%, ctx=595, majf=0, minf=1 00:41:05.433 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:05.433 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:05.433 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:05.433 issued rwts: total=82,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:05.433 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:05.433 00:41:05.433 Run status group 0 (all jobs): 00:41:05.433 READ: bw=9.93MiB/s (10.4MB/s), 86.4KiB/s-6138KiB/s (88.4kB/s-6285kB/s), io=10.1MiB (10.6MB), run=1001-1019msec 00:41:05.433 WRITE: bw=14.3MiB/s (15.0MB/s), 2010KiB/s-6769KiB/s (2058kB/s-6932kB/s), io=14.6MiB (15.3MB), run=1001-1019msec 00:41:05.433 00:41:05.433 Disk stats (read/write): 00:41:05.433 nvme0n1: ios=46/512, merge=0/0, ticks=1684/104, in_queue=1788, util=98.10% 00:41:05.433 nvme0n2: ios=971/1024, merge=0/0, ticks=1599/206, in_queue=1805, util=98.37% 00:41:05.433 nvme0n3: ios=1342/1536, merge=0/0, ticks=458/332, in_queue=790, util=89.03% 00:41:05.433 nvme0n4: ios=74/512, merge=0/0, ticks=1642/114, in_queue=1756, util=98.32% 00:41:05.433 09:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:41:05.433 [global] 00:41:05.433 thread=1 00:41:05.433 invalidate=1 00:41:05.433 rw=randwrite 00:41:05.433 time_based=1 00:41:05.433 runtime=1 00:41:05.433 ioengine=libaio 00:41:05.433 direct=1 00:41:05.433 bs=4096 00:41:05.433 iodepth=1 00:41:05.433 norandommap=0 00:41:05.433 numjobs=1 00:41:05.433 00:41:05.433 verify_dump=1 00:41:05.433 verify_backlog=512 00:41:05.433 verify_state_save=0 00:41:05.433 do_verify=1 00:41:05.433 verify=crc32c-intel 00:41:05.433 [job0] 00:41:05.433 filename=/dev/nvme0n1 00:41:05.433 [job1] 00:41:05.433 filename=/dev/nvme0n2 00:41:05.433 [job2] 00:41:05.433 filename=/dev/nvme0n3 00:41:05.433 [job3] 00:41:05.433 filename=/dev/nvme0n4 00:41:05.433 Could not set queue depth (nvme0n1) 00:41:05.433 Could not set queue depth (nvme0n2) 00:41:05.433 Could not set queue depth (nvme0n3) 00:41:05.433 Could not set queue depth (nvme0n4) 00:41:05.433 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:05.433 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:05.433 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:05.433 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:05.433 fio-3.35 00:41:05.433 Starting 4 threads 00:41:06.807 00:41:06.807 job0: (groupid=0, jobs=1): err= 0: pid=3190132: Sun Nov 17 09:41:11 2024 00:41:06.807 read: IOPS=1176, BW=4707KiB/s (4820kB/s)(4712KiB/1001msec) 00:41:06.807 slat (nsec): min=5407, max=26629, avg=7639.21, stdev=2979.33 00:41:06.807 clat (usec): min=252, max=41377, avg=536.14, stdev=2643.00 00:41:06.807 lat (usec): min=258, max=41383, avg=543.78, stdev=2642.93 00:41:06.807 clat percentiles (usec): 00:41:06.807 | 1.00th=[ 269], 5.00th=[ 277], 10.00th=[ 281], 20.00th=[ 285], 00:41:06.807 | 30.00th=[ 289], 40.00th=[ 297], 50.00th=[ 334], 60.00th=[ 363], 00:41:06.807 | 70.00th=[ 379], 80.00th=[ 453], 90.00th=[ 506], 95.00th=[ 562], 00:41:06.807 | 99.00th=[ 701], 99.50th=[ 1221], 99.90th=[41157], 99.95th=[41157], 00:41:06.807 | 99.99th=[41157] 00:41:06.807 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:41:06.807 slat (nsec): min=6786, max=53105, avg=10178.34, 
stdev=4718.12 00:41:06.807 clat (usec): min=176, max=415, avg=219.27, stdev=37.53 00:41:06.807 lat (usec): min=184, max=457, avg=229.45, stdev=39.05 00:41:06.807 clat percentiles (usec): 00:41:06.807 | 1.00th=[ 180], 5.00th=[ 188], 10.00th=[ 192], 20.00th=[ 196], 00:41:06.807 | 30.00th=[ 200], 40.00th=[ 204], 50.00th=[ 208], 60.00th=[ 215], 00:41:06.807 | 70.00th=[ 223], 80.00th=[ 231], 90.00th=[ 253], 95.00th=[ 297], 00:41:06.807 | 99.00th=[ 392], 99.50th=[ 400], 99.90th=[ 412], 99.95th=[ 416], 00:41:06.807 | 99.99th=[ 416] 00:41:06.807 bw ( KiB/s): min= 8192, max= 8192, per=68.73%, avg=8192.00, stdev= 0.00, samples=1 00:41:06.807 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:41:06.807 lat (usec) : 250=50.55%, 500=44.22%, 750=5.01% 00:41:06.808 lat (msec) : 2=0.04%, 50=0.18% 00:41:06.808 cpu : usr=2.10%, sys=2.70%, ctx=2715, majf=0, minf=1 00:41:06.808 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:06.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:06.808 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:06.808 issued rwts: total=1178,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:06.808 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:06.808 job1: (groupid=0, jobs=1): err= 0: pid=3190133: Sun Nov 17 09:41:11 2024 00:41:06.808 read: IOPS=20, BW=83.2KiB/s (85.2kB/s)(84.0KiB/1010msec) 00:41:06.808 slat (nsec): min=8304, max=18999, avg=13226.29, stdev=1877.39 00:41:06.808 clat (usec): min=391, max=42023, avg=39255.23, stdev=8916.97 00:41:06.808 lat (usec): min=410, max=42036, avg=39268.46, stdev=8915.64 00:41:06.808 clat percentiles (usec): 00:41:06.808 | 1.00th=[ 392], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:41:06.808 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:41:06.808 | 70.00th=[41157], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:41:06.808 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:41:06.808 | 99.99th=[42206] 00:41:06.808 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets 00:41:06.808 slat (nsec): min=6749, max=59709, avg=20652.17, stdev=10812.94 00:41:06.808 clat (usec): min=196, max=1121, avg=336.24, stdev=72.59 00:41:06.808 lat (usec): min=206, max=1160, avg=356.89, stdev=75.12 00:41:06.808 clat percentiles (usec): 00:41:06.808 | 1.00th=[ 223], 5.00th=[ 241], 10.00th=[ 260], 20.00th=[ 285], 00:41:06.808 | 30.00th=[ 297], 40.00th=[ 306], 50.00th=[ 322], 60.00th=[ 347], 00:41:06.808 | 70.00th=[ 367], 80.00th=[ 392], 90.00th=[ 424], 95.00th=[ 449], 00:41:06.808 | 99.00th=[ 502], 99.50th=[ 510], 99.90th=[ 1123], 99.95th=[ 1123], 00:41:06.808 | 99.99th=[ 1123] 00:41:06.808 bw ( KiB/s): min= 4096, max= 4096, per=34.37%, avg=4096.00, stdev= 0.00, samples=1 00:41:06.808 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:41:06.808 lat (usec) : 250=8.82%, 500=86.30%, 750=0.94% 00:41:06.808 lat (msec) : 2=0.19%, 50=3.75% 00:41:06.808 cpu : usr=0.59%, sys=0.99%, ctx=533, majf=0, minf=2 00:41:06.808 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:06.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:06.808 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:06.808 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:06.808 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:06.808 job2: (groupid=0, jobs=1): err= 0: pid=3190134: Sun 
Nov 17 09:41:11 2024 00:41:06.808 read: IOPS=19, BW=79.5KiB/s (81.4kB/s)(80.0KiB/1006msec) 00:41:06.808 slat (nsec): min=8443, max=27525, avg=13423.20, stdev=3570.10 00:41:06.808 clat (usec): min=40775, max=41098, avg=40974.09, stdev=70.58 00:41:06.808 lat (usec): min=40783, max=41110, avg=40987.51, stdev=71.98 00:41:06.808 clat percentiles (usec): 00:41:06.808 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:41:06.808 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:41:06.808 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:41:06.808 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:41:06.808 | 99.99th=[41157] 00:41:06.808 write: IOPS=508, BW=2036KiB/s (2085kB/s)(2048KiB/1006msec); 0 zone resets 00:41:06.808 slat (nsec): min=8443, max=61657, avg=19102.62, stdev=10480.66 00:41:06.808 clat (usec): min=209, max=1107, avg=339.38, stdev=68.18 00:41:06.808 lat (usec): min=219, max=1116, avg=358.49, stdev=71.32 00:41:06.808 clat percentiles (usec): 00:41:06.808 | 1.00th=[ 235], 5.00th=[ 258], 10.00th=[ 273], 20.00th=[ 285], 00:41:06.808 | 30.00th=[ 297], 40.00th=[ 310], 50.00th=[ 326], 60.00th=[ 343], 00:41:06.808 | 70.00th=[ 367], 80.00th=[ 400], 90.00th=[ 429], 95.00th=[ 441], 00:41:06.808 | 99.00th=[ 465], 99.50th=[ 506], 99.90th=[ 1106], 99.95th=[ 1106], 00:41:06.808 | 99.99th=[ 1106] 00:41:06.808 bw ( KiB/s): min= 4096, max= 4096, per=34.37%, avg=4096.00, stdev= 0.00, samples=1 00:41:06.808 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:41:06.808 lat (usec) : 250=3.01%, 500=92.67%, 750=0.38% 00:41:06.808 lat (msec) : 2=0.19%, 50=3.76% 00:41:06.808 cpu : usr=0.70%, sys=1.19%, ctx=532, majf=0, minf=2 00:41:06.808 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:06.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:06.808 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:06.808 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:06.808 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:06.808 job3: (groupid=0, jobs=1): err= 0: pid=3190135: Sun Nov 17 09:41:11 2024 00:41:06.808 read: IOPS=21, BW=85.4KiB/s (87.4kB/s)(88.0KiB/1031msec) 00:41:06.808 slat (nsec): min=8761, max=17107, avg=13781.27, stdev=1614.76 00:41:06.808 clat (usec): min=40909, max=41283, avg=40995.17, stdev=76.21 00:41:06.808 lat (usec): min=40926, max=41292, avg=41008.95, stdev=75.15 00:41:06.808 clat percentiles (usec): 00:41:06.808 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:41:06.808 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:41:06.808 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:41:06.808 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:41:06.808 | 99.99th=[41157] 00:41:06.808 write: IOPS=496, BW=1986KiB/s (2034kB/s)(2048KiB/1031msec); 0 zone resets 00:41:06.808 slat (nsec): min=8218, max=58370, avg=15931.68, stdev=7043.45 00:41:06.808 clat (usec): min=200, max=562, avg=231.98, stdev=31.78 00:41:06.808 lat (usec): min=208, max=581, avg=247.91, stdev=34.33 00:41:06.808 clat percentiles (usec): 00:41:06.808 | 1.00th=[ 202], 5.00th=[ 204], 10.00th=[ 206], 20.00th=[ 212], 00:41:06.808 | 30.00th=[ 219], 40.00th=[ 223], 50.00th=[ 227], 60.00th=[ 231], 00:41:06.808 | 70.00th=[ 235], 80.00th=[ 241], 90.00th=[ 253], 95.00th=[ 285], 00:41:06.808 | 99.00th=[ 355], 99.50th=[ 453], 99.90th=[ 562], 
99.95th=[ 562], 00:41:06.808 | 99.99th=[ 562] 00:41:06.808 bw ( KiB/s): min= 4096, max= 4096, per=34.37%, avg=4096.00, stdev= 0.00, samples=1 00:41:06.808 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:41:06.808 lat (usec) : 250=84.83%, 500=10.86%, 750=0.19% 00:41:06.808 lat (msec) : 50=4.12% 00:41:06.808 cpu : usr=0.49%, sys=0.68%, ctx=535, majf=0, minf=1 00:41:06.808 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:06.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:06.808 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:06.808 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:06.808 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:06.808 00:41:06.808 Run status group 0 (all jobs): 00:41:06.808 READ: bw=4815KiB/s (4930kB/s), 79.5KiB/s-4707KiB/s (81.4kB/s-4820kB/s), io=4964KiB (5083kB), run=1001-1031msec 00:41:06.808 WRITE: bw=11.6MiB/s (12.2MB/s), 1986KiB/s-6138KiB/s (2034kB/s-6285kB/s), io=12.0MiB (12.6MB), run=1001-1031msec 00:41:06.808 00:41:06.808 Disk stats (read/write): 00:41:06.808 nvme0n1: ios=1115/1536, merge=0/0, ticks=693/331, in_queue=1024, util=96.49% 00:41:06.808 nvme0n2: ios=65/512, merge=0/0, ticks=727/166, in_queue=893, util=90.73% 00:41:06.808 nvme0n3: ios=16/512, merge=0/0, ticks=656/173, in_queue=829, util=88.77% 00:41:06.808 nvme0n4: ios=40/512, merge=0/0, ticks=1613/111, in_queue=1724, util=97.57% 00:41:06.808 09:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:41:06.808 [global] 00:41:06.808 thread=1 00:41:06.808 invalidate=1 00:41:06.808 rw=write 00:41:06.808 time_based=1 00:41:06.808 runtime=1 00:41:06.808 ioengine=libaio 00:41:06.808 direct=1 00:41:06.808 bs=4096 00:41:06.808 iodepth=128 00:41:06.808 norandommap=0 00:41:06.808 numjobs=1 00:41:06.808 00:41:06.808 verify_dump=1 00:41:06.808 verify_backlog=512 00:41:06.808 verify_state_save=0 00:41:06.808 do_verify=1 00:41:06.808 verify=crc32c-intel 00:41:06.808 [job0] 00:41:06.808 filename=/dev/nvme0n1 00:41:06.808 [job1] 00:41:06.808 filename=/dev/nvme0n2 00:41:06.808 [job2] 00:41:06.808 filename=/dev/nvme0n3 00:41:06.808 [job3] 00:41:06.808 filename=/dev/nvme0n4 00:41:06.808 Could not set queue depth (nvme0n1) 00:41:06.808 Could not set queue depth (nvme0n2) 00:41:06.808 Could not set queue depth (nvme0n3) 00:41:06.808 Could not set queue depth (nvme0n4) 00:41:07.067 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:07.067 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:07.067 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:07.067 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:07.067 fio-3.35 00:41:07.067 Starting 4 threads 00:41:08.442 00:41:08.442 job0: (groupid=0, jobs=1): err= 0: pid=3190424: Sun Nov 17 09:41:13 2024 00:41:08.442 read: IOPS=4376, BW=17.1MiB/s (17.9MB/s)(17.3MiB/1013msec) 00:41:08.442 slat (usec): min=2, max=14196, avg=101.29, stdev=619.27 00:41:08.442 clat (usec): min=826, max=38373, avg=14545.26, stdev=5004.12 00:41:08.442 lat (usec): min=9118, max=38383, avg=14646.56, stdev=5024.14 00:41:08.442 clat percentiles (usec): 00:41:08.442 | 1.00th=[ 
9765], 5.00th=[10552], 10.00th=[11076], 20.00th=[11731], 00:41:08.442 | 30.00th=[12125], 40.00th=[12518], 50.00th=[12780], 60.00th=[13304], 00:41:08.442 | 70.00th=[13829], 80.00th=[16909], 90.00th=[20055], 95.00th=[22152], 00:41:08.442 | 99.00th=[37487], 99.50th=[38536], 99.90th=[38536], 99.95th=[38536], 00:41:08.442 | 99.99th=[38536] 00:41:08.442 write: IOPS=4548, BW=17.8MiB/s (18.6MB/s)(18.0MiB/1013msec); 0 zone resets 00:41:08.442 slat (usec): min=3, max=17120, avg=101.24, stdev=683.00 00:41:08.442 clat (usec): min=8101, max=39224, avg=13827.82, stdev=3957.76 00:41:08.442 lat (usec): min=8121, max=39232, avg=13929.06, stdev=4000.55 00:41:08.442 clat percentiles (usec): 00:41:08.442 | 1.00th=[ 9110], 5.00th=[ 9765], 10.00th=[11207], 20.00th=[11863], 00:41:08.442 | 30.00th=[12256], 40.00th=[12518], 50.00th=[12649], 60.00th=[12911], 00:41:08.442 | 70.00th=[13435], 80.00th=[15139], 90.00th=[17957], 95.00th=[19006], 00:41:08.442 | 99.00th=[31851], 99.50th=[34866], 99.90th=[34866], 99.95th=[34866], 00:41:08.442 | 99.99th=[39060] 00:41:08.442 bw ( KiB/s): min=16384, max=20480, per=41.57%, avg=18432.00, stdev=2896.31, samples=2 00:41:08.442 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:41:08.442 lat (usec) : 1000=0.01% 00:41:08.442 lat (msec) : 10=3.86%, 20=88.59%, 50=7.54% 00:41:08.442 cpu : usr=6.82%, sys=11.96%, ctx=432, majf=0, minf=1 00:41:08.442 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:41:08.442 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:08.442 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:08.442 issued rwts: total=4433,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:08.442 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:08.442 job1: (groupid=0, jobs=1): err= 0: pid=3190437: Sun Nov 17 09:41:13 2024 00:41:08.442 read: IOPS=1514, BW=6059KiB/s (6205kB/s)(6144KiB/1014msec) 00:41:08.442 slat (usec): min=3, max=23505, avg=208.06, stdev=1239.17 00:41:08.442 clat (usec): min=12062, max=54041, avg=23718.33, stdev=7411.42 00:41:08.442 lat (usec): min=12083, max=54071, avg=23926.40, stdev=7532.75 00:41:08.442 clat percentiles (usec): 00:41:08.442 | 1.00th=[13960], 5.00th=[16188], 10.00th=[17695], 20.00th=[18220], 00:41:08.442 | 30.00th=[20055], 40.00th=[21365], 50.00th=[22414], 60.00th=[22676], 00:41:08.442 | 70.00th=[22676], 80.00th=[25297], 90.00th=[33162], 95.00th=[43254], 00:41:08.442 | 99.00th=[47449], 99.50th=[47973], 99.90th=[51643], 99.95th=[54264], 00:41:08.442 | 99.99th=[54264] 00:41:08.442 write: IOPS=1734, BW=6939KiB/s (7105kB/s)(7036KiB/1014msec); 0 zone resets 00:41:08.442 slat (usec): min=5, max=33884, avg=372.39, stdev=1846.42 00:41:08.442 clat (usec): min=11882, max=86077, avg=51485.86, stdev=14673.98 00:41:08.442 lat (usec): min=14073, max=86131, avg=51858.25, stdev=14793.74 00:41:08.442 clat percentiles (usec): 00:41:08.442 | 1.00th=[18744], 5.00th=[30278], 10.00th=[32900], 20.00th=[40109], 00:41:08.442 | 30.00th=[42206], 40.00th=[48497], 50.00th=[51119], 60.00th=[53740], 00:41:08.442 | 70.00th=[56361], 80.00th=[64226], 90.00th=[74974], 95.00th=[79168], 00:41:08.442 | 99.00th=[82314], 99.50th=[83362], 99.90th=[84411], 99.95th=[86508], 00:41:08.442 | 99.99th=[86508] 00:41:08.442 bw ( KiB/s): min= 6456, max= 6592, per=14.71%, avg=6524.00, stdev=96.17, samples=2 00:41:08.442 iops : min= 1614, max= 1648, avg=1631.00, stdev=24.04, samples=2 00:41:08.442 lat (msec) : 20=13.93%, 50=55.57%, 100=30.50% 00:41:08.442 cpu : usr=2.37%, sys=5.43%, 
ctx=203, majf=0, minf=1 00:41:08.442 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:41:08.442 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:08.442 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:08.442 issued rwts: total=1536,1759,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:08.442 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:08.442 job2: (groupid=0, jobs=1): err= 0: pid=3190470: Sun Nov 17 09:41:13 2024 00:41:08.442 read: IOPS=2072, BW=8288KiB/s (8487kB/s)(8396KiB/1013msec) 00:41:08.442 slat (usec): min=3, max=12684, avg=163.12, stdev=1059.55 00:41:08.442 clat (usec): min=3210, max=45055, avg=20429.77, stdev=7725.50 00:41:08.442 lat (usec): min=3857, max=45585, avg=20592.89, stdev=7818.44 00:41:08.442 clat percentiles (usec): 00:41:08.442 | 1.00th=[12387], 5.00th=[12518], 10.00th=[12780], 20.00th=[13304], 00:41:08.442 | 30.00th=[13566], 40.00th=[16712], 50.00th=[19006], 60.00th=[19268], 00:41:08.442 | 70.00th=[23200], 80.00th=[29492], 90.00th=[30802], 95.00th=[35914], 00:41:08.442 | 99.00th=[40109], 99.50th=[42730], 99.90th=[44827], 99.95th=[44827], 00:41:08.442 | 99.99th=[44827] 00:41:08.442 write: IOPS=2527, BW=9.87MiB/s (10.4MB/s)(10.0MiB/1013msec); 0 zone resets 00:41:08.442 slat (usec): min=5, max=11620, avg=244.06, stdev=1068.10 00:41:08.442 clat (usec): min=877, max=88686, avg=33420.31, stdev=23200.72 00:41:08.442 lat (usec): min=886, max=88695, avg=33664.37, stdev=23360.76 00:41:08.442 clat percentiles (usec): 00:41:08.442 | 1.00th=[ 3752], 5.00th=[ 8586], 10.00th=[13042], 20.00th=[13698], 00:41:08.442 | 30.00th=[15008], 40.00th=[16057], 50.00th=[22938], 60.00th=[36439], 00:41:08.442 | 70.00th=[44303], 80.00th=[56361], 90.00th=[70779], 95.00th=[79168], 00:41:08.442 | 99.00th=[86508], 99.50th=[87557], 99.90th=[88605], 99.95th=[88605], 00:41:08.442 | 99.99th=[88605] 00:41:08.442 bw ( KiB/s): min= 7584, max=12288, per=22.41%, avg=9936.00, stdev=3326.23, samples=2 00:41:08.442 iops : min= 1896, max= 3072, avg=2484.00, stdev=831.56, samples=2 00:41:08.442 lat (usec) : 1000=0.06% 00:41:08.442 lat (msec) : 2=0.11%, 4=0.52%, 10=3.09%, 20=51.02%, 50=30.50% 00:41:08.442 lat (msec) : 100=14.70% 00:41:08.442 cpu : usr=1.48%, sys=4.35%, ctx=286, majf=0, minf=1 00:41:08.442 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:41:08.442 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:08.442 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:08.442 issued rwts: total=2099,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:08.442 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:08.442 job3: (groupid=0, jobs=1): err= 0: pid=3190481: Sun Nov 17 09:41:13 2024 00:41:08.442 read: IOPS=2023, BW=8095KiB/s (8289kB/s)(8192KiB/1012msec) 00:41:08.442 slat (usec): min=3, max=21670, avg=219.64, stdev=1309.91 00:41:08.442 clat (usec): min=11884, max=79941, avg=26642.64, stdev=9533.36 00:41:08.442 lat (usec): min=11891, max=79956, avg=26862.27, stdev=9661.46 00:41:08.442 clat percentiles (usec): 00:41:08.442 | 1.00th=[13435], 5.00th=[15008], 10.00th=[17171], 20.00th=[21365], 00:41:08.442 | 30.00th=[21627], 40.00th=[22414], 50.00th=[25297], 60.00th=[27395], 00:41:08.442 | 70.00th=[27919], 80.00th=[31065], 90.00th=[34866], 95.00th=[45876], 00:41:08.442 | 99.00th=[67634], 99.50th=[78119], 99.90th=[78119], 99.95th=[78119], 00:41:08.442 | 99.99th=[80217] 00:41:08.442 write: IOPS=2286, BW=9146KiB/s 
(9366kB/s)(9256KiB/1012msec); 0 zone resets 00:41:08.442 slat (usec): min=3, max=33728, avg=225.26, stdev=1483.94 00:41:08.442 clat (usec): min=10060, max=93652, avg=31501.60, stdev=15250.36 00:41:08.442 lat (usec): min=10070, max=93691, avg=31726.86, stdev=15347.24 00:41:08.442 clat percentiles (usec): 00:41:08.442 | 1.00th=[11469], 5.00th=[14877], 10.00th=[15533], 20.00th=[15795], 00:41:08.442 | 30.00th=[22152], 40.00th=[28967], 50.00th=[30278], 60.00th=[32113], 00:41:08.442 | 70.00th=[33817], 80.00th=[38011], 90.00th=[57934], 95.00th=[61604], 00:41:08.442 | 99.00th=[82314], 99.50th=[84411], 99.90th=[84411], 99.95th=[85459], 00:41:08.442 | 99.99th=[93848] 00:41:08.442 bw ( KiB/s): min= 8192, max= 9304, per=19.73%, avg=8748.00, stdev=786.30, samples=2 00:41:08.442 iops : min= 2048, max= 2326, avg=2187.00, stdev=196.58, samples=2 00:41:08.442 lat (msec) : 20=21.60%, 50=70.70%, 100=7.70% 00:41:08.442 cpu : usr=2.47%, sys=6.43%, ctx=217, majf=0, minf=1 00:41:08.442 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:41:08.442 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:08.442 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:08.442 issued rwts: total=2048,2314,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:08.442 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:08.442 00:41:08.442 Run status group 0 (all jobs): 00:41:08.442 READ: bw=39.0MiB/s (40.9MB/s), 6059KiB/s-17.1MiB/s (6205kB/s-17.9MB/s), io=39.5MiB (41.4MB), run=1012-1014msec 00:41:08.442 WRITE: bw=43.3MiB/s (45.4MB/s), 6939KiB/s-17.8MiB/s (7105kB/s-18.6MB/s), io=43.9MiB (46.0MB), run=1012-1014msec 00:41:08.442 00:41:08.442 Disk stats (read/write): 00:41:08.442 nvme0n1: ios=3619/3831, merge=0/0, ticks=24261/25522, in_queue=49783, util=98.40% 00:41:08.442 nvme0n2: ios=1054/1487, merge=0/0, ticks=13826/38554, in_queue=52380, util=100.00% 00:41:08.442 nvme0n3: ios=2089/2159, merge=0/0, ticks=24181/37292, in_queue=61473, util=96.34% 00:41:08.442 nvme0n4: ios=1777/2048, merge=0/0, ticks=16613/20565, in_queue=37178, util=96.63% 00:41:08.442 09:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:41:08.442 [global] 00:41:08.442 thread=1 00:41:08.442 invalidate=1 00:41:08.442 rw=randwrite 00:41:08.442 time_based=1 00:41:08.442 runtime=1 00:41:08.442 ioengine=libaio 00:41:08.442 direct=1 00:41:08.442 bs=4096 00:41:08.442 iodepth=128 00:41:08.442 norandommap=0 00:41:08.442 numjobs=1 00:41:08.442 00:41:08.442 verify_dump=1 00:41:08.442 verify_backlog=512 00:41:08.442 verify_state_save=0 00:41:08.442 do_verify=1 00:41:08.442 verify=crc32c-intel 00:41:08.442 [job0] 00:41:08.442 filename=/dev/nvme0n1 00:41:08.442 [job1] 00:41:08.442 filename=/dev/nvme0n2 00:41:08.442 [job2] 00:41:08.443 filename=/dev/nvme0n3 00:41:08.443 [job3] 00:41:08.443 filename=/dev/nvme0n4 00:41:08.443 Could not set queue depth (nvme0n1) 00:41:08.443 Could not set queue depth (nvme0n2) 00:41:08.443 Could not set queue depth (nvme0n3) 00:41:08.443 Could not set queue depth (nvme0n4) 00:41:08.443 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:08.443 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:08.443 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=128 00:41:08.443 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:08.443 fio-3.35 00:41:08.443 Starting 4 threads 00:41:09.817 00:41:09.817 job0: (groupid=0, jobs=1): err= 0: pid=3190716: Sun Nov 17 09:41:14 2024 00:41:09.817 read: IOPS=3799, BW=14.8MiB/s (15.6MB/s)(15.6MiB/1054msec) 00:41:09.817 slat (usec): min=2, max=26745, avg=129.93, stdev=1107.05 00:41:09.817 clat (usec): min=2691, max=82153, avg=17672.05, stdev=11348.91 00:41:09.817 lat (usec): min=3033, max=98796, avg=17801.98, stdev=11427.87 00:41:09.817 clat percentiles (usec): 00:41:09.817 | 1.00th=[ 9110], 5.00th=[10683], 10.00th=[11731], 20.00th=[12387], 00:41:09.817 | 30.00th=[12518], 40.00th=[12780], 50.00th=[13566], 60.00th=[14484], 00:41:09.817 | 70.00th=[17957], 80.00th=[20317], 90.00th=[25822], 95.00th=[35914], 00:41:09.817 | 99.00th=[81265], 99.50th=[82314], 99.90th=[82314], 99.95th=[82314], 00:41:09.817 | 99.99th=[82314] 00:41:09.818 write: IOPS=3886, BW=15.2MiB/s (15.9MB/s)(16.0MiB/1054msec); 0 zone resets 00:41:09.818 slat (usec): min=3, max=19452, avg=106.10, stdev=983.13 00:41:09.818 clat (usec): min=1759, max=54238, avg=15279.68, stdev=6650.17 00:41:09.818 lat (usec): min=1767, max=54263, avg=15385.78, stdev=6721.64 00:41:09.818 clat percentiles (usec): 00:41:09.818 | 1.00th=[ 3458], 5.00th=[ 8029], 10.00th=[ 9503], 20.00th=[11338], 00:41:09.818 | 30.00th=[11994], 40.00th=[12518], 50.00th=[13304], 60.00th=[14222], 00:41:09.818 | 70.00th=[15533], 80.00th=[19006], 90.00th=[25297], 95.00th=[30016], 00:41:09.818 | 99.00th=[39584], 99.50th=[39584], 99.90th=[39584], 99.95th=[39584], 00:41:09.818 | 99.99th=[54264] 00:41:09.818 bw ( KiB/s): min=13616, max=19152, per=33.27%, avg=16384.00, stdev=3914.54, samples=2 00:41:09.818 iops : min= 3404, max= 4788, avg=4096.00, stdev=978.64, samples=2 00:41:09.818 lat (msec) : 2=0.09%, 4=0.81%, 10=6.76%, 20=73.20%, 50=17.55% 00:41:09.818 lat (msec) : 100=1.58% 00:41:09.818 cpu : usr=2.28%, sys=2.75%, ctx=281, majf=0, minf=2 00:41:09.818 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:41:09.818 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:09.818 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:09.818 issued rwts: total=4005,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:09.818 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:09.818 job1: (groupid=0, jobs=1): err= 0: pid=3190717: Sun Nov 17 09:41:14 2024 00:41:09.818 read: IOPS=2527, BW=9.87MiB/s (10.4MB/s)(10.0MiB/1013msec) 00:41:09.818 slat (usec): min=2, max=26937, avg=188.81, stdev=1484.67 00:41:09.818 clat (usec): min=1688, max=65228, avg=25697.94, stdev=12014.10 00:41:09.818 lat (usec): min=1691, max=65241, avg=25886.75, stdev=12107.04 00:41:09.818 clat percentiles (usec): 00:41:09.818 | 1.00th=[ 7111], 5.00th=[ 7439], 10.00th=[ 9503], 20.00th=[13829], 00:41:09.818 | 30.00th=[17957], 40.00th=[21103], 50.00th=[24511], 60.00th=[28967], 00:41:09.818 | 70.00th=[32900], 80.00th=[36439], 90.00th=[42206], 95.00th=[46924], 00:41:09.818 | 99.00th=[51643], 99.50th=[52167], 99.90th=[58459], 99.95th=[60556], 00:41:09.818 | 99.99th=[65274] 00:41:09.818 write: IOPS=2699, BW=10.5MiB/s (11.1MB/s)(10.7MiB/1013msec); 0 zone resets 00:41:09.818 slat (usec): min=3, max=23600, avg=182.56, stdev=1445.82 00:41:09.818 clat (usec): min=1532, max=56243, avg=22610.72, stdev=8775.65 00:41:09.818 lat (usec): min=3360, max=56260, avg=22793.28, stdev=8906.10 
00:41:09.818 clat percentiles (usec): 00:41:09.818 | 1.00th=[ 6194], 5.00th=[ 8979], 10.00th=[12780], 20.00th=[13566], 00:41:09.818 | 30.00th=[16319], 40.00th=[20317], 50.00th=[21890], 60.00th=[26346], 00:41:09.818 | 70.00th=[28443], 80.00th=[30016], 90.00th=[35390], 95.00th=[38536], 00:41:09.818 | 99.00th=[39584], 99.50th=[39584], 99.90th=[49021], 99.95th=[51643], 00:41:09.818 | 99.99th=[56361] 00:41:09.818 bw ( KiB/s): min= 8568, max=12288, per=21.18%, avg=10428.00, stdev=2630.44, samples=2 00:41:09.818 iops : min= 2142, max= 3072, avg=2607.00, stdev=657.61, samples=2 00:41:09.818 lat (msec) : 2=0.23%, 4=0.15%, 10=8.12%, 20=29.16%, 50=61.59% 00:41:09.818 lat (msec) : 100=0.76% 00:41:09.818 cpu : usr=1.68%, sys=2.96%, ctx=140, majf=0, minf=1 00:41:09.818 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:41:09.818 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:09.818 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:09.818 issued rwts: total=2560,2735,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:09.818 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:09.818 job2: (groupid=0, jobs=1): err= 0: pid=3190718: Sun Nov 17 09:41:14 2024 00:41:09.818 read: IOPS=1641, BW=6567KiB/s (6724kB/s)(6652KiB/1013msec) 00:41:09.818 slat (usec): min=2, max=24199, avg=274.16, stdev=1676.01 00:41:09.818 clat (usec): min=2130, max=61186, avg=34128.62, stdev=7723.41 00:41:09.818 lat (usec): min=13989, max=61192, avg=34402.77, stdev=7770.75 00:41:09.818 clat percentiles (usec): 00:41:09.818 | 1.00th=[17433], 5.00th=[19530], 10.00th=[25822], 20.00th=[28967], 00:41:09.818 | 30.00th=[29230], 40.00th=[31851], 50.00th=[33817], 60.00th=[34341], 00:41:09.818 | 70.00th=[36963], 80.00th=[41157], 90.00th=[44303], 95.00th=[45351], 00:41:09.818 | 99.00th=[52167], 99.50th=[52691], 99.90th=[61080], 99.95th=[61080], 00:41:09.818 | 99.99th=[61080] 00:41:09.818 write: IOPS=2021, BW=8087KiB/s (8281kB/s)(8192KiB/1013msec); 0 zone resets 00:41:09.818 slat (usec): min=3, max=30593, avg=264.58, stdev=1842.16 00:41:09.818 clat (usec): min=16224, max=77771, avg=34591.40, stdev=13493.65 00:41:09.818 lat (usec): min=16234, max=77776, avg=34855.99, stdev=13650.81 00:41:09.818 clat percentiles (usec): 00:41:09.818 | 1.00th=[16319], 5.00th=[18482], 10.00th=[20841], 20.00th=[22414], 00:41:09.818 | 30.00th=[26084], 40.00th=[29230], 50.00th=[31065], 60.00th=[34866], 00:41:09.818 | 70.00th=[38011], 80.00th=[43779], 90.00th=[53216], 95.00th=[64226], 00:41:09.818 | 99.00th=[72877], 99.50th=[72877], 99.90th=[78119], 99.95th=[78119], 00:41:09.818 | 99.99th=[78119] 00:41:09.818 bw ( KiB/s): min= 8184, max= 8192, per=16.63%, avg=8188.00, stdev= 5.66, samples=2 00:41:09.818 iops : min= 2046, max= 2048, avg=2047.00, stdev= 1.41, samples=2 00:41:09.818 lat (msec) : 4=0.03%, 20=7.79%, 50=81.49%, 100=10.70% 00:41:09.818 cpu : usr=0.99%, sys=2.27%, ctx=115, majf=0, minf=1 00:41:09.818 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:41:09.818 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:09.818 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:09.818 issued rwts: total=1663,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:09.818 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:09.818 job3: (groupid=0, jobs=1): err= 0: pid=3190719: Sun Nov 17 09:41:14 2024 00:41:09.818 read: IOPS=3820, BW=14.9MiB/s (15.7MB/s)(15.0MiB/1005msec) 00:41:09.818 slat (usec): min=2, 
max=16220, avg=128.57, stdev=964.33 00:41:09.818 clat (usec): min=2793, max=32752, avg=16636.44, stdev=4814.02 00:41:09.818 lat (usec): min=6176, max=32759, avg=16765.02, stdev=4889.92 00:41:09.818 clat percentiles (usec): 00:41:09.818 | 1.00th=[ 9503], 5.00th=[10945], 10.00th=[11731], 20.00th=[12518], 00:41:09.818 | 30.00th=[13566], 40.00th=[14877], 50.00th=[15926], 60.00th=[16450], 00:41:09.818 | 70.00th=[17695], 80.00th=[19792], 90.00th=[24249], 95.00th=[26870], 00:41:09.818 | 99.00th=[30802], 99.50th=[31589], 99.90th=[32637], 99.95th=[32637], 00:41:09.818 | 99.99th=[32637] 00:41:09.818 write: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec); 0 zone resets 00:41:09.818 slat (usec): min=3, max=14311, avg=113.72, stdev=727.32 00:41:09.818 clat (usec): min=4112, max=32751, avg=15447.40, stdev=2854.38 00:41:09.818 lat (usec): min=4118, max=32761, avg=15561.12, stdev=2927.38 00:41:09.818 clat percentiles (usec): 00:41:09.818 | 1.00th=[ 7111], 5.00th=[ 8848], 10.00th=[11469], 20.00th=[14091], 00:41:09.818 | 30.00th=[15139], 40.00th=[15533], 50.00th=[16057], 60.00th=[16319], 00:41:09.818 | 70.00th=[16909], 80.00th=[17433], 90.00th=[17695], 95.00th=[17957], 00:41:09.818 | 99.00th=[18744], 99.50th=[27132], 99.90th=[32375], 99.95th=[32375], 00:41:09.818 | 99.99th=[32637] 00:41:09.818 bw ( KiB/s): min=16384, max=16416, per=33.31%, avg=16400.00, stdev=22.63, samples=2 00:41:09.818 iops : min= 4096, max= 4104, avg=4100.00, stdev= 5.66, samples=2 00:41:09.818 lat (msec) : 4=0.01%, 10=4.75%, 20=85.24%, 50=9.99% 00:41:09.818 cpu : usr=3.39%, sys=4.58%, ctx=419, majf=0, minf=1 00:41:09.818 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:41:09.818 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:09.818 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:09.818 issued rwts: total=3840,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:09.818 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:09.818 00:41:09.818 Run status group 0 (all jobs): 00:41:09.818 READ: bw=44.7MiB/s (46.9MB/s), 6567KiB/s-14.9MiB/s (6724kB/s-15.7MB/s), io=47.1MiB (49.4MB), run=1005-1054msec 00:41:09.818 WRITE: bw=48.1MiB/s (50.4MB/s), 8087KiB/s-15.9MiB/s (8281kB/s-16.7MB/s), io=50.7MiB (53.1MB), run=1005-1054msec 00:41:09.818 00:41:09.818 Disk stats (read/write): 00:41:09.818 nvme0n1: ios=3492/3584, merge=0/0, ticks=45285/45077, in_queue=90362, util=89.48% 00:41:09.818 nvme0n2: ios=2187/2560, merge=0/0, ticks=29484/31145, in_queue=60629, util=98.48% 00:41:09.818 nvme0n3: ios=1580/1543, merge=0/0, ticks=26675/27466, in_queue=54141, util=93.76% 00:41:09.818 nvme0n4: ios=3098/3534, merge=0/0, ticks=51186/54430, in_queue=105616, util=98.01% 00:41:09.818 09:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:41:09.818 09:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3190853 00:41:09.818 09:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:41:09.818 09:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:41:09.818 [global] 00:41:09.818 thread=1 00:41:09.818 invalidate=1 00:41:09.818 rw=read 00:41:09.818 time_based=1 00:41:09.818 runtime=10 00:41:09.818 ioengine=libaio 00:41:09.818 direct=1 00:41:09.818 bs=4096 00:41:09.818 iodepth=1 00:41:09.818 norandommap=1 
00:41:09.818 numjobs=1 00:41:09.818 00:41:09.818 [job0] 00:41:09.818 filename=/dev/nvme0n1 00:41:09.818 [job1] 00:41:09.818 filename=/dev/nvme0n2 00:41:09.818 [job2] 00:41:09.818 filename=/dev/nvme0n3 00:41:09.818 [job3] 00:41:09.818 filename=/dev/nvme0n4 00:41:09.818 Could not set queue depth (nvme0n1) 00:41:09.818 Could not set queue depth (nvme0n2) 00:41:09.818 Could not set queue depth (nvme0n3) 00:41:09.818 Could not set queue depth (nvme0n4) 00:41:10.076 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:10.076 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:10.076 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:10.076 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:10.076 fio-3.35 00:41:10.076 Starting 4 threads 00:41:13.358 09:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:41:13.358 09:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:41:13.358 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=294912, buflen=4096 00:41:13.358 fio: pid=3190950, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:41:13.358 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:41:13.358 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:41:13.358 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=6078464, buflen=4096 00:41:13.358 fio: pid=3190949, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:41:13.617 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=5361664, buflen=4096 00:41:13.617 fio: pid=3190947, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:41:13.617 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:41:13.617 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:41:14.184 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=39055360, buflen=4096 00:41:14.184 fio: pid=3190948, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:41:14.184 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:41:14.184 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:41:14.184 00:41:14.184 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3190947: Sun Nov 17 09:41:18 2024 00:41:14.184 read: IOPS=369, BW=1476KiB/s (1512kB/s)(5236KiB/3547msec) 
00:41:14.184 slat (usec): min=4, max=15650, avg=28.65, stdev=540.97 00:41:14.184 clat (usec): min=242, max=43993, avg=2661.04, stdev=9519.08 00:41:14.184 lat (usec): min=249, max=44013, avg=2689.67, stdev=9531.55 00:41:14.184 clat percentiles (usec): 00:41:14.184 | 1.00th=[ 255], 5.00th=[ 265], 10.00th=[ 269], 20.00th=[ 277], 00:41:14.184 | 30.00th=[ 281], 40.00th=[ 285], 50.00th=[ 293], 60.00th=[ 297], 00:41:14.184 | 70.00th=[ 302], 80.00th=[ 314], 90.00th=[ 388], 95.00th=[40633], 00:41:14.184 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[43779], 00:41:14.184 | 99.99th=[43779] 00:41:14.184 bw ( KiB/s): min= 104, max= 2056, per=3.66%, avg=465.33, stdev=780.67, samples=6 00:41:14.184 iops : min= 26, max= 514, avg=116.33, stdev=195.17, samples=6 00:41:14.184 lat (usec) : 250=0.15%, 500=92.82%, 750=0.99%, 1000=0.15% 00:41:14.184 lat (msec) : 50=5.80% 00:41:14.184 cpu : usr=0.14%, sys=0.51%, ctx=1313, majf=0, minf=2 00:41:14.184 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:14.184 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.184 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.184 issued rwts: total=1310,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:14.184 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:14.184 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=3190948: Sun Nov 17 09:41:18 2024 00:41:14.184 read: IOPS=2445, BW=9782KiB/s (10.0MB/s)(37.2MiB/3899msec) 00:41:14.184 slat (usec): min=5, max=25886, avg=13.64, stdev=336.08 00:41:14.184 clat (usec): min=201, max=41281, avg=392.97, stdev=2319.23 00:41:14.184 lat (usec): min=206, max=67011, avg=405.85, stdev=2423.42 00:41:14.184 clat percentiles (usec): 00:41:14.184 | 1.00th=[ 239], 5.00th=[ 241], 10.00th=[ 243], 20.00th=[ 247], 00:41:14.184 | 30.00th=[ 249], 40.00th=[ 251], 50.00th=[ 253], 60.00th=[ 258], 00:41:14.184 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 281], 95.00th=[ 289], 00:41:14.184 | 99.00th=[ 338], 99.50th=[ 420], 99.90th=[41157], 99.95th=[41157], 00:41:14.184 | 99.99th=[41157] 00:41:14.184 bw ( KiB/s): min= 1256, max=15376, per=84.56%, avg=10757.86, stdev=5834.76, samples=7 00:41:14.184 iops : min= 314, max= 3844, avg=2689.43, stdev=1458.74, samples=7 00:41:14.184 lat (usec) : 250=37.69%, 500=61.88%, 750=0.03%, 1000=0.02% 00:41:14.184 lat (msec) : 2=0.01%, 4=0.02%, 10=0.01%, 50=0.33% 00:41:14.184 cpu : usr=1.33%, sys=3.36%, ctx=9539, majf=0, minf=1 00:41:14.184 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:14.184 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.184 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.184 issued rwts: total=9536,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:14.184 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:14.184 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3190949: Sun Nov 17 09:41:18 2024 00:41:14.184 read: IOPS=457, BW=1828KiB/s (1872kB/s)(5936KiB/3247msec) 00:41:14.184 slat (nsec): min=4550, max=53136, avg=12546.55, stdev=6490.74 00:41:14.184 clat (usec): min=247, max=43018, avg=2157.01, stdev=8320.94 00:41:14.184 lat (usec): min=252, max=43040, avg=2169.56, stdev=8322.51 00:41:14.184 clat percentiles (usec): 00:41:14.184 | 1.00th=[ 289], 5.00th=[ 322], 10.00th=[ 330], 20.00th=[ 343], 00:41:14.184 | 30.00th=[ 355], 40.00th=[ 379], 
50.00th=[ 383], 60.00th=[ 388], 00:41:14.184 | 70.00th=[ 392], 80.00th=[ 404], 90.00th=[ 429], 95.00th=[ 570], 00:41:14.184 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[43254], 00:41:14.184 | 99.99th=[43254] 00:41:14.184 bw ( KiB/s): min= 96, max= 6936, per=9.76%, avg=1241.33, stdev=2789.82, samples=6 00:41:14.184 iops : min= 24, max= 1734, avg=310.33, stdev=697.46, samples=6 00:41:14.184 lat (usec) : 250=0.07%, 500=94.34%, 750=0.88%, 1000=0.13% 00:41:14.184 lat (msec) : 2=0.13%, 50=4.38% 00:41:14.184 cpu : usr=0.25%, sys=0.65%, ctx=1485, majf=0, minf=2 00:41:14.184 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:14.184 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.184 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.184 issued rwts: total=1485,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:14.184 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:14.184 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3190950: Sun Nov 17 09:41:18 2024 00:41:14.184 read: IOPS=24, BW=98.1KiB/s (100kB/s)(288KiB/2935msec) 00:41:14.184 slat (nsec): min=7413, max=34748, avg=19600.10, stdev=7454.05 00:41:14.184 clat (usec): min=402, max=41093, avg=40412.64, stdev=4781.71 00:41:14.184 lat (usec): min=427, max=41112, avg=40432.23, stdev=4781.09 00:41:14.184 clat percentiles (usec): 00:41:14.184 | 1.00th=[ 404], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:41:14.184 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:41:14.184 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:41:14.184 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:41:14.184 | 99.99th=[41157] 00:41:14.184 bw ( KiB/s): min= 96, max= 104, per=0.78%, avg=99.00, stdev= 4.12, samples=5 00:41:14.184 iops : min= 24, max= 26, avg=24.60, stdev= 0.89, samples=5 00:41:14.184 lat (usec) : 500=1.37% 00:41:14.184 lat (msec) : 50=97.26% 00:41:14.184 cpu : usr=0.07%, sys=0.00%, ctx=76, majf=0, minf=1 00:41:14.184 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:14.184 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.184 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.184 issued rwts: total=73,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:14.184 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:14.184 00:41:14.184 Run status group 0 (all jobs): 00:41:14.184 READ: bw=12.4MiB/s (13.0MB/s), 98.1KiB/s-9782KiB/s (100kB/s-10.0MB/s), io=48.4MiB (50.8MB), run=2935-3899msec 00:41:14.184 00:41:14.184 Disk stats (read/write): 00:41:14.184 nvme0n1: ios=1304/0, merge=0/0, ticks=3275/0, in_queue=3275, util=95.34% 00:41:14.184 nvme0n2: ios=9534/0, merge=0/0, ticks=3629/0, in_queue=3629, util=95.78% 00:41:14.184 nvme0n3: ios=1192/0, merge=0/0, ticks=3084/0, in_queue=3084, util=96.82% 00:41:14.184 nvme0n4: ios=107/0, merge=0/0, ticks=3262/0, in_queue=3262, util=99.63% 00:41:14.441 09:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:41:14.441 09:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:41:14.698 09:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:41:14.698 09:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:41:14.956 09:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:41:14.956 09:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:41:15.522 09:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:41:15.522 09:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:41:15.780 09:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:41:15.780 09:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 3190853 00:41:15.780 09:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:41:15.780 09:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:41:16.714 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:41:16.714 09:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:41:16.714 09:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:41:16.714 09:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:41:16.714 09:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:41:16.714 09:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:41:16.714 09:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:41:16.714 09:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:41:16.714 09:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:41:16.714 09:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:41:16.714 nvmf hotplug test: fio failed as expected 00:41:16.714 09:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:16.714 09:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:41:16.714 09:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:41:16.714 09:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f 
./local-job2-2-verify.state 00:41:16.714 09:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:41:16.714 09:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:41:16.714 09:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:16.714 09:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:41:16.714 09:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:16.714 09:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:41:16.714 09:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:16.714 09:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:16.714 rmmod nvme_tcp 00:41:16.972 rmmod nvme_fabrics 00:41:16.972 rmmod nvme_keyring 00:41:16.972 09:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:16.972 09:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:41:16.972 09:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:41:16.972 09:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 3188710 ']' 00:41:16.972 09:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 3188710 00:41:16.972 09:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 3188710 ']' 00:41:16.972 09:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 3188710 00:41:16.972 09:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:41:16.972 09:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:16.972 09:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3188710 00:41:16.972 09:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:16.972 09:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:16.972 09:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3188710' 00:41:16.972 killing process with pid 3188710 00:41:16.972 09:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 3188710 00:41:16.972 09:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 3188710 00:41:18.346 09:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:18.346 09:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:18.346 09:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:18.346 09:41:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:41:18.346 09:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:41:18.346 09:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:18.346 09:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:41:18.346 09:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:18.346 09:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:18.347 09:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:18.347 09:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:18.347 09:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:20.248 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:20.248 00:41:20.248 real 0m27.544s 00:41:20.248 user 1m14.772s 00:41:20.248 sys 0m10.346s 00:41:20.248 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:20.248 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:41:20.248 ************************************ 00:41:20.248 END TEST nvmf_fio_target 00:41:20.248 ************************************ 00:41:20.248 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:41:20.248 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:41:20.248 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:20.248 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:20.248 ************************************ 00:41:20.248 START TEST nvmf_bdevio 00:41:20.248 ************************************ 00:41:20.248 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:41:20.248 * Looking for test storage... 
00:41:20.248 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:20.248 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:41:20.248 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:41:20.248 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:41:20.537 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:41:20.537 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:20.537 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:20.537 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:20.537 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:41:20.537 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:41:20.537 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:41:20.537 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:41:20.537 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:41:20.537 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:41:20.537 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:41:20.537 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:20.537 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:41:20.537 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:41:20.537 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:20.537 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:20.537 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:41:20.537 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:41:20.537 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:20.537 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:41:20.537 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:41:20.537 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:41:20.537 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:41:20.537 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:20.537 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:41:20.537 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:41:20.537 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:20.537 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:20.537 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:41:20.537 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:20.537 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:41:20.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:20.537 --rc genhtml_branch_coverage=1 00:41:20.537 --rc genhtml_function_coverage=1 00:41:20.537 --rc genhtml_legend=1 00:41:20.537 --rc geninfo_all_blocks=1 00:41:20.537 --rc geninfo_unexecuted_blocks=1 00:41:20.537 00:41:20.537 ' 00:41:20.537 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:41:20.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:20.537 --rc genhtml_branch_coverage=1 00:41:20.537 --rc genhtml_function_coverage=1 00:41:20.537 --rc genhtml_legend=1 00:41:20.537 --rc geninfo_all_blocks=1 00:41:20.537 --rc geninfo_unexecuted_blocks=1 00:41:20.537 00:41:20.537 ' 00:41:20.537 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:41:20.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:20.537 --rc genhtml_branch_coverage=1 00:41:20.537 --rc genhtml_function_coverage=1 00:41:20.537 --rc genhtml_legend=1 00:41:20.537 --rc geninfo_all_blocks=1 00:41:20.537 --rc geninfo_unexecuted_blocks=1 00:41:20.537 00:41:20.537 ' 00:41:20.537 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:41:20.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:20.537 --rc genhtml_branch_coverage=1 00:41:20.537 --rc genhtml_function_coverage=1 00:41:20.537 --rc genhtml_legend=1 00:41:20.537 --rc geninfo_all_blocks=1 00:41:20.537 --rc geninfo_unexecuted_blocks=1 00:41:20.537 00:41:20.537 ' 00:41:20.537 09:41:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:20.537 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:41:20.537 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:20.537 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:20.537 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:20.537 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:20.537 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:20.537 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:20.537 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:20.537 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:20.537 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:20.537 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:20.537 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:41:20.537 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:41:20.538 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:20.538 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:20.538 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:20.538 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:20.538 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:20.538 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:41:20.538 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:20.538 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:20.538 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:20.538 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:20.538 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:20.538 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:20.538 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:41:20.538 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:20.538 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:41:20.538 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:20.538 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:20.538 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:20.538 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:20.538 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:20.538 09:41:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:20.538 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:20.538 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:20.538 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:20.538 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:20.538 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:41:20.538 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:41:20.538 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:41:20.538 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:20.538 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:20.538 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:20.538 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:20.538 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:20.538 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:20.538 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:20.538 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:20.538 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:20.538 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:20.538 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:41:20.538 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:22.484 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:22.484 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:41:22.484 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:22.484 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:22.484 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:22.484 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:22.484 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:22.484 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:41:22.485 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:41:22.485 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:41:22.485 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:41:22.485 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:41:22.485 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:41:22.485 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:41:22.485 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:41:22.485 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:22.485 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:22.485 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:22.485 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:22.485 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:22.485 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:22.485 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:22.485 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:22.485 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:22.485 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:22.485 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:22.485 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:22.485 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:22.485 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:22.485 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:22.485 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:22.485 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:22.485 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:22.485 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:22.485 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:41:22.485 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:41:22.485 09:41:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:22.485 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:22.485 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:22.485 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:22.485 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:22.485 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:22.485 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:41:22.485 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:41:22.485 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:22.485 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:22.485 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:22.485 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:22.485 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:22.485 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:22.485 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:22.485 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:22.485 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:22.485 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:22.485 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:22.485 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:22.485 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:22.485 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:22.485 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:22.485 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:41:22.485 Found net devices under 0000:0a:00.0: cvl_0_0 00:41:22.485 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:22.485 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:22.485 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:22.485 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:41:22.485 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:22.485 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:22.485 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:22.485 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:22.485 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:41:22.485 Found net devices under 0000:0a:00.1: cvl_0_1 00:41:22.485 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:22.485 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:22.485 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:41:22.485 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:22.485 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:22.485 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:22.485 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:22.485 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:22.485 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:22.485 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:22.485 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:22.485 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:22.485 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:22.485 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:22.485 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:22.485 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:22.485 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:22.485 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:22.485 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:22.485 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:22.485 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:22.485 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:22.485 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:22.485 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:22.485 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:22.485 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:22.485 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:22.485 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:22.485 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:22.485 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:22.485 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.268 ms 00:41:22.485 00:41:22.485 --- 10.0.0.2 ping statistics --- 00:41:22.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:22.485 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:41:22.485 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:22.485 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:41:22.485 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:41:22.485 00:41:22.485 --- 10.0.0.1 ping statistics --- 00:41:22.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:22.485 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:41:22.485 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:22.486 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:41:22.486 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:41:22.486 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:22.486 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:22.486 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:22.486 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:22.486 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:22.486 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:22.486 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:41:22.486 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:22.486 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:22.486 09:41:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:22.486 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=3193829 00:41:22.486 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 3193829 00:41:22.486 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:41:22.486 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 3193829 ']' 00:41:22.486 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:22.486 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:22.486 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:22.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:22.486 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:22.486 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:22.744 [2024-11-17 09:41:27.540959] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:22.744 [2024-11-17 09:41:27.543528] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:41:22.744 [2024-11-17 09:41:27.543621] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:22.744 [2024-11-17 09:41:27.697184] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:23.003 [2024-11-17 09:41:27.842913] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:23.003 [2024-11-17 09:41:27.842992] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:23.003 [2024-11-17 09:41:27.843022] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:23.003 [2024-11-17 09:41:27.843044] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:23.003 [2024-11-17 09:41:27.843066] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:23.003 [2024-11-17 09:41:27.846080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:41:23.003 [2024-11-17 09:41:27.846142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:41:23.003 [2024-11-17 09:41:27.846193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:41:23.003 [2024-11-17 09:41:27.846220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:41:23.262 [2024-11-17 09:41:28.213931] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:41:23.262 [2024-11-17 09:41:28.224685] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:41:23.262 [2024-11-17 09:41:28.224905] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:41:23.262 [2024-11-17 09:41:28.225717] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:23.262 [2024-11-17 09:41:28.226078] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:41:23.527 09:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:23.527 09:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:41:23.527 09:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:23.527 09:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:23.527 09:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:23.527 09:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:23.527 09:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:41:23.527 09:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:23.527 09:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:23.789 [2024-11-17 09:41:28.535308] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:23.789 09:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:23.789 09:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:41:23.789 09:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:23.789 09:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:23.789 Malloc0 00:41:23.789 09:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:23.789 09:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:41:23.789 09:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:23.789 09:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:23.789 09:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:23.789 09:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:41:23.789 09:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:23.789 09:41:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:23.789 09:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:23.789 09:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:23.789 09:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:23.789 09:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:23.789 [2024-11-17 09:41:28.659549] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:23.789 09:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:23.789 09:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:41:23.789 09:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:41:23.789 09:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:41:23.789 09:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:41:23.789 09:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:23.789 09:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:23.789 { 00:41:23.789 "params": { 00:41:23.789 "name": "Nvme$subsystem", 00:41:23.789 "trtype": "$TEST_TRANSPORT", 00:41:23.789 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:23.789 "adrfam": "ipv4", 00:41:23.789 "trsvcid": "$NVMF_PORT", 00:41:23.789 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:23.789 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:23.789 "hdgst": ${hdgst:-false}, 00:41:23.789 "ddgst": ${ddgst:-false} 00:41:23.789 }, 00:41:23.789 "method": "bdev_nvme_attach_controller" 00:41:23.789 } 00:41:23.789 EOF 00:41:23.789 )") 00:41:23.789 09:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:41:23.789 09:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:41:23.789 09:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:41:23.789 09:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:23.789 "params": { 00:41:23.789 "name": "Nvme1", 00:41:23.789 "trtype": "tcp", 00:41:23.789 "traddr": "10.0.0.2", 00:41:23.789 "adrfam": "ipv4", 00:41:23.789 "trsvcid": "4420", 00:41:23.789 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:23.789 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:23.789 "hdgst": false, 00:41:23.789 "ddgst": false 00:41:23.789 }, 00:41:23.789 "method": "bdev_nvme_attach_controller" 00:41:23.789 }' 00:41:23.789 [2024-11-17 09:41:28.743351] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:41:23.789 [2024-11-17 09:41:28.743506] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3193985 ] 00:41:24.047 [2024-11-17 09:41:28.880392] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:41:24.047 [2024-11-17 09:41:29.013154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:24.047 [2024-11-17 09:41:29.013203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:24.047 [2024-11-17 09:41:29.013207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:41:24.613 I/O targets: 00:41:24.613 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:41:24.613 00:41:24.613 00:41:24.613 CUnit - A unit testing framework for C - Version 2.1-3 00:41:24.613 http://cunit.sourceforge.net/ 00:41:24.613 00:41:24.613 00:41:24.613 Suite: bdevio tests on: Nvme1n1 00:41:24.613 Test: blockdev write read block ...passed 00:41:24.613 Test: blockdev write zeroes read block ...passed 00:41:24.613 Test: blockdev write zeroes read no split ...passed 00:41:24.613 Test: blockdev write zeroes read split ...passed 00:41:24.613 Test: blockdev write zeroes read split partial ...passed 00:41:24.613 Test: blockdev reset ...[2024-11-17 09:41:29.592933] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:41:24.613 [2024-11-17 09:41:29.593126] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2f00 (9): Bad file descriptor 00:41:24.613 [2024-11-17 09:41:29.601428] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:41:24.613 passed 00:41:24.871 Test: blockdev write read 8 blocks ...passed 00:41:24.871 Test: blockdev write read size > 128k ...passed 00:41:24.871 Test: blockdev write read invalid size ...passed 00:41:24.871 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:41:24.871 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:41:24.871 Test: blockdev write read max offset ...passed 00:41:24.871 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:41:24.871 Test: blockdev writev readv 8 blocks ...passed 00:41:24.871 Test: blockdev writev readv 30 x 1block ...passed 00:41:24.871 Test: blockdev writev readv block ...passed 00:41:24.871 Test: blockdev writev readv size > 128k ...passed 00:41:24.871 Test: blockdev writev readv size > 128k in two iovs ...passed 00:41:24.871 Test: blockdev comparev and writev ...[2024-11-17 09:41:29.855503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:24.871 [2024-11-17 09:41:29.855556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:24.871 [2024-11-17 09:41:29.855596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:24.871 [2024-11-17 09:41:29.855624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:41:24.871 [2024-11-17 09:41:29.856160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:24.871 [2024-11-17 09:41:29.856195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:41:24.871 [2024-11-17 09:41:29.856229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:24.871 [2024-11-17 09:41:29.856255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:41:24.871 [2024-11-17 09:41:29.856809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:24.871 [2024-11-17 09:41:29.856852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:41:24.871 [2024-11-17 09:41:29.856886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:24.871 [2024-11-17 09:41:29.856912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:41:24.871 [2024-11-17 09:41:29.857471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:24.871 [2024-11-17 09:41:29.857505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:41:24.871 [2024-11-17 09:41:29.857539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:24.871 [2024-11-17 09:41:29.857564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:41:25.129 passed 00:41:25.129 Test: blockdev nvme passthru rw ...passed 00:41:25.129 Test: blockdev nvme passthru vendor specific ...[2024-11-17 09:41:29.939734] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:41:25.129 [2024-11-17 09:41:29.939775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:41:25.129 [2024-11-17 09:41:29.939990] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:41:25.129 [2024-11-17 09:41:29.940024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:41:25.129 [2024-11-17 09:41:29.940238] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:41:25.129 [2024-11-17 09:41:29.940270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:41:25.129 [2024-11-17 09:41:29.940489] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:41:25.129 [2024-11-17 09:41:29.940521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:41:25.129 passed 00:41:25.129 Test: blockdev nvme admin passthru ...passed 00:41:25.129 Test: blockdev copy ...passed 00:41:25.129 00:41:25.129 Run Summary: Type Total Ran Passed Failed Inactive 00:41:25.129 suites 1 1 n/a 0 0 00:41:25.129 tests 23 23 23 0 0 00:41:25.129 asserts 152 152 152 0 n/a 00:41:25.129 00:41:25.129 Elapsed time = 1.162 seconds 00:41:26.064 09:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:26.064 09:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:26.064 09:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:26.064 09:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:26.064 09:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:41:26.064 09:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:41:26.064 09:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:26.064 09:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:41:26.064 09:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:26.064 09:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:41:26.064 09:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:26.064 09:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:26.064 rmmod nvme_tcp 00:41:26.064 rmmod nvme_fabrics 00:41:26.064 rmmod nvme_keyring 00:41:26.064 09:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:41:26.064 09:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:41:26.064 09:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:41:26.064 09:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 3193829 ']' 00:41:26.064 09:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 3193829 00:41:26.064 09:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 3193829 ']' 00:41:26.064 09:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 3193829 00:41:26.064 09:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:41:26.064 09:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:26.064 09:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3193829 00:41:26.064 09:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:41:26.064 09:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:41:26.064 09:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3193829' 00:41:26.064 killing process with pid 3193829 00:41:26.064 09:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 3193829 00:41:26.064 09:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 3193829 00:41:27.438 09:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:27.438 09:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:27.438 09:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:27.438 09:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:41:27.438 09:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:41:27.438 09:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:27.438 09:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:41:27.438 09:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:27.438 09:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:27.438 09:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:27.438 09:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:27.438 09:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:29.344 09:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:29.344 00:41:29.344 real 0m9.206s 00:41:29.344 user 
0m15.822s 00:41:29.344 sys 0m3.099s 00:41:29.344 09:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:29.344 09:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:29.344 ************************************ 00:41:29.344 END TEST nvmf_bdevio 00:41:29.344 ************************************ 00:41:29.603 09:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:41:29.603 00:41:29.603 real 4m29.395s 00:41:29.603 user 9m51.290s 00:41:29.603 sys 1m28.529s 00:41:29.603 09:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:29.603 09:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:29.603 ************************************ 00:41:29.603 END TEST nvmf_target_core_interrupt_mode 00:41:29.603 ************************************ 00:41:29.603 09:41:34 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:41:29.603 09:41:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:41:29.603 09:41:34 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:29.603 09:41:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:29.603 ************************************ 00:41:29.603 START TEST nvmf_interrupt 00:41:29.603 ************************************ 00:41:29.603 09:41:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:41:29.603 * Looking for test storage... 
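The interrupt-mode suite that starts here is a single self-contained script; outside Jenkins it can be reproduced with essentially the same invocation that run_test wraps above (workspace path as used in this job — a sketch, not a supported entry point):

cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# run_test only adds timing and xtrace bookkeeping around the call below
test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode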
00:41:29.603 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:29.603 09:41:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:41:29.603 09:41:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lcov --version 00:41:29.603 09:41:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:41:29.603 09:41:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:41:29.603 09:41:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:29.603 09:41:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:29.603 09:41:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:29.603 09:41:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:41:29.603 09:41:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:41:29.603 09:41:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:41:29.603 09:41:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:41:29.603 09:41:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:41:29.603 09:41:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:41:29.603 09:41:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:41:29.603 09:41:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:29.603 09:41:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:41:29.603 09:41:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:41:29.603 09:41:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:29.603 09:41:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:29.603 09:41:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:41:29.603 09:41:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:41:29.603 09:41:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:29.603 09:41:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:41:29.603 09:41:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:41:29.603 09:41:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:41:29.603 09:41:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:41:29.603 09:41:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:29.603 09:41:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:41:29.603 09:41:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:41:29.603 09:41:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:29.603 09:41:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:29.603 09:41:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:41:29.603 09:41:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:29.603 09:41:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:41:29.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:29.603 --rc genhtml_branch_coverage=1 00:41:29.603 --rc genhtml_function_coverage=1 00:41:29.603 --rc genhtml_legend=1 00:41:29.603 --rc geninfo_all_blocks=1 00:41:29.603 --rc geninfo_unexecuted_blocks=1 00:41:29.603 00:41:29.603 ' 00:41:29.603 09:41:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:41:29.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:29.603 --rc genhtml_branch_coverage=1 00:41:29.603 --rc genhtml_function_coverage=1 00:41:29.603 --rc genhtml_legend=1 00:41:29.603 --rc geninfo_all_blocks=1 00:41:29.603 --rc geninfo_unexecuted_blocks=1 00:41:29.603 00:41:29.603 ' 00:41:29.603 09:41:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:41:29.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:29.603 --rc genhtml_branch_coverage=1 00:41:29.603 --rc genhtml_function_coverage=1 00:41:29.603 --rc genhtml_legend=1 00:41:29.603 --rc geninfo_all_blocks=1 00:41:29.603 --rc geninfo_unexecuted_blocks=1 00:41:29.603 00:41:29.603 ' 00:41:29.603 09:41:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:41:29.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:29.603 --rc genhtml_branch_coverage=1 00:41:29.603 --rc genhtml_function_coverage=1 00:41:29.603 --rc genhtml_legend=1 00:41:29.603 --rc geninfo_all_blocks=1 00:41:29.603 --rc geninfo_unexecuted_blocks=1 00:41:29.603 00:41:29.603 ' 00:41:29.603 09:41:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:29.603 09:41:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:41:29.603 09:41:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:29.603 09:41:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:29.603 09:41:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:29.603 09:41:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:41:29.603 09:41:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:29.603 09:41:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:29.603 09:41:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:29.603 09:41:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:29.603 09:41:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:29.604 09:41:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:29.604 09:41:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:41:29.604 09:41:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:41:29.604 09:41:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:29.604 09:41:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:29.604 09:41:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:29.604 09:41:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:29.604 09:41:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:29.604 09:41:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:41:29.604 09:41:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:29.604 09:41:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:29.604 09:41:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:29.604 09:41:34 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:29.604 09:41:34 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:29.604 09:41:34 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:29.604 09:41:34 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:41:29.604 09:41:34 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:29.604 09:41:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:41:29.604 09:41:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:29.604 09:41:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:29.604 09:41:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:29.604 09:41:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:29.604 09:41:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:29.604 09:41:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:29.604 09:41:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:29.604 09:41:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:29.604 09:41:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:29.604 09:41:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:29.604 09:41:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:41:29.604 09:41:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:41:29.604 09:41:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:41:29.604 09:41:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:29.604 09:41:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:29.604 09:41:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:29.604 09:41:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:29.604 09:41:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:29.604 09:41:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:29.604 09:41:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:41:29.604 09:41:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:29.604 09:41:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:29.604 09:41:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:29.604 09:41:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:41:29.604 09:41:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:32.134 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:32.135 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:41:32.135 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:32.135 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:32.135 09:41:36 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:32.135 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:32.135 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:32.135 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:41:32.135 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:32.135 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:41:32.135 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:41:32.135 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:41:32.135 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:41:32.135 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:41:32.135 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:41:32.135 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:32.135 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:32.135 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:32.135 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:32.135 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:32.135 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:32.135 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:32.135 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:32.135 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:32.135 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:32.135 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:32.135 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:32.135 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:32.135 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:32.135 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:32.135 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:32.135 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:32.135 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:32.135 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:32.135 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:41:32.135 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:41:32.135 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:32.135 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:32.135 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:32.135 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:32.135 09:41:36 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:32.135 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:32.135 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:41:32.135 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:41:32.135 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:32.135 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:32.135 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:32.135 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:32.135 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:32.135 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:32.135 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:32.135 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:32.135 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:32.135 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:32.135 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:32.135 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:32.135 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:32.135 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:32.135 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:32.135 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:41:32.135 Found net devices under 0000:0a:00.0: cvl_0_0 00:41:32.135 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:32.135 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:32.135 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:32.135 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:32.135 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:32.135 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:32.135 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:32.135 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:32.135 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:41:32.135 Found net devices under 0000:0a:00.1: cvl_0_1 00:41:32.135 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:32.135 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:32.135 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:41:32.135 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:32.135 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:32.135 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:32.135 09:41:36 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:32.135 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:32.135 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:32.135 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:32.135 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:32.135 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:32.135 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:32.135 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:32.135 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:32.135 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:32.135 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:32.135 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:32.135 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:32.135 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:32.135 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:32.135 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:32.135 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:32.135 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:32.135 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:32.135 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:32.135 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:32.135 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:32.135 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:32.135 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:32.135 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:41:32.135 00:41:32.135 --- 10.0.0.2 ping statistics --- 00:41:32.135 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:32.135 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:41:32.135 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:32.135 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:32.135 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:41:32.135 00:41:32.135 --- 10.0.0.1 ping statistics --- 00:41:32.135 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:32.135 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:41:32.135 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:32.135 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:41:32.136 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:41:32.136 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:32.136 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:32.136 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:32.136 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:32.136 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:32.136 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:32.136 09:41:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:41:32.136 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:32.136 09:41:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:32.136 09:41:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:32.136 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=3196331 00:41:32.136 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 3196331 00:41:32.136 09:41:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:41:32.136 09:41:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 3196331 ']' 00:41:32.136 09:41:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:32.136 09:41:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:32.136 09:41:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:32.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:32.136 09:41:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:32.136 09:41:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:32.136 [2024-11-17 09:41:36.793220] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:32.136 [2024-11-17 09:41:36.796017] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:41:32.136 [2024-11-17 09:41:36.796132] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:32.136 [2024-11-17 09:41:36.942971] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:41:32.136 [2024-11-17 09:41:37.076846] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
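The target process launched just above (nvmf/common.sh@508) runs inside the cvl_0_0_ns_spdk namespace with interrupt mode enabled on two cores. Annotated for readability, the command is roughly:

# --interrupt-mode : reactors sleep on events instead of busy-polling
# -m 0x3           : core mask, one reactor each on cores 0 and 1 (matches the two
#                    'Reactor started on core' notices above)
# -i 0             : shared-memory id for this app instance
# -e 0xFFFF        : tracepoint group mask (the 'Tracepoint Group Mask 0xFFFF' notice)
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0x3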
00:41:32.136 [2024-11-17 09:41:37.076910] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:32.136 [2024-11-17 09:41:37.076949] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:32.136 [2024-11-17 09:41:37.076967] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:32.136 [2024-11-17 09:41:37.076992] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:32.136 [2024-11-17 09:41:37.079274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:32.136 [2024-11-17 09:41:37.079278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:32.703 [2024-11-17 09:41:37.426885] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:41:32.703 [2024-11-17 09:41:37.427620] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:32.703 [2024-11-17 09:41:37.427957] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:41:32.963 09:41:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:32.963 09:41:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:41:32.963 09:41:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:32.963 09:41:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:32.963 09:41:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:32.963 09:41:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:32.963 09:41:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:41:32.963 09:41:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:41:32.963 09:41:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:41:32.963 09:41:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:41:32.963 5000+0 records in 00:41:32.963 5000+0 records out 00:41:32.963 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0141635 s, 723 MB/s 00:41:32.963 09:41:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:41:32.963 09:41:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:32.963 09:41:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:32.963 AIO0 00:41:32.963 09:41:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:32.963 09:41:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:41:32.963 09:41:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:32.963 09:41:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:32.963 [2024-11-17 09:41:37.844366] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:32.963 09:41:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:32.963 09:41:37 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:41:32.963 09:41:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:32.963 09:41:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:32.963 09:41:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:32.963 09:41:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:41:32.963 09:41:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:32.963 09:41:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:32.963 09:41:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:32.963 09:41:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:32.963 09:41:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:32.963 09:41:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:32.963 [2024-11-17 09:41:37.872641] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:32.963 09:41:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:32.963 09:41:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:41:32.963 09:41:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3196331 0 00:41:32.963 09:41:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3196331 0 idle 00:41:32.963 09:41:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3196331 00:41:32.963 09:41:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:41:32.963 09:41:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:41:32.963 09:41:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:41:32.963 09:41:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:32.963 09:41:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:41:32.963 09:41:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:41:32.963 09:41:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:32.963 09:41:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:32.963 09:41:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:32.963 09:41:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3196331 -w 256 00:41:32.963 09:41:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:41:33.222 09:41:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3196331 root 20 0 20.1t 196608 101376 S 0.0 0.3 0:00.73 reactor_0' 00:41:33.222 09:41:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3196331 root 20 0 20.1t 196608 101376 S 0.0 0.3 0:00.73 reactor_0 00:41:33.222 09:41:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:33.222 09:41:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:33.222 09:41:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:41:33.222 09:41:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- 
# cpu_rate=0 00:41:33.222 09:41:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:41:33.222 09:41:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:41:33.222 09:41:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:41:33.222 09:41:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:33.222 09:41:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:41:33.222 09:41:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3196331 1 00:41:33.222 09:41:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3196331 1 idle 00:41:33.222 09:41:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3196331 00:41:33.222 09:41:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:41:33.222 09:41:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:41:33.222 09:41:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:41:33.222 09:41:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:33.222 09:41:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:41:33.222 09:41:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:41:33.222 09:41:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:33.222 09:41:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:33.222 09:41:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:33.222 09:41:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3196331 -w 256 00:41:33.222 09:41:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:41:33.222 09:41:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3196335 root 20 0 20.1t 196608 101376 S 0.0 0.3 0:00.00 reactor_1' 00:41:33.222 09:41:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3196335 root 20 0 20.1t 196608 101376 S 0.0 0.3 0:00.00 reactor_1 00:41:33.222 09:41:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:33.222 09:41:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:33.222 09:41:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:41:33.222 09:41:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:41:33.222 09:41:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:41:33.222 09:41:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:41:33.222 09:41:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:41:33.222 09:41:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:33.222 09:41:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:41:33.222 09:41:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=3196508 00:41:33.222 09:41:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:41:33.222 09:41:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in 
{0..1} 00:41:33.222 09:41:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:41:33.222 09:41:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3196331 0 00:41:33.222 09:41:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3196331 0 busy 00:41:33.222 09:41:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3196331 00:41:33.222 09:41:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:41:33.222 09:41:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:41:33.222 09:41:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:41:33.222 09:41:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:33.222 09:41:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:41:33.222 09:41:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:33.222 09:41:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:33.222 09:41:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:33.222 09:41:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3196331 -w 256 00:41:33.222 09:41:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:41:33.480 09:41:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3196331 root 20 0 20.1t 199680 102144 R 13.3 0.3 0:00.75 reactor_0' 00:41:33.480 09:41:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3196331 root 20 0 20.1t 199680 102144 R 13.3 0.3 0:00.75 reactor_0 00:41:33.480 09:41:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:33.480 09:41:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:33.480 09:41:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=13.3 00:41:33.480 09:41:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=13 00:41:33.480 09:41:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:41:33.480 09:41:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:41:33.480 09:41:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:41:34.413 09:41:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:41:34.413 09:41:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:34.413 09:41:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3196331 -w 256 00:41:34.413 09:41:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:41:34.671 09:41:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3196331 root 20 0 20.1t 210432 102144 R 99.9 0.3 0:03.02 reactor_0' 00:41:34.671 09:41:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3196331 root 20 0 20.1t 210432 102144 R 99.9 0.3 0:03.02 reactor_0 00:41:34.671 09:41:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:34.671 09:41:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:34.671 09:41:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:41:34.671 09:41:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:41:34.671 09:41:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:41:34.671 09:41:39 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:41:34.671 09:41:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:41:34.671 09:41:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:34.671 09:41:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:41:34.671 09:41:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:41:34.671 09:41:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3196331 1 00:41:34.671 09:41:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3196331 1 busy 00:41:34.671 09:41:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3196331 00:41:34.671 09:41:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:41:34.671 09:41:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:41:34.671 09:41:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:41:34.671 09:41:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:34.671 09:41:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:41:34.671 09:41:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:34.672 09:41:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:34.672 09:41:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:34.672 09:41:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3196331 -w 256 00:41:34.672 09:41:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:41:34.930 09:41:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3196335 root 20 0 20.1t 210432 102144 R 93.3 0.3 0:01.27 reactor_1' 00:41:34.930 09:41:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3196335 root 20 0 20.1t 210432 102144 R 93.3 0.3 0:01.27 reactor_1 00:41:34.930 09:41:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:34.930 09:41:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:34.930 09:41:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=93.3 00:41:34.930 09:41:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=93 00:41:34.930 09:41:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:41:34.930 09:41:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:41:34.930 09:41:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:41:34.930 09:41:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:34.930 09:41:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 3196508 00:41:44.894 Initializing NVMe Controllers 00:41:44.894 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:41:44.894 Controller IO queue size 256, less than required. 00:41:44.894 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:41:44.894 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:41:44.894 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:41:44.894 Initialization complete. Launching workers. 
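The load generator kicked off at target/interrupt.sh@31 above is spdk_nvme_perf aimed at the subsystem configured earlier in this test (tcp transport with a 256-entry queue, the AIO0 bdev as namespace 1, listener on 10.0.0.2:4420). The option meanings in the comments are my reading of spdk_nvme_perf's flags, added as annotation rather than authoritative documentation:

# -q 256          : queue depth per worker
# -o 4096         : 4 KiB I/O size
# -w randrw -M 30 : random mixed workload, ~30% reads
# -t 10           : run for 10 seconds
# -c 0xC          : workers on cores 2 and 3, leaving the target's cores 0-1 to the reactors
#                   (the perf output above indeed attaches NSID 1 to lcores 2 and 3)
./build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'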
00:41:44.894 ======================================================== 00:41:44.894 Latency(us) 00:41:44.894 Device Information : IOPS MiB/s Average min max 00:41:44.894 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 10304.87 40.25 24864.87 6361.26 30478.68 00:41:44.894 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 10847.37 42.37 23618.83 6524.29 28866.61 00:41:44.894 ======================================================== 00:41:44.894 Total : 21152.23 82.63 24225.87 6361.26 30478.68 00:41:44.894 00:41:44.894 09:41:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:41:44.894 09:41:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3196331 0 00:41:44.894 09:41:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3196331 0 idle 00:41:44.894 09:41:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3196331 00:41:44.894 09:41:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:41:44.894 09:41:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:41:44.894 09:41:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:41:44.894 09:41:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:44.894 09:41:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:41:44.894 09:41:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:41:44.894 09:41:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:44.894 09:41:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:44.894 09:41:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:44.894 09:41:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3196331 -w 256 00:41:44.894 09:41:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:41:44.894 09:41:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3196331 root 20 0 20.1t 210432 102144 S 0.0 0.3 0:20.24 reactor_0' 00:41:44.894 09:41:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3196331 root 20 0 20.1t 210432 102144 S 0.0 0.3 0:20.24 reactor_0 00:41:44.894 09:41:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:44.894 09:41:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:44.894 09:41:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:41:44.894 09:41:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:41:44.894 09:41:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:41:44.894 09:41:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:41:44.894 09:41:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:41:44.894 09:41:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:44.894 09:41:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:41:44.894 09:41:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3196331 1 00:41:44.894 09:41:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3196331 1 idle 00:41:44.894 09:41:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3196331 00:41:44.894 09:41:48 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@11 -- # local idx=1 00:41:44.894 09:41:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:41:44.894 09:41:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:41:44.894 09:41:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:44.894 09:41:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:41:44.894 09:41:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:41:44.894 09:41:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:44.894 09:41:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:44.894 09:41:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:44.894 09:41:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3196331 -w 256 00:41:44.894 09:41:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:41:44.894 09:41:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3196335 root 20 0 20.1t 210432 102144 S 0.0 0.3 0:09.53 reactor_1' 00:41:44.894 09:41:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3196335 root 20 0 20.1t 210432 102144 S 0.0 0.3 0:09.53 reactor_1 00:41:44.894 09:41:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:44.894 09:41:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:44.894 09:41:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:41:44.894 09:41:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:41:44.894 09:41:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:41:44.894 09:41:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:41:44.894 09:41:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:41:44.894 09:41:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:44.895 09:41:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:41:44.895 09:41:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:41:44.895 09:41:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:41:44.895 09:41:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:41:44.895 09:41:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:41:44.895 09:41:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:41:46.271 09:41:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:41:46.271 09:41:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:41:46.271 09:41:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:41:46.271 09:41:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:41:46.271 09:41:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:41:46.271 09:41:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:41:46.271 09:41:51 nvmf_tcp.nvmf_interrupt 
-- target/interrupt.sh@52 -- # for i in {0..1} 00:41:46.271 09:41:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3196331 0 00:41:46.271 09:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3196331 0 idle 00:41:46.271 09:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3196331 00:41:46.271 09:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:41:46.271 09:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:41:46.271 09:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:41:46.271 09:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:46.271 09:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:41:46.271 09:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:41:46.271 09:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:46.271 09:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:46.271 09:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:46.271 09:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3196331 -w 256 00:41:46.271 09:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:41:46.271 09:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3196331 root 20 0 20.1t 238080 111744 S 0.0 0.4 0:20.41 reactor_0' 00:41:46.271 09:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3196331 root 20 0 20.1t 238080 111744 S 0.0 0.4 0:20.41 reactor_0 00:41:46.271 09:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:46.271 09:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:46.271 09:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:41:46.271 09:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:41:46.271 09:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:41:46.271 09:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:41:46.271 09:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:41:46.272 09:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:46.272 09:41:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:41:46.272 09:41:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3196331 1 00:41:46.272 09:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3196331 1 idle 00:41:46.272 09:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3196331 00:41:46.272 09:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:41:46.272 09:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:41:46.272 09:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:41:46.272 09:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:46.272 09:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:41:46.272 09:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:41:46.272 09:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 
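Every reactor_is_idle / reactor_is_busy assertion in this test is the same probe repeated: take one batch iteration of top for the target pid, isolate the reactor_N thread, and compare its %CPU field against the thresholds set in interrupt/common.sh (idle must stay at or below 30%, busy must reach at least 30% with BUSY_THRESHOLD lowered to 30 for these checks). A condensed sketch of that logic, reconstructed from the xtrace above:

# usage: reactor_cpu <nvmf_tgt pid> <reactor index>
reactor_cpu() {
    local pid=$1 idx=$2
    # one batch iteration of top with threads (-H) and wide output; column 9 is %CPU
    top -bHn 1 -p "$pid" -w 256 | grep "reactor_${idx}" \
        | sed -e 's/^\s*//g' | awk '{print $9}'
}

cpu=$(reactor_cpu 3196331 0)   # pid of the nvmf_tgt in this run
cpu=${cpu%.*}                  # drop the fractional part, as the cpu_rate=0 step above does
if (( cpu > 30 )); then echo busy; else echo idle; fi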
00:41:46.272 09:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:46.272 09:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:46.272 09:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3196331 -w 256 00:41:46.272 09:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:41:46.530 09:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3196335 root 20 0 20.1t 238080 111744 S 0.0 0.4 0:09.60 reactor_1' 00:41:46.530 09:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3196335 root 20 0 20.1t 238080 111744 S 0.0 0.4 0:09.60 reactor_1 00:41:46.530 09:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:46.530 09:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:46.530 09:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:41:46.530 09:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:41:46.530 09:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:41:46.530 09:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:41:46.530 09:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:41:46.530 09:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:46.530 09:41:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:41:46.788 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:41:46.788 09:41:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:41:46.788 09:41:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:41:46.788 09:41:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:41:46.788 09:41:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:41:46.788 09:41:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:41:46.788 09:41:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:41:46.788 09:41:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:41:46.788 09:41:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:41:46.789 09:41:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:41:46.789 09:41:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:46.789 09:41:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:41:46.789 09:41:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:46.789 09:41:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:41:46.789 09:41:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:46.789 09:41:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:46.789 rmmod nvme_tcp 00:41:46.789 rmmod nvme_fabrics 00:41:46.789 rmmod nvme_keyring 00:41:46.789 09:41:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:47.047 09:41:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:41:47.047 09:41:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:41:47.047 09:41:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 
3196331 ']' 00:41:47.047 09:41:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 3196331 00:41:47.047 09:41:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 3196331 ']' 00:41:47.047 09:41:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 3196331 00:41:47.047 09:41:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:41:47.047 09:41:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:47.047 09:41:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3196331 00:41:47.047 09:41:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:47.047 09:41:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:47.047 09:41:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3196331' 00:41:47.047 killing process with pid 3196331 00:41:47.047 09:41:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 3196331 00:41:47.047 09:41:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 3196331 00:41:47.982 09:41:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:47.982 09:41:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:47.982 09:41:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:47.982 09:41:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:41:47.982 09:41:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:41:47.982 09:41:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:47.982 09:41:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:41:47.982 09:41:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:47.982 09:41:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:47.982 09:41:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:47.982 09:41:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:41:47.982 09:41:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:50.518 09:41:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:50.518 00:41:50.518 real 0m20.547s 00:41:50.518 user 0m38.805s 00:41:50.518 sys 0m6.587s 00:41:50.518 09:41:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:50.518 09:41:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:50.518 ************************************ 00:41:50.518 END TEST nvmf_interrupt 00:41:50.518 ************************************ 00:41:50.518 00:41:50.518 real 35m35.552s 00:41:50.518 user 93m24.947s 00:41:50.518 sys 7m50.681s 00:41:50.518 09:41:54 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:50.518 09:41:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:50.518 ************************************ 00:41:50.518 END TEST nvmf_tcp 00:41:50.518 ************************************ 00:41:50.518 09:41:55 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:41:50.518 09:41:55 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:41:50.518 09:41:55 -- 
common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:41:50.518 09:41:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:50.518 09:41:55 -- common/autotest_common.sh@10 -- # set +x 00:41:50.518 ************************************ 00:41:50.518 START TEST spdkcli_nvmf_tcp 00:41:50.518 ************************************ 00:41:50.518 09:41:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:41:50.518 * Looking for test storage... 00:41:50.518 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:41:50.518 09:41:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:41:50.518 09:41:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:41:50.518 09:41:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:41:50.518 09:41:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:41:50.518 09:41:55 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:50.518 09:41:55 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:50.518 09:41:55 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:50.518 09:41:55 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:41:50.518 09:41:55 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:41:50.518 09:41:55 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:41:50.518 09:41:55 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:41:50.518 09:41:55 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:41:50.518 09:41:55 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:41:50.518 09:41:55 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:41:50.518 09:41:55 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:50.518 09:41:55 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:41:50.518 09:41:55 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:41:50.518 09:41:55 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:50.518 09:41:55 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:50.518 09:41:55 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:41:50.518 09:41:55 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:41:50.518 09:41:55 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:50.518 09:41:55 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:41:50.518 09:41:55 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:41:50.518 09:41:55 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:41:50.518 09:41:55 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:41:50.518 09:41:55 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:50.518 09:41:55 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:41:50.518 09:41:55 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:41:50.518 09:41:55 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:50.518 09:41:55 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:50.518 09:41:55 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:41:50.518 09:41:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:50.518 09:41:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:41:50.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:50.518 --rc genhtml_branch_coverage=1 00:41:50.518 --rc genhtml_function_coverage=1 00:41:50.518 --rc genhtml_legend=1 00:41:50.518 --rc geninfo_all_blocks=1 00:41:50.518 --rc geninfo_unexecuted_blocks=1 00:41:50.518 00:41:50.518 ' 00:41:50.518 09:41:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:41:50.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:50.518 --rc genhtml_branch_coverage=1 00:41:50.518 --rc genhtml_function_coverage=1 00:41:50.518 --rc genhtml_legend=1 00:41:50.518 --rc geninfo_all_blocks=1 00:41:50.518 --rc geninfo_unexecuted_blocks=1 00:41:50.518 00:41:50.518 ' 00:41:50.519 09:41:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:41:50.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:50.519 --rc genhtml_branch_coverage=1 00:41:50.519 --rc genhtml_function_coverage=1 00:41:50.519 --rc genhtml_legend=1 00:41:50.519 --rc geninfo_all_blocks=1 00:41:50.519 --rc geninfo_unexecuted_blocks=1 00:41:50.519 00:41:50.519 ' 00:41:50.519 09:41:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:41:50.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:50.519 --rc genhtml_branch_coverage=1 00:41:50.519 --rc genhtml_function_coverage=1 00:41:50.519 --rc genhtml_legend=1 00:41:50.519 --rc geninfo_all_blocks=1 00:41:50.519 --rc geninfo_unexecuted_blocks=1 00:41:50.519 00:41:50.519 ' 00:41:50.519 09:41:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:41:50.519 09:41:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:41:50.519 09:41:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:41:50.519 09:41:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:50.519 09:41:55 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:41:50.519 
09:41:55 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:50.519 09:41:55 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:50.519 09:41:55 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:50.519 09:41:55 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:50.519 09:41:55 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:50.519 09:41:55 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:50.519 09:41:55 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:50.519 09:41:55 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:50.519 09:41:55 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:50.519 09:41:55 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:50.519 09:41:55 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:41:50.519 09:41:55 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:41:50.519 09:41:55 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:50.519 09:41:55 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:50.519 09:41:55 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:50.519 09:41:55 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:50.519 09:41:55 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:50.519 09:41:55 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:41:50.519 09:41:55 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:50.519 09:41:55 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:50.519 09:41:55 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:50.519 09:41:55 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:50.519 09:41:55 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:50.519 09:41:55 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:50.519 09:41:55 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:41:50.519 09:41:55 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:50.519 09:41:55 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:41:50.519 09:41:55 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:50.519 09:41:55 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:50.519 09:41:55 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:50.519 09:41:55 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:50.519 09:41:55 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:50.519 09:41:55 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:41:50.519 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:41:50.519 09:41:55 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:50.519 09:41:55 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:50.519 09:41:55 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:50.519 09:41:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:41:50.519 09:41:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:41:50.519 09:41:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:41:50.519 09:41:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:41:50.519 09:41:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:50.519 09:41:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:50.519 09:41:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:41:50.519 09:41:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3198640 00:41:50.519 09:41:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:41:50.519 09:41:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 3198640 00:41:50.519 09:41:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 3198640 ']' 00:41:50.519 09:41:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:50.519 09:41:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:50.519 09:41:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:50.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:50.519 09:41:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:50.519 09:41:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:50.519 [2024-11-17 09:41:55.282975] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
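The run above launches a standalone nvmf_tgt (core mask 0x3, shared-memory id 0) and then sits in waitforlisten until the target's RPC socket at /var/tmp/spdk.sock answers, so the spdkcli commands that follow have a live target to talk to. A rough equivalent of that start-and-wait step, assuming the stock rpc.py client and an illustrative poll loop rather than the exact waitforlisten implementation:

# Assumed path; adjust SPDK_ROOT to the local checkout.
SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK_ROOT/build/bin/nvmf_tgt" -m 0x3 -p 0 &
tgt_pid=$!

# Poll the RPC socket until the target responds (illustrative 100 x 0.1s budget).
for _ in $(seq 1 100); do
    if "$SPDK_ROOT/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; then
        break
    fi
    sleep 0.1
done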
00:41:50.519 [2024-11-17 09:41:55.283148] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3198640 ] 00:41:50.519 [2024-11-17 09:41:55.443298] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:41:50.778 [2024-11-17 09:41:55.587114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:50.778 [2024-11-17 09:41:55.587117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:51.344 09:41:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:51.344 09:41:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:41:51.344 09:41:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:41:51.344 09:41:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:51.344 09:41:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:51.344 09:41:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:41:51.344 09:41:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:41:51.344 09:41:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:41:51.344 09:41:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:51.344 09:41:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:51.344 09:41:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:41:51.344 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:41:51.344 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:41:51.344 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:41:51.344 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:41:51.344 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:41:51.344 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:41:51.344 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:41:51.344 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:41:51.344 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:41:51.344 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:41:51.344 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:41:51.344 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:41:51.344 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:41:51.344 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:41:51.344 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:41:51.344 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:41:51.345 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:41:51.345 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:41:51.345 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:41:51.345 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:41:51.345 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:41:51.345 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:41:51.345 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:41:51.345 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:41:51.345 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:41:51.345 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:41:51.345 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:41:51.345 ' 00:41:54.627 [2024-11-17 09:41:59.086382] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:55.559 [2024-11-17 09:42:00.363911] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:41:58.116 [2024-11-17 09:42:02.727548] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:42:00.013 [2024-11-17 09:42:04.774258] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:42:01.386 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:42:01.386 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:42:01.386 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:42:01.386 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:42:01.386 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:42:01.386 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:42:01.386 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:42:01.386 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:42:01.386 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:42:01.386 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:42:01.386 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:42:01.386 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:42:01.386 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:42:01.386 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:42:01.386 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:42:01.386 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:42:01.386 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:42:01.386 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:42:01.386 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:42:01.386 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:42:01.386 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:42:01.386 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:42:01.386 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:42:01.386 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:42:01.386 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:42:01.386 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:42:01.387 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:42:01.387 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:42:01.644 09:42:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:42:01.644 09:42:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:01.644 09:42:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:01.644 09:42:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:42:01.644 09:42:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:01.644 09:42:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:01.644 09:42:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:42:01.644 09:42:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:42:01.902 09:42:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:42:02.160 09:42:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:42:02.160 09:42:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:42:02.160 09:42:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:02.160 09:42:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:02.160 
09:42:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:42:02.160 09:42:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:02.160 09:42:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:02.160 09:42:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:42:02.160 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:42:02.160 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:42:02.160 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:42:02.160 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:42:02.160 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:42:02.160 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:42:02.160 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:42:02.160 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:42:02.160 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:42:02.160 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:42:02.160 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:42:02.160 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:42:02.160 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:42:02.160 ' 00:42:08.716 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:42:08.716 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:42:08.716 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:42:08.716 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:42:08.716 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:42:08.716 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:42:08.716 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:42:08.716 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:42:08.716 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:42:08.716 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:42:08.716 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:42:08.716 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:42:08.716 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:42:08.716 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:42:08.716 09:42:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:42:08.716 09:42:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:08.716 09:42:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:08.716 
09:42:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 3198640 00:42:08.716 09:42:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 3198640 ']' 00:42:08.716 09:42:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 3198640 00:42:08.716 09:42:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:42:08.716 09:42:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:08.716 09:42:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3198640 00:42:08.716 09:42:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:08.716 09:42:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:08.716 09:42:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3198640' 00:42:08.716 killing process with pid 3198640 00:42:08.716 09:42:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 3198640 00:42:08.716 09:42:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 3198640 00:42:09.284 09:42:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:42:09.284 09:42:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:42:09.284 09:42:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3198640 ']' 00:42:09.284 09:42:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 3198640 00:42:09.284 09:42:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 3198640 ']' 00:42:09.284 09:42:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 3198640 00:42:09.284 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3198640) - No such process 00:42:09.284 09:42:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 3198640 is not found' 00:42:09.284 Process with pid 3198640 is not found 00:42:09.284 09:42:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:42:09.284 09:42:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:42:09.284 09:42:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:42:09.284 00:42:09.284 real 0m19.000s 00:42:09.284 user 0m39.849s 00:42:09.284 sys 0m1.059s 00:42:09.284 09:42:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:09.284 09:42:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:09.284 ************************************ 00:42:09.284 END TEST spdkcli_nvmf_tcp 00:42:09.284 ************************************ 00:42:09.284 09:42:14 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:42:09.284 09:42:14 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:42:09.284 09:42:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:09.284 09:42:14 -- common/autotest_common.sh@10 -- # set +x 00:42:09.284 ************************************ 00:42:09.284 START TEST nvmf_identify_passthru 00:42:09.284 ************************************ 00:42:09.284 09:42:14 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:42:09.284 * Looking for test 
storage... 00:42:09.284 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:09.284 09:42:14 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:42:09.284 09:42:14 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lcov --version 00:42:09.284 09:42:14 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:42:09.284 09:42:14 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:42:09.284 09:42:14 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:09.284 09:42:14 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:09.284 09:42:14 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:09.284 09:42:14 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:42:09.284 09:42:14 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:42:09.284 09:42:14 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:42:09.284 09:42:14 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:42:09.284 09:42:14 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:42:09.284 09:42:14 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:42:09.284 09:42:14 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:42:09.284 09:42:14 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:09.284 09:42:14 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:42:09.284 09:42:14 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:42:09.284 09:42:14 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:09.284 09:42:14 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:09.284 09:42:14 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:42:09.284 09:42:14 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:42:09.284 09:42:14 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:09.284 09:42:14 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:42:09.284 09:42:14 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:42:09.284 09:42:14 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:42:09.284 09:42:14 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:42:09.284 09:42:14 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:09.284 09:42:14 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:42:09.284 09:42:14 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:42:09.284 09:42:14 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:09.284 09:42:14 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:09.284 09:42:14 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:42:09.284 09:42:14 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:09.284 09:42:14 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:42:09.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:09.284 --rc genhtml_branch_coverage=1 00:42:09.284 --rc genhtml_function_coverage=1 00:42:09.284 --rc genhtml_legend=1 00:42:09.284 --rc geninfo_all_blocks=1 00:42:09.284 --rc geninfo_unexecuted_blocks=1 00:42:09.284 00:42:09.284 ' 00:42:09.284 09:42:14 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:42:09.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:09.284 --rc genhtml_branch_coverage=1 00:42:09.284 --rc genhtml_function_coverage=1 00:42:09.284 --rc genhtml_legend=1 00:42:09.284 --rc geninfo_all_blocks=1 00:42:09.284 --rc geninfo_unexecuted_blocks=1 00:42:09.284 00:42:09.284 ' 00:42:09.284 09:42:14 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:42:09.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:09.284 --rc genhtml_branch_coverage=1 00:42:09.284 --rc genhtml_function_coverage=1 00:42:09.284 --rc genhtml_legend=1 00:42:09.284 --rc geninfo_all_blocks=1 00:42:09.284 --rc geninfo_unexecuted_blocks=1 00:42:09.284 00:42:09.284 ' 00:42:09.284 09:42:14 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:42:09.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:09.284 --rc genhtml_branch_coverage=1 00:42:09.284 --rc genhtml_function_coverage=1 00:42:09.284 --rc genhtml_legend=1 00:42:09.284 --rc geninfo_all_blocks=1 00:42:09.284 --rc geninfo_unexecuted_blocks=1 00:42:09.284 00:42:09.284 ' 00:42:09.284 09:42:14 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:09.284 09:42:14 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:42:09.284 09:42:14 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:09.284 09:42:14 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:09.284 09:42:14 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:09.284 09:42:14 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:42:09.284 09:42:14 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:09.284 09:42:14 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:09.284 09:42:14 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:09.284 09:42:14 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:09.284 09:42:14 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:09.284 09:42:14 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:09.284 09:42:14 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:42:09.284 09:42:14 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:42:09.284 09:42:14 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:09.284 09:42:14 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:09.284 09:42:14 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:09.284 09:42:14 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:09.285 09:42:14 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:09.285 09:42:14 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:42:09.285 09:42:14 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:09.285 09:42:14 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:09.285 09:42:14 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:09.285 09:42:14 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:09.285 09:42:14 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:09.285 09:42:14 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:09.285 09:42:14 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:42:09.285 09:42:14 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:09.285 09:42:14 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:42:09.285 09:42:14 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:09.285 09:42:14 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:09.285 09:42:14 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:09.285 09:42:14 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:09.285 09:42:14 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:09.285 09:42:14 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:42:09.285 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:09.285 09:42:14 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:09.285 09:42:14 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:09.285 09:42:14 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:09.285 09:42:14 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:09.285 09:42:14 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:42:09.285 09:42:14 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:09.285 09:42:14 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:09.285 09:42:14 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:09.285 09:42:14 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:09.285 09:42:14 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:09.285 09:42:14 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:09.285 09:42:14 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:42:09.285 09:42:14 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:09.285 09:42:14 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:42:09.285 09:42:14 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:42:09.285 09:42:14 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:09.285 09:42:14 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:42:09.285 09:42:14 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:42:09.285 09:42:14 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:42:09.285 09:42:14 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:09.285 09:42:14 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:42:09.285 09:42:14 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:09.285 09:42:14 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:42:09.285 09:42:14 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:42:09.285 09:42:14 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:42:09.285 09:42:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:11.815 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:11.815 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:42:11.815 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:11.815 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:42:11.816 09:42:16 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:42:11.816 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:42:11.816 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:42:11.816 Found net devices under 0000:0a:00.0: cvl_0_0 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:42:11.816 Found net devices under 0000:0a:00.1: cvl_0_1 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:11.816 09:42:16 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:11.816 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:11.816 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.232 ms 00:42:11.816 00:42:11.816 --- 10.0.0.2 ping statistics --- 00:42:11.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:11.816 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:11.816 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
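The surrounding commands are nvmf_tcp_init building the test topology out of the two E810 ports, which on this rig are evidently cabled back to back: cvl_0_0 becomes the target interface and is moved into a private network namespace at 10.0.0.2, cvl_0_1 stays in the default namespace as the initiator at 10.0.0.1, an iptables rule admits NVMe/TCP traffic on port 4420, and a ping in each direction confirms the link. A standalone sketch of the same steps, using the interface names, addresses and namespace name from this log (the iptables comment is shortened here):

NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0          # target side, moved into the namespace
INI_IF=cvl_0_1          # initiator side, stays in the default namespace
ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
# Tag the rule so teardown can find and drop it again.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
# Connectivity check in both directions, as the script does next.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1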
00:42:11.816 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.053 ms 00:42:11.816 00:42:11.816 --- 10.0.0.1 ping statistics --- 00:42:11.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:11.816 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:42:11.816 09:42:16 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:42:11.816 09:42:16 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:42:11.816 09:42:16 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:11.816 09:42:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:11.816 09:42:16 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:42:11.816 09:42:16 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:42:11.816 09:42:16 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:42:11.816 09:42:16 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:42:11.816 09:42:16 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:42:11.817 09:42:16 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:42:11.817 09:42:16 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:42:11.817 09:42:16 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:42:11.817 09:42:16 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:42:11.817 09:42:16 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:42:11.817 09:42:16 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:42:11.817 09:42:16 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:42:11.817 09:42:16 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:88:00.0 00:42:11.817 09:42:16 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:42:11.817 09:42:16 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:42:11.817 09:42:16 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:42:11.817 09:42:16 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:42:11.817 09:42:16 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:42:16.014 09:42:20 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=PHLJ916004901P0FGN 00:42:16.014 09:42:20 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:42:16.014 09:42:20 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:42:16.014 09:42:20 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:42:21.280 09:42:25 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:42:21.280 09:42:25 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:42:21.280 09:42:25 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:21.280 09:42:25 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:21.280 09:42:25 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:42:21.280 09:42:25 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:21.280 09:42:25 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:21.280 09:42:25 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3204154 00:42:21.280 09:42:25 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:42:21.280 09:42:25 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:42:21.280 09:42:25 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3204154 00:42:21.280 09:42:25 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 3204154 ']' 00:42:21.280 09:42:25 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:21.280 09:42:25 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:21.280 09:42:25 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:21.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:21.280 09:42:25 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:21.280 09:42:25 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:21.280 [2024-11-17 09:42:25.451765] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:42:21.280 [2024-11-17 09:42:25.451925] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:21.280 [2024-11-17 09:42:25.606767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:42:21.280 [2024-11-17 09:42:25.753419] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:21.280 [2024-11-17 09:42:25.753515] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
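With nvmf_tgt now starting under --wait-for-rpc inside the target namespace, everything that follows in this test is driven over JSON-RPC on /var/tmp/spdk.sock; rpc_cmd in the trace is a thin wrapper around scripts/rpc.py. Condensed into direct calls, the bring-up and the comparison this test exists for look roughly like the sketch below; paths are relative to the SPDK checkout, the BDF, NQNs and port are the ones in this log, and the awk filters stand in for the grep/awk pipelines above.

# Reference identify data, read from the PCIe controller before the target claims it
# (this is what the grep 'Serial Number:' pipeline above captured).
pcie_sn=$(./build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' \
          | awk '/Serial Number:/ {print $3}')

# Target bring-up over JSON-RPC, mirroring the rpc_cmd calls in the trace below.
./scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr      # enable admin identify passthru
./scripts/rpc.py framework_start_init                           # release the --wait-for-rpc hold
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# The check itself: identify data seen through the TCP subsystem must match the PCIe controller.
tcp_sn=$(./build/bin/spdk_nvme_identify \
         -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
         | awk '/Serial Number:/ {print $3}')
[ "$pcie_sn" = "$tcp_sn" ] || echo "passthru identify mismatch: $pcie_sn vs $tcp_sn" >&2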
00:42:21.280 [2024-11-17 09:42:25.753542] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:21.280 [2024-11-17 09:42:25.753577] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:21.280 [2024-11-17 09:42:25.753601] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:21.280 [2024-11-17 09:42:25.756545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:21.280 [2024-11-17 09:42:25.756619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:42:21.280 [2024-11-17 09:42:25.756671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:21.280 [2024-11-17 09:42:25.756677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:42:21.538 09:42:26 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:21.538 09:42:26 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:42:21.538 09:42:26 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:42:21.538 09:42:26 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:21.538 09:42:26 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:21.538 INFO: Log level set to 20 00:42:21.538 INFO: Requests: 00:42:21.538 { 00:42:21.538 "jsonrpc": "2.0", 00:42:21.538 "method": "nvmf_set_config", 00:42:21.538 "id": 1, 00:42:21.538 "params": { 00:42:21.538 "admin_cmd_passthru": { 00:42:21.538 "identify_ctrlr": true 00:42:21.538 } 00:42:21.538 } 00:42:21.538 } 00:42:21.538 00:42:21.538 INFO: response: 00:42:21.538 { 00:42:21.539 "jsonrpc": "2.0", 00:42:21.539 "id": 1, 00:42:21.539 "result": true 00:42:21.539 } 00:42:21.539 00:42:21.539 09:42:26 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:21.539 09:42:26 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:42:21.539 09:42:26 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:21.539 09:42:26 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:21.539 INFO: Setting log level to 20 00:42:21.539 INFO: Setting log level to 20 00:42:21.539 INFO: Log level set to 20 00:42:21.539 INFO: Log level set to 20 00:42:21.539 INFO: Requests: 00:42:21.539 { 00:42:21.539 "jsonrpc": "2.0", 00:42:21.539 "method": "framework_start_init", 00:42:21.539 "id": 1 00:42:21.539 } 00:42:21.539 00:42:21.539 INFO: Requests: 00:42:21.539 { 00:42:21.539 "jsonrpc": "2.0", 00:42:21.539 "method": "framework_start_init", 00:42:21.539 "id": 1 00:42:21.539 } 00:42:21.539 00:42:21.797 [2024-11-17 09:42:26.787077] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:42:21.797 INFO: response: 00:42:21.797 { 00:42:21.797 "jsonrpc": "2.0", 00:42:21.797 "id": 1, 00:42:21.797 "result": true 00:42:21.797 } 00:42:21.797 00:42:21.797 INFO: response: 00:42:21.797 { 00:42:21.797 "jsonrpc": "2.0", 00:42:21.797 "id": 1, 00:42:21.797 "result": true 00:42:21.797 } 00:42:21.797 00:42:21.797 09:42:26 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:21.797 09:42:26 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:42:21.797 09:42:26 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:21.797 09:42:26 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:42:21.797 INFO: Setting log level to 40 00:42:21.797 INFO: Setting log level to 40 00:42:21.797 INFO: Setting log level to 40 00:42:21.797 [2024-11-17 09:42:26.800016] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:22.056 09:42:26 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:22.056 09:42:26 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:42:22.056 09:42:26 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:22.056 09:42:26 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:22.056 09:42:26 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:42:22.056 09:42:26 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:22.056 09:42:26 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:25.337 Nvme0n1 00:42:25.337 09:42:29 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:25.337 09:42:29 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:42:25.337 09:42:29 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:25.337 09:42:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:25.337 09:42:29 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:25.337 09:42:29 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:42:25.337 09:42:29 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:25.337 09:42:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:25.337 09:42:29 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:25.337 09:42:29 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:25.337 09:42:29 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:25.337 09:42:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:25.337 [2024-11-17 09:42:29.757551] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:25.337 09:42:29 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:25.337 09:42:29 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:42:25.337 09:42:29 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:25.337 09:42:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:25.337 [ 00:42:25.337 { 00:42:25.337 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:42:25.337 "subtype": "Discovery", 00:42:25.337 "listen_addresses": [], 00:42:25.337 "allow_any_host": true, 00:42:25.337 "hosts": [] 00:42:25.337 }, 00:42:25.337 { 00:42:25.337 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:42:25.337 "subtype": "NVMe", 00:42:25.337 "listen_addresses": [ 00:42:25.337 { 00:42:25.337 "trtype": "TCP", 00:42:25.337 "adrfam": "IPv4", 00:42:25.337 "traddr": "10.0.0.2", 00:42:25.337 "trsvcid": "4420" 00:42:25.337 } 00:42:25.337 ], 00:42:25.337 "allow_any_host": true, 00:42:25.337 "hosts": [], 00:42:25.337 "serial_number": 
"SPDK00000000000001", 00:42:25.337 "model_number": "SPDK bdev Controller", 00:42:25.337 "max_namespaces": 1, 00:42:25.337 "min_cntlid": 1, 00:42:25.337 "max_cntlid": 65519, 00:42:25.337 "namespaces": [ 00:42:25.337 { 00:42:25.337 "nsid": 1, 00:42:25.337 "bdev_name": "Nvme0n1", 00:42:25.337 "name": "Nvme0n1", 00:42:25.337 "nguid": "C35FFCABB5604DCAAA790AA226BE321A", 00:42:25.337 "uuid": "c35ffcab-b560-4dca-aa79-0aa226be321a" 00:42:25.337 } 00:42:25.337 ] 00:42:25.337 } 00:42:25.337 ] 00:42:25.338 09:42:29 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:25.338 09:42:29 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:42:25.338 09:42:29 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:42:25.338 09:42:29 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:42:25.338 09:42:30 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:42:25.338 09:42:30 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:42:25.338 09:42:30 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:42:25.338 09:42:30 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:42:25.597 09:42:30 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:42:25.597 09:42:30 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:42:25.597 09:42:30 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:42:25.597 09:42:30 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:42:25.597 09:42:30 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:25.597 09:42:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:25.597 09:42:30 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:25.597 09:42:30 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:42:25.597 09:42:30 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:42:25.597 09:42:30 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:25.597 09:42:30 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:42:25.597 09:42:30 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:25.597 09:42:30 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:42:25.597 09:42:30 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:25.597 09:42:30 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:25.597 rmmod nvme_tcp 00:42:25.597 rmmod nvme_fabrics 00:42:25.597 rmmod nvme_keyring 00:42:25.597 09:42:30 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:25.597 09:42:30 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:42:25.597 09:42:30 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:42:25.597 09:42:30 nvmf_identify_passthru -- nvmf/common.sh@517 -- # 
'[' -n 3204154 ']' 00:42:25.597 09:42:30 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 3204154 00:42:25.597 09:42:30 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 3204154 ']' 00:42:25.597 09:42:30 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 3204154 00:42:25.597 09:42:30 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:42:25.597 09:42:30 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:25.597 09:42:30 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3204154 00:42:25.597 09:42:30 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:25.597 09:42:30 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:25.597 09:42:30 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3204154' 00:42:25.597 killing process with pid 3204154 00:42:25.597 09:42:30 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 3204154 00:42:25.597 09:42:30 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 3204154 00:42:28.128 09:42:33 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:42:28.128 09:42:33 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:28.128 09:42:33 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:28.128 09:42:33 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:42:28.128 09:42:33 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:42:28.128 09:42:33 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:42:28.128 09:42:33 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:28.128 09:42:33 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:28.128 09:42:33 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:28.128 09:42:33 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:28.128 09:42:33 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:42:28.128 09:42:33 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:30.666 09:42:35 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:30.666 00:42:30.666 real 0m21.024s 00:42:30.666 user 0m34.073s 00:42:30.666 sys 0m3.668s 00:42:30.666 09:42:35 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:30.666 09:42:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:30.666 ************************************ 00:42:30.666 END TEST nvmf_identify_passthru 00:42:30.666 ************************************ 00:42:30.666 09:42:35 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:42:30.666 09:42:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:30.666 09:42:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:30.666 09:42:35 -- common/autotest_common.sh@10 -- # set +x 00:42:30.666 ************************************ 00:42:30.666 START TEST nvmf_dif 00:42:30.666 ************************************ 00:42:30.666 09:42:35 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:42:30.666 * Looking for test 
storage... 00:42:30.666 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:30.666 09:42:35 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:42:30.666 09:42:35 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:42:30.666 09:42:35 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:42:30.666 09:42:35 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:42:30.666 09:42:35 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:30.666 09:42:35 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:30.666 09:42:35 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:30.666 09:42:35 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:42:30.666 09:42:35 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:42:30.666 09:42:35 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:42:30.666 09:42:35 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:42:30.666 09:42:35 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:42:30.666 09:42:35 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:42:30.666 09:42:35 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:42:30.666 09:42:35 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:30.666 09:42:35 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:42:30.666 09:42:35 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:42:30.666 09:42:35 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:30.666 09:42:35 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:42:30.666 09:42:35 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:42:30.666 09:42:35 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:42:30.666 09:42:35 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:30.666 09:42:35 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:42:30.666 09:42:35 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:42:30.666 09:42:35 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:42:30.666 09:42:35 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:42:30.666 09:42:35 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:30.666 09:42:35 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:42:30.666 09:42:35 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:42:30.666 09:42:35 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:30.666 09:42:35 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:30.666 09:42:35 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:42:30.666 09:42:35 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:30.666 09:42:35 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:42:30.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:30.666 --rc genhtml_branch_coverage=1 00:42:30.666 --rc genhtml_function_coverage=1 00:42:30.666 --rc genhtml_legend=1 00:42:30.666 --rc geninfo_all_blocks=1 00:42:30.666 --rc geninfo_unexecuted_blocks=1 00:42:30.666 00:42:30.666 ' 00:42:30.666 09:42:35 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:42:30.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:30.666 --rc genhtml_branch_coverage=1 00:42:30.666 --rc genhtml_function_coverage=1 00:42:30.666 --rc genhtml_legend=1 00:42:30.666 --rc geninfo_all_blocks=1 00:42:30.666 --rc geninfo_unexecuted_blocks=1 00:42:30.666 00:42:30.666 ' 00:42:30.666 09:42:35 nvmf_dif -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:42:30.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:30.666 --rc genhtml_branch_coverage=1 00:42:30.666 --rc genhtml_function_coverage=1 00:42:30.666 --rc genhtml_legend=1 00:42:30.666 --rc geninfo_all_blocks=1 00:42:30.666 --rc geninfo_unexecuted_blocks=1 00:42:30.666 00:42:30.666 ' 00:42:30.666 09:42:35 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:42:30.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:30.666 --rc genhtml_branch_coverage=1 00:42:30.666 --rc genhtml_function_coverage=1 00:42:30.666 --rc genhtml_legend=1 00:42:30.666 --rc geninfo_all_blocks=1 00:42:30.666 --rc geninfo_unexecuted_blocks=1 00:42:30.666 00:42:30.666 ' 00:42:30.666 09:42:35 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:30.667 09:42:35 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:42:30.667 09:42:35 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:30.667 09:42:35 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:30.667 09:42:35 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:30.667 09:42:35 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:30.667 09:42:35 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:30.667 09:42:35 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:30.667 09:42:35 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:30.667 09:42:35 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:30.667 09:42:35 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:30.667 09:42:35 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:30.667 09:42:35 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:42:30.667 09:42:35 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:42:30.667 09:42:35 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:30.667 09:42:35 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:30.667 09:42:35 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:30.667 09:42:35 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:30.667 09:42:35 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:30.667 09:42:35 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:42:30.667 09:42:35 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:30.667 09:42:35 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:30.667 09:42:35 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:30.667 09:42:35 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:30.667 09:42:35 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:30.667 09:42:35 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:30.667 09:42:35 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:42:30.667 09:42:35 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:30.667 09:42:35 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:42:30.667 09:42:35 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:30.667 09:42:35 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:30.667 09:42:35 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:30.667 09:42:35 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:30.667 09:42:35 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:30.667 09:42:35 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:42:30.667 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:30.667 09:42:35 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:30.667 09:42:35 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:30.667 09:42:35 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:30.667 09:42:35 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:42:30.667 09:42:35 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:42:30.667 09:42:35 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:42:30.667 09:42:35 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:42:30.667 09:42:35 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:42:30.667 09:42:35 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:42:30.667 09:42:35 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:30.667 09:42:35 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:42:30.667 09:42:35 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:42:30.667 09:42:35 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:42:30.667 09:42:35 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:30.667 09:42:35 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:42:30.667 09:42:35 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:30.667 09:42:35 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:42:30.667 09:42:35 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:42:30.667 09:42:35 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:42:30.667 09:42:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:42:32.571 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:32.571 
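The 'Found net devices under ...' lines here and earlier are the result of a plain sysfs walk: for each supported PCI function the script expands /sys/bus/pci/devices/<bdf>/net/* and keeps the basename as the kernel interface name. A standalone equivalent for the two functions reported on this host:

for bdf in 0000:0a:00.0 0000:0a:00.1; do                 # BDFs reported in this log
    for path in /sys/bus/pci/devices/$bdf/net/*; do
        [ -e "$path" ] && echo "Found net devices under $bdf: ${path##*/}"
    done
done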
09:42:37 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:42:32.571 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:42:32.571 Found net devices under 0000:0a:00.0: cvl_0_0 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:42:32.571 Found net devices under 0000:0a:00.1: cvl_0_1 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:32.571 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:32.571 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.325 ms 00:42:32.571 00:42:32.571 --- 10.0.0.2 ping statistics --- 00:42:32.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:32.571 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:32.571 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
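This is the same namespace construction as in the identify_passthru run above, and it is undone symmetrically when each test finishes (as happened at the end of identify_passthru): the SPDK-tagged iptables rule is filtered out of iptables-save output, the namespace is removed, and the initiator address is flushed. A sketch of that teardown; the explicit ip netns del is an assumption, since the trace keeps the body of remove_spdk_ns behind xtrace_disable:

# Teardown mirroring nvmf_tcp_fini; the netns deletion is assumed, not shown in the trace.
iptables-save | grep -v SPDK_NVMF | iptables-restore
ip netns del cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_1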
00:42:32.571 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms 00:42:32.571 00:42:32.571 --- 10.0.0.1 ping statistics --- 00:42:32.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:32.571 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:42:32.571 09:42:37 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:42:32.572 09:42:37 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:42:33.946 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:42:33.947 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:42:33.947 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:42:33.947 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:42:33.947 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:42:33.947 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:42:33.947 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:42:33.947 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:42:33.947 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:42:33.947 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:42:33.947 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:42:33.947 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:42:33.947 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:42:33.947 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:42:33.947 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:42:33.947 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:42:33.947 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:42:33.947 09:42:38 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:33.947 09:42:38 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:42:33.947 09:42:38 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:42:33.947 09:42:38 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:33.947 09:42:38 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:42:33.947 09:42:38 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:42:33.947 09:42:38 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:42:33.947 09:42:38 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:42:33.947 09:42:38 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:42:33.947 09:42:38 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:33.947 09:42:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:33.947 09:42:38 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=3207691 00:42:33.947 09:42:38 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:42:33.947 09:42:38 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 3207691 00:42:33.947 09:42:38 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 3207691 ']' 00:42:33.947 09:42:38 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:33.947 09:42:38 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:33.947 09:42:38 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:42:33.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:33.947 09:42:38 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:33.947 09:42:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:33.947 [2024-11-17 09:42:38.890757] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:42:33.947 [2024-11-17 09:42:38.890899] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:34.205 [2024-11-17 09:42:39.033572] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:34.205 [2024-11-17 09:42:39.150974] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:34.205 [2024-11-17 09:42:39.151057] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:34.205 [2024-11-17 09:42:39.151078] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:34.205 [2024-11-17 09:42:39.151099] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:34.205 [2024-11-17 09:42:39.151115] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:34.205 [2024-11-17 09:42:39.152536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:35.141 09:42:39 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:35.141 09:42:39 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:42:35.141 09:42:39 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:42:35.141 09:42:39 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:35.141 09:42:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:35.141 09:42:39 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:35.141 09:42:39 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:42:35.141 09:42:39 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:42:35.141 09:42:39 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:35.141 09:42:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:35.141 [2024-11-17 09:42:39.859639] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:35.141 09:42:39 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:35.141 09:42:39 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:42:35.141 09:42:39 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:35.141 09:42:39 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:35.141 09:42:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:35.141 ************************************ 00:42:35.141 START TEST fio_dif_1_default 00:42:35.141 ************************************ 00:42:35.141 09:42:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:42:35.141 09:42:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:42:35.141 09:42:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:42:35.141 09:42:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:42:35.141 09:42:39 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:42:35.141 09:42:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:42:35.141 09:42:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:42:35.141 09:42:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:35.141 09:42:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:35.141 bdev_null0 00:42:35.141 09:42:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:35.141 09:42:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:42:35.141 09:42:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:35.141 09:42:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:35.141 09:42:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:35.141 09:42:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:42:35.141 09:42:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:35.141 09:42:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:35.141 09:42:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:35.141 09:42:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:35.141 09:42:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:35.141 09:42:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:35.141 [2024-11-17 09:42:39.920013] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:35.141 09:42:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:35.141 09:42:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:42:35.141 09:42:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:42:35.141 09:42:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:42:35.141 09:42:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:42:35.141 09:42:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:42:35.141 09:42:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:35.141 09:42:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:35.141 { 00:42:35.141 "params": { 00:42:35.141 "name": "Nvme$subsystem", 00:42:35.141 "trtype": "$TEST_TRANSPORT", 00:42:35.141 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:35.141 "adrfam": "ipv4", 00:42:35.141 "trsvcid": "$NVMF_PORT", 00:42:35.141 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:35.141 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:35.141 "hdgst": ${hdgst:-false}, 00:42:35.141 "ddgst": ${ddgst:-false} 00:42:35.141 }, 00:42:35.141 "method": "bdev_nvme_attach_controller" 00:42:35.141 } 00:42:35.141 EOF 00:42:35.141 )") 00:42:35.141 09:42:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:35.141 09:42:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # 
fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:35.141 09:42:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:42:35.141 09:42:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:35.141 09:42:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:42:35.141 09:42:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:42:35.141 09:42:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:35.141 09:42:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:42:35.141 09:42:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:42:35.141 09:42:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:42:35.141 09:42:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:35.141 09:42:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:42:35.141 09:42:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:42:35.141 09:42:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:35.141 09:42:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:42:35.141 09:42:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:35.141 09:42:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:42:35.141 09:42:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:42:35.141 09:42:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
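The fio job that runs next is wired up entirely through anonymous file descriptors: /dev/fd/62 carries a bdev-subsystem JSON config built around the bdev_nvme_attach_controller parameters printed just below, /dev/fd/61 carries the job file produced by gen_fio_conf, and the SPDK fio plugin comes in via LD_PRELOAD (together with libasan, since this is an ASAN build). With ordinary files in place of the descriptors the run looks roughly like this; bdev.json and job.fio are placeholder names, and the job options are inferred from the parameters fio reports below (randread, 4k blocks, iodepth 4, a single thread against bdev Nvme0n1):

# Illustrative stand-in for what the harness streams over /dev/fd/61.
cat > job.fio <<'EOF'
[filename0]
ioengine=spdk_bdev
thread=1
rw=randread
bs=4k
iodepth=4
filename=Nvme0n1
EOF

# bdev.json: the attach_controller block printed below, wrapped in the usual
# {"subsystems":[{"subsystem":"bdev","config":[...]}]} envelope.
LD_PRELOAD=./build/fio/spdk_bdev /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf bdev.json job.fio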
00:42:35.141 09:42:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:42:35.141 09:42:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:35.141 "params": { 00:42:35.141 "name": "Nvme0", 00:42:35.141 "trtype": "tcp", 00:42:35.141 "traddr": "10.0.0.2", 00:42:35.141 "adrfam": "ipv4", 00:42:35.141 "trsvcid": "4420", 00:42:35.141 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:35.141 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:35.141 "hdgst": false, 00:42:35.141 "ddgst": false 00:42:35.141 }, 00:42:35.141 "method": "bdev_nvme_attach_controller" 00:42:35.141 }' 00:42:35.141 09:42:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:42:35.141 09:42:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:42:35.141 09:42:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1351 -- # break 00:42:35.142 09:42:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:42:35.142 09:42:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:35.401 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:42:35.401 fio-3.35 00:42:35.401 Starting 1 thread 00:42:47.666 00:42:47.666 filename0: (groupid=0, jobs=1): err= 0: pid=3208050: Sun Nov 17 09:42:51 2024 00:42:47.666 read: IOPS=190, BW=763KiB/s (781kB/s)(7632KiB/10004msec) 00:42:47.666 slat (nsec): min=4677, max=73155, avg=12975.53, stdev=5194.18 00:42:47.666 clat (usec): min=690, max=42188, avg=20932.48, stdev=20180.04 00:42:47.666 lat (usec): min=700, max=42204, avg=20945.46, stdev=20179.80 00:42:47.666 clat percentiles (usec): 00:42:47.666 | 1.00th=[ 725], 5.00th=[ 750], 10.00th=[ 766], 20.00th=[ 775], 00:42:47.666 | 30.00th=[ 791], 40.00th=[ 807], 50.00th=[ 4293], 60.00th=[41157], 00:42:47.666 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:42:47.666 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:42:47.666 | 99.99th=[42206] 00:42:47.666 bw ( KiB/s): min= 704, max= 768, per=99.75%, avg=761.60, stdev=19.70, samples=20 00:42:47.666 iops : min= 176, max= 192, avg=190.40, stdev= 4.92, samples=20 00:42:47.666 lat (usec) : 750=4.98%, 1000=44.92% 00:42:47.666 lat (msec) : 10=0.21%, 50=49.90% 00:42:47.666 cpu : usr=92.89%, sys=6.65%, ctx=17, majf=0, minf=1636 00:42:47.666 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:47.666 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:47.666 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:47.666 issued rwts: total=1908,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:47.666 latency : target=0, window=0, percentile=100.00%, depth=4 00:42:47.666 00:42:47.666 Run status group 0 (all jobs): 00:42:47.666 READ: bw=763KiB/s (781kB/s), 763KiB/s-763KiB/s (781kB/s-781kB/s), io=7632KiB (7815kB), run=10004-10004msec 00:42:47.666 ----------------------------------------------------- 00:42:47.666 Suppressions used: 00:42:47.666 count bytes template 00:42:47.666 1 8 /usr/src/fio/parse.c 00:42:47.666 1 8 libtcmalloc_minimal.so 00:42:47.666 1 904 libcrypto.so 00:42:47.666 ----------------------------------------------------- 00:42:47.666 00:42:47.666 09:42:52 
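The sub-test that just finished and the ones that follow all build their targets the same way: a null bdev carrying 16 bytes of per-block metadata with DIF type 1, exposed through its own subsystem over the TCP transport created earlier with --dif-insert-or-strip so that protection information is inserted and stripped on the target side. Condensed into direct scripts/rpc.py calls for subsystem index 0, using the values from this trace:

# Transport is created once per run (earlier in this log).
./scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip
# Per-subsystem setup, as done by create_subsystems 0.
./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# Matching teardown at the end of each sub-test.
./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
./scripts/rpc.py bdev_null_delete bdev_null0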
nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:42:47.666 09:42:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:42:47.666 09:42:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:42:47.666 09:42:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:42:47.666 09:42:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:42:47.667 09:42:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:47.667 09:42:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:47.667 09:42:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:47.667 09:42:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:47.667 09:42:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:42:47.667 09:42:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:47.667 09:42:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:47.667 09:42:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:47.667 00:42:47.667 real 0m12.495s 00:42:47.667 user 0m11.599s 00:42:47.667 sys 0m1.140s 00:42:47.667 09:42:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:47.667 09:42:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:47.667 ************************************ 00:42:47.667 END TEST fio_dif_1_default 00:42:47.667 ************************************ 00:42:47.667 09:42:52 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:42:47.667 09:42:52 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:47.667 09:42:52 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:47.667 09:42:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:47.667 ************************************ 00:42:47.667 START TEST fio_dif_1_multi_subsystems 00:42:47.667 ************************************ 00:42:47.667 09:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:42:47.667 09:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:42:47.667 09:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:42:47.667 09:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:42:47.667 09:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:42:47.667 09:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:42:47.667 09:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:42:47.667 09:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:42:47.667 09:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:47.667 09:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:47.667 bdev_null0 00:42:47.667 09:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:47.667 09:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:42:47.667 09:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:47.667 09:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:47.667 09:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:47.667 09:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:42:47.667 09:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:47.667 09:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:47.667 09:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:47.667 09:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:47.667 09:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:47.667 09:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:47.667 [2024-11-17 09:42:52.468963] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:47.667 09:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:47.667 09:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:42:47.667 09:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:42:47.667 09:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:42:47.667 09:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:42:47.667 09:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:47.667 09:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:47.667 bdev_null1 00:42:47.667 09:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:47.667 09:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:42:47.667 09:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:47.667 09:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:47.667 09:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:47.667 09:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:42:47.667 09:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:47.667 09:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:47.667 09:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:47.667 09:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:47.667 09:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:42:47.667 09:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:47.667 09:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:47.667 09:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:42:47.667 09:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:42:47.667 09:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:42:47.667 09:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:42:47.667 09:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:42:47.667 09:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:47.667 09:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:47.667 09:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:47.667 { 00:42:47.667 "params": { 00:42:47.667 "name": "Nvme$subsystem", 00:42:47.667 "trtype": "$TEST_TRANSPORT", 00:42:47.667 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:47.667 "adrfam": "ipv4", 00:42:47.667 "trsvcid": "$NVMF_PORT", 00:42:47.667 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:47.667 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:47.667 "hdgst": ${hdgst:-false}, 00:42:47.667 "ddgst": ${ddgst:-false} 00:42:47.667 }, 00:42:47.667 "method": "bdev_nvme_attach_controller" 00:42:47.667 } 00:42:47.667 EOF 00:42:47.667 )") 00:42:47.667 09:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:47.667 09:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:42:47.667 09:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:47.667 09:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:42:47.667 09:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:42:47.667 09:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:47.667 09:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:42:47.667 09:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:42:47.667 09:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:42:47.667 09:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:42:47.667 09:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:47.667 09:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:42:47.667 09:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:47.667 09:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:42:47.667 09:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 
-- # awk '{print $3}' 00:42:47.667 09:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:42:47.667 09:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:42:47.667 09:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:42:47.667 09:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:47.667 09:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:47.667 { 00:42:47.667 "params": { 00:42:47.667 "name": "Nvme$subsystem", 00:42:47.667 "trtype": "$TEST_TRANSPORT", 00:42:47.667 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:47.667 "adrfam": "ipv4", 00:42:47.667 "trsvcid": "$NVMF_PORT", 00:42:47.667 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:47.667 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:47.667 "hdgst": ${hdgst:-false}, 00:42:47.667 "ddgst": ${ddgst:-false} 00:42:47.667 }, 00:42:47.667 "method": "bdev_nvme_attach_controller" 00:42:47.667 } 00:42:47.667 EOF 00:42:47.667 )") 00:42:47.667 09:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:42:47.667 09:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:42:47.667 09:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:42:47.667 09:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:42:47.667 09:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:42:47.668 09:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:47.668 "params": { 00:42:47.668 "name": "Nvme0", 00:42:47.668 "trtype": "tcp", 00:42:47.668 "traddr": "10.0.0.2", 00:42:47.668 "adrfam": "ipv4", 00:42:47.668 "trsvcid": "4420", 00:42:47.668 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:47.668 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:47.668 "hdgst": false, 00:42:47.668 "ddgst": false 00:42:47.668 }, 00:42:47.668 "method": "bdev_nvme_attach_controller" 00:42:47.668 },{ 00:42:47.668 "params": { 00:42:47.668 "name": "Nvme1", 00:42:47.668 "trtype": "tcp", 00:42:47.668 "traddr": "10.0.0.2", 00:42:47.668 "adrfam": "ipv4", 00:42:47.668 "trsvcid": "4420", 00:42:47.668 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:47.668 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:47.668 "hdgst": false, 00:42:47.668 "ddgst": false 00:42:47.668 }, 00:42:47.668 "method": "bdev_nvme_attach_controller" 00:42:47.668 }' 00:42:47.668 09:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:42:47.668 09:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:42:47.668 09:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1351 -- # break 00:42:47.668 09:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:42:47.668 09:42:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:47.926 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:42:47.926 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:42:47.926 fio-3.35 00:42:47.926 
Starting 2 threads 00:43:00.119 00:43:00.119 filename0: (groupid=0, jobs=1): err= 0: pid=3209635: Sun Nov 17 09:43:04 2024 00:43:00.119 read: IOPS=193, BW=774KiB/s (792kB/s)(7760KiB/10029msec) 00:43:00.119 slat (nsec): min=5440, max=67083, avg=14799.30, stdev=6541.09 00:43:00.119 clat (usec): min=687, max=41850, avg=20632.86, stdev=20162.67 00:43:00.119 lat (usec): min=697, max=41872, avg=20647.66, stdev=20160.95 00:43:00.119 clat percentiles (usec): 00:43:00.119 | 1.00th=[ 734], 5.00th=[ 775], 10.00th=[ 791], 20.00th=[ 807], 00:43:00.119 | 30.00th=[ 832], 40.00th=[ 1205], 50.00th=[ 1369], 60.00th=[41157], 00:43:00.119 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:43:00.119 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:43:00.119 | 99.99th=[41681] 00:43:00.119 bw ( KiB/s): min= 768, max= 832, per=50.72%, avg=774.40, stdev=19.70, samples=20 00:43:00.119 iops : min= 192, max= 208, avg=193.60, stdev= 4.92, samples=20 00:43:00.119 lat (usec) : 750=1.70%, 1000=35.82% 00:43:00.119 lat (msec) : 2=13.61%, 50=48.87% 00:43:00.119 cpu : usr=97.10%, sys=2.43%, ctx=15, majf=0, minf=1636 00:43:00.119 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:00.119 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:00.119 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:00.119 issued rwts: total=1940,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:00.119 latency : target=0, window=0, percentile=100.00%, depth=4 00:43:00.119 filename1: (groupid=0, jobs=1): err= 0: pid=3209636: Sun Nov 17 09:43:04 2024 00:43:00.119 read: IOPS=188, BW=753KiB/s (771kB/s)(7552KiB/10034msec) 00:43:00.119 slat (nsec): min=5787, max=46351, avg=16236.13, stdev=6537.33 00:43:00.119 clat (usec): min=722, max=43036, avg=21208.97, stdev=20300.30 00:43:00.119 lat (usec): min=733, max=43063, avg=21225.20, stdev=20300.14 00:43:00.119 clat percentiles (usec): 00:43:00.119 | 1.00th=[ 766], 5.00th=[ 799], 10.00th=[ 832], 20.00th=[ 873], 00:43:00.119 | 30.00th=[ 914], 40.00th=[ 955], 50.00th=[ 1434], 60.00th=[41157], 00:43:00.119 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:43:00.119 | 99.00th=[42206], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:43:00.119 | 99.99th=[43254] 00:43:00.119 bw ( KiB/s): min= 672, max= 832, per=49.34%, avg=753.60, stdev=40.84, samples=20 00:43:00.119 iops : min= 168, max= 208, avg=188.40, stdev=10.21, samples=20 00:43:00.119 lat (usec) : 750=0.37%, 1000=45.07% 00:43:00.119 lat (msec) : 2=4.56%, 50=50.00% 00:43:00.119 cpu : usr=97.00%, sys=2.53%, ctx=14, majf=0, minf=1634 00:43:00.119 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:00.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:00.120 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:00.120 issued rwts: total=1888,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:00.120 latency : target=0, window=0, percentile=100.00%, depth=4 00:43:00.120 00:43:00.120 Run status group 0 (all jobs): 00:43:00.120 READ: bw=1526KiB/s (1563kB/s), 753KiB/s-774KiB/s (771kB/s-792kB/s), io=15.0MiB (15.7MB), run=10029-10034msec 00:43:00.378 ----------------------------------------------------- 00:43:00.378 Suppressions used: 00:43:00.378 count bytes template 00:43:00.378 2 16 /usr/src/fio/parse.c 00:43:00.378 1 8 libtcmalloc_minimal.so 00:43:00.378 1 904 libcrypto.so 00:43:00.378 ----------------------------------------------------- 
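
Each of these cases stands up its target with the same handful of RPCs traced earlier (bdev_null_create, nvmf_create_subsystem, nvmf_subsystem_add_ns, nvmf_subsystem_add_listener). For reference, the two-subsystem layout used by fio_dif_1_multi_subsystems can be reproduced against a running nvmf_tgt with scripts/rpc.py, assuming the TCP transport was already created as it was earlier in this run (rpc_cmd in the test scripts is essentially a wrapper around rpc.py):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  for sub in 0 1; do
      # 64 MiB null bdev, 512-byte blocks, 16 bytes of metadata, DIF type 1
      $rpc bdev_null_create bdev_null$sub 64 512 --md-size 16 --dif-type 1
      $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$sub \
          --serial-number 53313233-$sub --allow-any-host
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$sub bdev_null$sub
      $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$sub \
          -t tcp -a 10.0.0.2 -s 4420
  done

The destroy_subsystems teardown that follows in the trace is the mirror image: nvmf_delete_subsystem for each NQN, then bdev_null_delete for each null bdev.
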
00:43:00.378 00:43:00.378 09:43:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:43:00.378 09:43:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:43:00.378 09:43:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:43:00.378 09:43:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:43:00.378 09:43:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:43:00.378 09:43:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:43:00.378 09:43:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:00.378 09:43:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:00.378 09:43:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:00.378 09:43:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:43:00.378 09:43:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:00.378 09:43:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:00.378 09:43:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:00.378 09:43:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:43:00.378 09:43:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:43:00.378 09:43:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:43:00.378 09:43:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:43:00.378 09:43:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:00.378 09:43:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:00.378 09:43:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:00.378 09:43:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:43:00.378 09:43:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:00.378 09:43:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:00.378 09:43:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:00.378 00:43:00.378 real 0m12.904s 00:43:00.378 user 0m22.162s 00:43:00.378 sys 0m1.011s 00:43:00.378 09:43:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:00.378 09:43:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:00.378 ************************************ 00:43:00.378 END TEST fio_dif_1_multi_subsystems 00:43:00.378 ************************************ 00:43:00.378 09:43:05 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:43:00.378 09:43:05 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:43:00.378 09:43:05 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:00.378 09:43:05 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:00.637 ************************************ 00:43:00.637 START TEST fio_dif_rand_params 00:43:00.637 
************************************ 00:43:00.637 09:43:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:43:00.637 09:43:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:43:00.637 09:43:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:43:00.637 09:43:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:43:00.637 09:43:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:43:00.637 09:43:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:43:00.637 09:43:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:43:00.637 09:43:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:43:00.637 09:43:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:43:00.637 09:43:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:43:00.637 09:43:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:43:00.637 09:43:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:43:00.637 09:43:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:43:00.637 09:43:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:43:00.637 09:43:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:00.637 09:43:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:00.637 bdev_null0 00:43:00.637 09:43:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:00.637 09:43:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:43:00.637 09:43:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:00.637 09:43:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:00.637 09:43:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:00.637 09:43:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:43:00.637 09:43:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:00.637 09:43:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:00.637 09:43:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:00.637 09:43:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:43:00.637 09:43:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:00.637 09:43:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:00.637 [2024-11-17 09:43:05.422590] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:00.637 09:43:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:00.637 09:43:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:43:00.637 09:43:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:43:00.637 09:43:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:43:00.637 
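
gen_nvmf_target_json emits the JSON bdev configuration that fio's spdk_bdev engine consumes through --spdk_json_conf. The per-controller fragment it builds for this run is printed a few lines below; the complete document is roughly of the following shape (the outer wrapper is reconstructed from the nvmf/common.sh helper rather than shown verbatim in this log, and recent SPDK versions may also append a bdev_wait_for_examine entry to the config array):

  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "params": {
              "name": "Nvme0",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode0",
              "hostnqn": "nqn.2016-06.io.spdk:host0",
              "hdgst": false,
              "ddgst": false
            },
            "method": "bdev_nvme_attach_controller"
          }
        ]
      }
    ]
  }
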
09:43:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:00.637 09:43:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:00.637 09:43:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:43:00.637 09:43:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:43:00.637 09:43:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:43:00.637 09:43:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:43:00.637 09:43:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:00.637 09:43:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:00.637 09:43:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:43:00.637 09:43:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:43:00.637 09:43:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:00.637 { 00:43:00.637 "params": { 00:43:00.637 "name": "Nvme$subsystem", 00:43:00.637 "trtype": "$TEST_TRANSPORT", 00:43:00.637 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:00.637 "adrfam": "ipv4", 00:43:00.637 "trsvcid": "$NVMF_PORT", 00:43:00.637 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:00.637 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:00.637 "hdgst": ${hdgst:-false}, 00:43:00.637 "ddgst": ${ddgst:-false} 00:43:00.637 }, 00:43:00.637 "method": "bdev_nvme_attach_controller" 00:43:00.637 } 00:43:00.637 EOF 00:43:00.637 )") 00:43:00.637 09:43:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:00.637 09:43:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:43:00.637 09:43:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:43:00.637 09:43:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:43:00.637 09:43:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:00.637 09:43:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:43:00.638 09:43:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:00.638 09:43:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:43:00.638 09:43:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:00.638 09:43:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:43:00.638 09:43:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:43:00.638 09:43:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
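
The backing device for this case was created earlier in the test with bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3, i.e. a 64 MiB null bdev whose 512-byte blocks each carry 16 bytes of metadata formatted for DIF protection type 3; that per-block protection information is what the dif tests exercise end to end over NVMe/TCP. The resulting bdev parameters can be inspected with a sketch like the following (look for the metadata-size and DIF-type fields in the output; exact field names may vary between SPDK versions):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc bdev_get_bdevs -b bdev_null0 | jq .
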
00:43:00.638 09:43:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:43:00.638 09:43:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:43:00.638 "params": { 00:43:00.638 "name": "Nvme0", 00:43:00.638 "trtype": "tcp", 00:43:00.638 "traddr": "10.0.0.2", 00:43:00.638 "adrfam": "ipv4", 00:43:00.638 "trsvcid": "4420", 00:43:00.638 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:00.638 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:00.638 "hdgst": false, 00:43:00.638 "ddgst": false 00:43:00.638 }, 00:43:00.638 "method": "bdev_nvme_attach_controller" 00:43:00.638 }' 00:43:00.638 09:43:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:43:00.638 09:43:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:43:00.638 09:43:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # break 00:43:00.638 09:43:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:43:00.638 09:43:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:00.897 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:43:00.897 ... 00:43:00.897 fio-3.35 00:43:00.897 Starting 3 threads 00:43:07.455 00:43:07.455 filename0: (groupid=0, jobs=1): err= 0: pid=3211118: Sun Nov 17 09:43:11 2024 00:43:07.455 read: IOPS=171, BW=21.4MiB/s (22.4MB/s)(107MiB/5005msec) 00:43:07.455 slat (nsec): min=7040, max=60875, avg=24570.61, stdev=7128.82 00:43:07.455 clat (usec): min=6133, max=58654, avg=17504.84, stdev=6704.63 00:43:07.456 lat (usec): min=6153, max=58682, avg=17529.41, stdev=6703.70 00:43:07.456 clat percentiles (usec): 00:43:07.456 | 1.00th=[ 6390], 5.00th=[12518], 10.00th=[13698], 20.00th=[14615], 00:43:07.456 | 30.00th=[15270], 40.00th=[15926], 50.00th=[16712], 60.00th=[17433], 00:43:07.456 | 70.00th=[18220], 80.00th=[19006], 90.00th=[20317], 95.00th=[21365], 00:43:07.456 | 99.00th=[53216], 99.50th=[54789], 99.90th=[58459], 99.95th=[58459], 00:43:07.456 | 99.99th=[58459] 00:43:07.456 bw ( KiB/s): min=16929, max=23808, per=33.33%, avg=21865.70, stdev=1974.53, samples=10 00:43:07.456 iops : min= 132, max= 186, avg=170.80, stdev=15.50, samples=10 00:43:07.456 lat (msec) : 10=3.50%, 20=85.28%, 50=9.11%, 100=2.10% 00:43:07.456 cpu : usr=88.61%, sys=8.05%, ctx=127, majf=0, minf=1634 00:43:07.456 IO depths : 1=2.5%, 2=97.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:07.456 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:07.456 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:07.456 issued rwts: total=856,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:07.456 latency : target=0, window=0, percentile=100.00%, depth=3 00:43:07.456 filename0: (groupid=0, jobs=1): err= 0: pid=3211119: Sun Nov 17 09:43:11 2024 00:43:07.456 read: IOPS=180, BW=22.5MiB/s (23.6MB/s)(114MiB/5048msec) 00:43:07.456 slat (nsec): min=7526, max=50137, avg=20985.70, stdev=5323.01 00:43:07.456 clat (usec): min=5689, max=56450, avg=16584.18, stdev=6112.63 00:43:07.456 lat (usec): min=5700, max=56470, avg=16605.17, stdev=6112.85 00:43:07.456 clat percentiles (usec): 00:43:07.456 | 1.00th=[ 6259], 5.00th=[11863], 10.00th=[13304], 20.00th=[14091], 00:43:07.456 | 
30.00th=[14746], 40.00th=[15270], 50.00th=[15795], 60.00th=[16450], 00:43:07.456 | 70.00th=[17171], 80.00th=[17957], 90.00th=[19268], 95.00th=[20841], 00:43:07.456 | 99.00th=[49546], 99.50th=[55313], 99.90th=[56361], 99.95th=[56361], 00:43:07.456 | 99.99th=[56361] 00:43:07.456 bw ( KiB/s): min=17664, max=25856, per=35.40%, avg=23219.20, stdev=2205.66, samples=10 00:43:07.456 iops : min= 138, max= 202, avg=181.40, stdev=17.23, samples=10 00:43:07.456 lat (msec) : 10=4.62%, 20=87.90%, 50=6.49%, 100=0.99% 00:43:07.456 cpu : usr=92.89%, sys=6.58%, ctx=8, majf=0, minf=1634 00:43:07.456 IO depths : 1=1.4%, 2=98.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:07.456 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:07.456 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:07.456 issued rwts: total=909,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:07.456 latency : target=0, window=0, percentile=100.00%, depth=3 00:43:07.456 filename0: (groupid=0, jobs=1): err= 0: pid=3211120: Sun Nov 17 09:43:11 2024 00:43:07.456 read: IOPS=162, BW=20.4MiB/s (21.3MB/s)(103MiB/5048msec) 00:43:07.456 slat (nsec): min=6892, max=46498, avg=20251.03, stdev=5135.51 00:43:07.456 clat (usec): min=6394, max=52752, avg=18343.99, stdev=6005.01 00:43:07.456 lat (usec): min=6405, max=52775, avg=18364.24, stdev=6004.92 00:43:07.456 clat percentiles (usec): 00:43:07.456 | 1.00th=[ 7832], 5.00th=[11863], 10.00th=[14091], 20.00th=[15533], 00:43:07.456 | 30.00th=[16450], 40.00th=[17171], 50.00th=[17695], 60.00th=[18482], 00:43:07.456 | 70.00th=[19268], 80.00th=[20055], 90.00th=[21103], 95.00th=[22152], 00:43:07.456 | 99.00th=[50070], 99.50th=[51643], 99.90th=[52691], 99.95th=[52691], 00:43:07.456 | 99.99th=[52691] 00:43:07.456 bw ( KiB/s): min=15104, max=24832, per=31.96%, avg=20966.40, stdev=2438.95, samples=10 00:43:07.456 iops : min= 118, max= 194, avg=163.80, stdev=19.05, samples=10 00:43:07.456 lat (msec) : 10=2.07%, 20=77.98%, 50=18.61%, 100=1.34% 00:43:07.456 cpu : usr=92.91%, sys=6.56%, ctx=9, majf=0, minf=1635 00:43:07.456 IO depths : 1=1.7%, 2=98.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:07.456 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:07.456 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:07.456 issued rwts: total=822,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:07.456 latency : target=0, window=0, percentile=100.00%, depth=3 00:43:07.456 00:43:07.456 Run status group 0 (all jobs): 00:43:07.456 READ: bw=64.1MiB/s (67.2MB/s), 20.4MiB/s-22.5MiB/s (21.3MB/s-23.6MB/s), io=323MiB (339MB), run=5005-5048msec 00:43:07.714 ----------------------------------------------------- 00:43:07.714 Suppressions used: 00:43:07.714 count bytes template 00:43:07.714 5 44 /usr/src/fio/parse.c 00:43:07.714 1 8 libtcmalloc_minimal.so 00:43:07.714 1 904 libcrypto.so 00:43:07.714 ----------------------------------------------------- 00:43:07.714 00:43:07.714 09:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:43:07.714 09:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:43:07.714 09:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:43:07.714 09:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:43:07.714 09:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:43:07.714 09:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:43:07.714 09:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:07.714 09:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:07.714 09:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:07.714 09:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:43:07.714 09:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:07.714 09:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:07.714 09:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:07.714 09:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:43:07.714 09:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:43:07.714 09:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:43:07.714 09:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:43:07.714 09:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:43:07.714 09:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:43:07.714 09:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:43:07.714 09:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:43:07.714 09:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:43:07.714 09:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:43:07.714 09:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:43:07.714 09:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:43:07.714 09:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:07.714 09:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:07.714 bdev_null0 00:43:07.714 09:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:07.714 09:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:43:07.714 09:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:07.714 09:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:07.714 09:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:07.714 09:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:43:07.714 09:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:07.714 09:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:07.714 09:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:07.714 09:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:43:07.714 09:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:07.714 09:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:07.714 [2024-11-17 09:43:12.724261] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:07.972 09:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:07.972 09:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:43:07.972 09:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:43:07.972 09:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:43:07.972 09:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:43:07.972 09:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:07.972 09:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:07.972 bdev_null1 00:43:07.972 09:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:07.972 09:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:43:07.972 09:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:07.972 09:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:07.972 09:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:07.972 09:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:43:07.972 09:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:07.972 09:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:07.972 09:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:07.972 09:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:07.972 09:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:07.972 09:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:07.972 09:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:07.972 09:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:43:07.972 09:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:43:07.972 09:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:43:07.972 09:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:43:07.972 09:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:07.972 09:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:07.972 bdev_null2 00:43:07.972 09:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:07.972 09:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:43:07.972 09:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:07.972 09:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:07.972 09:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:07.972 09:43:12 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:43:07.972 09:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:07.972 09:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:07.972 09:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:07.972 09:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:43:07.972 09:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:07.972 09:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:07.972 09:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:07.972 09:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:43:07.972 09:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:07.972 09:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:07.972 09:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:43:07.972 09:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:43:07.972 09:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:43:07.972 09:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:07.972 09:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:43:07.972 09:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:43:07.972 09:43:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:43:07.972 09:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:07.972 09:43:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:43:07.972 09:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:43:07.972 09:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:43:07.972 09:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:43:07.972 09:43:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:07.972 09:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:43:07.972 09:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:07.972 09:43:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:07.972 { 00:43:07.972 "params": { 00:43:07.972 "name": "Nvme$subsystem", 00:43:07.972 "trtype": "$TEST_TRANSPORT", 00:43:07.972 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:07.972 "adrfam": "ipv4", 00:43:07.972 "trsvcid": "$NVMF_PORT", 00:43:07.972 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:07.972 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:07.972 "hdgst": ${hdgst:-false}, 00:43:07.972 "ddgst": ${ddgst:-false} 00:43:07.972 }, 00:43:07.972 "method": 
"bdev_nvme_attach_controller" 00:43:07.972 } 00:43:07.972 EOF 00:43:07.972 )") 00:43:07.972 09:43:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:43:07.972 09:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:07.972 09:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:43:07.972 09:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:07.972 09:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:43:07.972 09:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:43:07.972 09:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:43:07.972 09:43:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:07.972 09:43:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:07.972 { 00:43:07.972 "params": { 00:43:07.972 "name": "Nvme$subsystem", 00:43:07.972 "trtype": "$TEST_TRANSPORT", 00:43:07.972 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:07.972 "adrfam": "ipv4", 00:43:07.972 "trsvcid": "$NVMF_PORT", 00:43:07.972 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:07.972 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:07.972 "hdgst": ${hdgst:-false}, 00:43:07.972 "ddgst": ${ddgst:-false} 00:43:07.972 }, 00:43:07.972 "method": "bdev_nvme_attach_controller" 00:43:07.972 } 00:43:07.972 EOF 00:43:07.972 )") 00:43:07.972 09:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:43:07.972 09:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:43:07.972 09:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:43:07.972 09:43:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:43:07.972 09:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:43:07.972 09:43:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:43:07.972 09:43:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:07.972 09:43:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:07.972 { 00:43:07.972 "params": { 00:43:07.972 "name": "Nvme$subsystem", 00:43:07.972 "trtype": "$TEST_TRANSPORT", 00:43:07.972 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:07.972 "adrfam": "ipv4", 00:43:07.972 "trsvcid": "$NVMF_PORT", 00:43:07.972 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:07.972 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:07.972 "hdgst": ${hdgst:-false}, 00:43:07.972 "ddgst": ${ddgst:-false} 00:43:07.972 }, 00:43:07.972 "method": "bdev_nvme_attach_controller" 00:43:07.972 } 00:43:07.972 EOF 00:43:07.972 )") 00:43:07.972 09:43:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:43:07.972 09:43:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:43:07.973 09:43:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:43:07.973 09:43:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:43:07.973 "params": { 00:43:07.973 "name": "Nvme0", 00:43:07.973 "trtype": "tcp", 00:43:07.973 "traddr": "10.0.0.2", 00:43:07.973 "adrfam": "ipv4", 00:43:07.973 "trsvcid": "4420", 00:43:07.973 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:07.973 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:07.973 "hdgst": false, 00:43:07.973 "ddgst": false 00:43:07.973 }, 00:43:07.973 "method": "bdev_nvme_attach_controller" 00:43:07.973 },{ 00:43:07.973 "params": { 00:43:07.973 "name": "Nvme1", 00:43:07.973 "trtype": "tcp", 00:43:07.973 "traddr": "10.0.0.2", 00:43:07.973 "adrfam": "ipv4", 00:43:07.973 "trsvcid": "4420", 00:43:07.973 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:43:07.973 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:43:07.973 "hdgst": false, 00:43:07.973 "ddgst": false 00:43:07.973 }, 00:43:07.973 "method": "bdev_nvme_attach_controller" 00:43:07.973 },{ 00:43:07.973 "params": { 00:43:07.973 "name": "Nvme2", 00:43:07.973 "trtype": "tcp", 00:43:07.973 "traddr": "10.0.0.2", 00:43:07.973 "adrfam": "ipv4", 00:43:07.973 "trsvcid": "4420", 00:43:07.973 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:43:07.973 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:43:07.973 "hdgst": false, 00:43:07.973 "ddgst": false 00:43:07.973 }, 00:43:07.973 "method": "bdev_nvme_attach_controller" 00:43:07.973 }' 00:43:07.973 09:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:43:07.973 09:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:43:07.973 09:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # break 00:43:07.973 09:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:43:07.973 09:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:08.231 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:43:08.231 ... 00:43:08.231 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:43:08.231 ... 00:43:08.231 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:43:08.231 ... 
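
The three filename sections just listed come from gen_fio_conf: one [filenameN] section per attached controller, each fanned out numjobs=8 ways, which is why fio reports 24 threads below (thread mode is inferred from fio saying "threads" rather than "processes"). A purely illustrative reconstruction of that job file, using only the values visible in this log (rw, bs, iodepth, numjobs) and assuming the filename= targets are the namespaces of the attached Nvme0..Nvme2 controllers:

  [global]
  thread=1
  ioengine=spdk_bdev
  rw=randread
  bs=4k
  iodepth=16
  numjobs=8

  [filename0]
  filename=Nvme0n1

  [filename1]
  filename=Nvme1n1

  [filename2]
  filename=Nvme2n1

The authoritative generator is gen_fio_conf in target/dif.sh; the sketch above only mirrors what the banner and the per-job results report.
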
00:43:08.231 fio-3.35 00:43:08.231 Starting 24 threads 00:43:20.518 00:43:20.518 filename0: (groupid=0, jobs=1): err= 0: pid=3212074: Sun Nov 17 09:43:24 2024 00:43:20.518 read: IOPS=355, BW=1422KiB/s (1457kB/s)(13.9MiB/10034msec) 00:43:20.518 slat (nsec): min=8964, max=95932, avg=28906.87, stdev=14249.00 00:43:20.518 clat (usec): min=16476, max=60416, avg=44738.33, stdev=2585.69 00:43:20.518 lat (usec): min=16503, max=60462, avg=44767.23, stdev=2585.99 00:43:20.518 clat percentiles (usec): 00:43:20.518 | 1.00th=[30540], 5.00th=[44303], 10.00th=[44303], 20.00th=[44303], 00:43:20.518 | 30.00th=[44303], 40.00th=[44827], 50.00th=[44827], 60.00th=[44827], 00:43:20.518 | 70.00th=[45351], 80.00th=[45351], 90.00th=[45351], 95.00th=[45876], 00:43:20.518 | 99.00th=[46924], 99.50th=[46924], 99.90th=[60556], 99.95th=[60556], 00:43:20.518 | 99.99th=[60556] 00:43:20.518 bw ( KiB/s): min= 1408, max= 1536, per=4.21%, avg=1420.80, stdev=39.40, samples=20 00:43:20.518 iops : min= 352, max= 384, avg=355.20, stdev= 9.85, samples=20 00:43:20.518 lat (msec) : 20=0.39%, 50=99.16%, 100=0.45% 00:43:20.518 cpu : usr=96.62%, sys=2.12%, ctx=246, majf=0, minf=1632 00:43:20.518 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:43:20.518 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:20.518 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:20.518 issued rwts: total=3568,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:20.518 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:20.518 filename0: (groupid=0, jobs=1): err= 0: pid=3212075: Sun Nov 17 09:43:24 2024 00:43:20.518 read: IOPS=352, BW=1411KiB/s (1445kB/s)(13.8MiB/10023msec) 00:43:20.518 slat (usec): min=7, max=111, avg=60.33, stdev= 9.00 00:43:20.518 clat (msec): min=25, max=102, avg=44.81, stdev= 4.15 00:43:20.518 lat (msec): min=25, max=102, avg=44.87, stdev= 4.15 00:43:20.518 clat percentiles (msec): 00:43:20.518 | 1.00th=[ 44], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 45], 00:43:20.518 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 45], 00:43:20.518 | 70.00th=[ 45], 80.00th=[ 46], 90.00th=[ 46], 95.00th=[ 46], 00:43:20.518 | 99.00th=[ 47], 99.50th=[ 61], 99.90th=[ 103], 99.95th=[ 103], 00:43:20.518 | 99.99th=[ 103] 00:43:20.518 bw ( KiB/s): min= 1154, max= 1536, per=4.18%, avg=1408.11, stdev=73.52, samples=19 00:43:20.518 iops : min= 288, max= 384, avg=352.00, stdev=18.48, samples=19 00:43:20.518 lat (msec) : 50=99.49%, 100=0.06%, 250=0.45% 00:43:20.518 cpu : usr=96.12%, sys=2.30%, ctx=54, majf=0, minf=1635 00:43:20.518 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:43:20.518 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:20.518 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:20.518 issued rwts: total=3536,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:20.518 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:20.518 filename0: (groupid=0, jobs=1): err= 0: pid=3212076: Sun Nov 17 09:43:24 2024 00:43:20.518 read: IOPS=353, BW=1413KiB/s (1446kB/s)(13.8MiB/10013msec) 00:43:20.518 slat (usec): min=13, max=112, avg=61.50, stdev= 9.10 00:43:20.518 clat (usec): min=39759, max=77892, avg=44749.46, stdev=2342.05 00:43:20.518 lat (usec): min=39798, max=77928, avg=44810.96, stdev=2339.43 00:43:20.518 clat percentiles (usec): 00:43:20.518 | 1.00th=[43254], 5.00th=[43779], 10.00th=[43779], 20.00th=[44303], 00:43:20.518 | 30.00th=[44303], 40.00th=[44303], 50.00th=[44303], 
60.00th=[44827], 00:43:20.518 | 70.00th=[44827], 80.00th=[45351], 90.00th=[45351], 95.00th=[45876], 00:43:20.518 | 99.00th=[46924], 99.50th=[47449], 99.90th=[78119], 99.95th=[78119], 00:43:20.518 | 99.99th=[78119] 00:43:20.518 bw ( KiB/s): min= 1280, max= 1536, per=4.18%, avg=1408.00, stdev=71.93, samples=20 00:43:20.518 iops : min= 320, max= 384, avg=352.00, stdev=17.98, samples=20 00:43:20.518 lat (msec) : 50=99.55%, 100=0.45% 00:43:20.518 cpu : usr=94.98%, sys=2.85%, ctx=209, majf=0, minf=1633 00:43:20.518 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:43:20.518 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:20.518 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:20.518 issued rwts: total=3536,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:20.518 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:20.518 filename0: (groupid=0, jobs=1): err= 0: pid=3212077: Sun Nov 17 09:43:24 2024 00:43:20.518 read: IOPS=352, BW=1412KiB/s (1445kB/s)(13.8MiB/10020msec) 00:43:20.518 slat (nsec): min=6734, max=76180, avg=22270.55, stdev=9328.69 00:43:20.518 clat (msec): min=21, max=128, avg=45.15, stdev= 5.31 00:43:20.518 lat (msec): min=21, max=128, avg=45.17, stdev= 5.31 00:43:20.518 clat percentiles (msec): 00:43:20.518 | 1.00th=[ 29], 5.00th=[ 45], 10.00th=[ 45], 20.00th=[ 45], 00:43:20.518 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 46], 00:43:20.518 | 70.00th=[ 46], 80.00th=[ 46], 90.00th=[ 46], 95.00th=[ 47], 00:43:20.518 | 99.00th=[ 61], 99.50th=[ 62], 99.90th=[ 112], 99.95th=[ 129], 00:43:20.518 | 99.99th=[ 129] 00:43:20.518 bw ( KiB/s): min= 1152, max= 1536, per=4.15%, avg=1401.26, stdev=67.32, samples=19 00:43:20.518 iops : min= 288, max= 384, avg=350.32, stdev=16.83, samples=19 00:43:20.518 lat (msec) : 50=98.81%, 100=0.74%, 250=0.45% 00:43:20.518 cpu : usr=98.27%, sys=1.25%, ctx=15, majf=0, minf=1633 00:43:20.518 IO depths : 1=5.0%, 2=11.2%, 4=25.0%, 8=51.3%, 16=7.5%, 32=0.0%, >=64=0.0% 00:43:20.518 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:20.518 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:20.518 issued rwts: total=3536,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:20.518 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:20.518 filename0: (groupid=0, jobs=1): err= 0: pid=3212078: Sun Nov 17 09:43:24 2024 00:43:20.518 read: IOPS=352, BW=1410KiB/s (1444kB/s)(13.8MiB/10032msec) 00:43:20.518 slat (usec): min=6, max=121, avg=63.66, stdev=10.55 00:43:20.518 clat (usec): min=27141, max=99072, avg=44775.29, stdev=3872.64 00:43:20.518 lat (usec): min=27192, max=99090, avg=44838.95, stdev=3869.63 00:43:20.518 clat percentiles (usec): 00:43:20.518 | 1.00th=[43254], 5.00th=[43779], 10.00th=[43779], 20.00th=[43779], 00:43:20.518 | 30.00th=[44303], 40.00th=[44303], 50.00th=[44303], 60.00th=[44827], 00:43:20.518 | 70.00th=[44827], 80.00th=[45351], 90.00th=[45351], 95.00th=[45876], 00:43:20.518 | 99.00th=[46924], 99.50th=[46924], 99.90th=[99091], 99.95th=[99091], 00:43:20.518 | 99.99th=[99091] 00:43:20.518 bw ( KiB/s): min= 1152, max= 1536, per=4.17%, avg=1407.40, stdev=71.98, samples=20 00:43:20.518 iops : min= 288, max= 384, avg=351.85, stdev=17.99, samples=20 00:43:20.518 lat (msec) : 50=99.55%, 100=0.45% 00:43:20.518 cpu : usr=95.27%, sys=2.81%, ctx=167, majf=0, minf=1633 00:43:20.518 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:43:20.518 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:20.518 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:20.518 issued rwts: total=3536,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:20.518 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:20.518 filename0: (groupid=0, jobs=1): err= 0: pid=3212079: Sun Nov 17 09:43:24 2024 00:43:20.518 read: IOPS=352, BW=1412KiB/s (1446kB/s)(13.8MiB/10019msec) 00:43:20.518 slat (usec): min=11, max=150, avg=45.52, stdev=20.70 00:43:20.519 clat (msec): min=21, max=110, avg=44.92, stdev= 4.80 00:43:20.519 lat (msec): min=21, max=110, avg=44.96, stdev= 4.80 00:43:20.519 clat percentiles (msec): 00:43:20.519 | 1.00th=[ 44], 5.00th=[ 45], 10.00th=[ 45], 20.00th=[ 45], 00:43:20.519 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 45], 00:43:20.519 | 70.00th=[ 45], 80.00th=[ 46], 90.00th=[ 46], 95.00th=[ 46], 00:43:20.519 | 99.00th=[ 47], 99.50th=[ 61], 99.90th=[ 111], 99.95th=[ 111], 00:43:20.519 | 99.99th=[ 111] 00:43:20.519 bw ( KiB/s): min= 1154, max= 1536, per=4.15%, avg=1401.37, stdev=66.69, samples=19 00:43:20.519 iops : min= 288, max= 384, avg=350.32, stdev=16.78, samples=19 00:43:20.519 lat (msec) : 50=99.49%, 100=0.06%, 250=0.45% 00:43:20.519 cpu : usr=95.54%, sys=2.66%, ctx=603, majf=0, minf=1635 00:43:20.519 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:43:20.519 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:20.519 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:20.519 issued rwts: total=3536,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:20.519 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:20.519 filename0: (groupid=0, jobs=1): err= 0: pid=3212080: Sun Nov 17 09:43:24 2024 00:43:20.519 read: IOPS=352, BW=1412KiB/s (1446kB/s)(13.8MiB/10018msec) 00:43:20.519 slat (usec): min=6, max=100, avg=57.32, stdev=12.36 00:43:20.519 clat (msec): min=27, max=105, avg=44.81, stdev= 3.64 00:43:20.519 lat (msec): min=27, max=105, avg=44.87, stdev= 3.64 00:43:20.519 clat percentiles (msec): 00:43:20.519 | 1.00th=[ 44], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 45], 00:43:20.519 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 45], 00:43:20.519 | 70.00th=[ 45], 80.00th=[ 46], 90.00th=[ 46], 95.00th=[ 46], 00:43:20.519 | 99.00th=[ 47], 99.50th=[ 47], 99.90th=[ 94], 99.95th=[ 106], 00:43:20.519 | 99.99th=[ 106] 00:43:20.519 bw ( KiB/s): min= 1152, max= 1539, per=4.18%, avg=1408.25, stdev=72.21, samples=20 00:43:20.519 iops : min= 288, max= 384, avg=352.00, stdev=17.98, samples=20 00:43:20.519 lat (msec) : 50=99.55%, 100=0.40%, 250=0.06% 00:43:20.519 cpu : usr=95.71%, sys=2.55%, ctx=198, majf=0, minf=1633 00:43:20.519 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:43:20.519 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:20.519 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:20.519 issued rwts: total=3536,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:20.519 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:20.519 filename0: (groupid=0, jobs=1): err= 0: pid=3212081: Sun Nov 17 09:43:24 2024 00:43:20.519 read: IOPS=351, BW=1407KiB/s (1441kB/s)(13.8MiB/10004msec) 00:43:20.519 slat (nsec): min=14416, max=95200, avg=46815.21, stdev=17261.89 00:43:20.519 clat (msec): min=27, max=126, avg=45.04, stdev= 5.62 00:43:20.519 lat (msec): min=27, max=126, avg=45.08, stdev= 5.62 00:43:20.519 clat percentiles (msec): 00:43:20.519 | 
1.00th=[ 44], 5.00th=[ 45], 10.00th=[ 45], 20.00th=[ 45], 00:43:20.519 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 45], 00:43:20.519 | 70.00th=[ 45], 80.00th=[ 46], 90.00th=[ 46], 95.00th=[ 46], 00:43:20.519 | 99.00th=[ 47], 99.50th=[ 47], 99.90th=[ 127], 99.95th=[ 127], 00:43:20.519 | 99.99th=[ 127] 00:43:20.519 bw ( KiB/s): min= 1026, max= 1536, per=4.15%, avg=1401.37, stdev=99.40, samples=19 00:43:20.519 iops : min= 256, max= 384, avg=350.32, stdev=24.96, samples=19 00:43:20.519 lat (msec) : 50=99.55%, 250=0.45% 00:43:20.519 cpu : usr=97.35%, sys=1.84%, ctx=54, majf=0, minf=1633 00:43:20.519 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:43:20.519 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:20.519 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:20.519 issued rwts: total=3520,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:20.519 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:20.519 filename1: (groupid=0, jobs=1): err= 0: pid=3212082: Sun Nov 17 09:43:24 2024 00:43:20.519 read: IOPS=355, BW=1422KiB/s (1456kB/s)(13.9MiB/10040msec) 00:43:20.519 slat (nsec): min=6853, max=99095, avg=43404.74, stdev=16006.63 00:43:20.519 clat (usec): min=16695, max=60198, avg=44641.79, stdev=2558.14 00:43:20.519 lat (usec): min=16714, max=60234, avg=44685.20, stdev=2560.69 00:43:20.519 clat percentiles (usec): 00:43:20.519 | 1.00th=[41157], 5.00th=[43779], 10.00th=[44303], 20.00th=[44303], 00:43:20.519 | 30.00th=[44303], 40.00th=[44827], 50.00th=[44827], 60.00th=[44827], 00:43:20.519 | 70.00th=[44827], 80.00th=[45351], 90.00th=[45351], 95.00th=[45876], 00:43:20.519 | 99.00th=[46924], 99.50th=[57934], 99.90th=[60031], 99.95th=[60031], 00:43:20.519 | 99.99th=[60031] 00:43:20.519 bw ( KiB/s): min= 1408, max= 1536, per=4.21%, avg=1420.80, stdev=39.40, samples=20 00:43:20.519 iops : min= 352, max= 384, avg=355.20, stdev= 9.85, samples=20 00:43:20.519 lat (msec) : 20=0.45%, 50=99.05%, 100=0.50% 00:43:20.519 cpu : usr=98.07%, sys=1.43%, ctx=19, majf=0, minf=1632 00:43:20.519 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:43:20.519 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:20.519 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:20.519 issued rwts: total=3568,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:20.519 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:20.519 filename1: (groupid=0, jobs=1): err= 0: pid=3212083: Sun Nov 17 09:43:24 2024 00:43:20.519 read: IOPS=352, BW=1412KiB/s (1446kB/s)(13.8MiB/10019msec) 00:43:20.519 slat (usec): min=12, max=102, avg=60.24, stdev= 9.39 00:43:20.519 clat (msec): min=21, max=110, avg=44.79, stdev= 4.82 00:43:20.519 lat (msec): min=21, max=110, avg=44.85, stdev= 4.81 00:43:20.519 clat percentiles (msec): 00:43:20.519 | 1.00th=[ 44], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 45], 00:43:20.519 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 45], 00:43:20.519 | 70.00th=[ 45], 80.00th=[ 45], 90.00th=[ 46], 95.00th=[ 46], 00:43:20.519 | 99.00th=[ 47], 99.50th=[ 61], 99.90th=[ 111], 99.95th=[ 111], 00:43:20.519 | 99.99th=[ 111] 00:43:20.519 bw ( KiB/s): min= 1154, max= 1536, per=4.15%, avg=1401.37, stdev=66.69, samples=19 00:43:20.519 iops : min= 288, max= 384, avg=350.32, stdev=16.78, samples=19 00:43:20.519 lat (msec) : 50=99.49%, 100=0.06%, 250=0.45% 00:43:20.519 cpu : usr=97.57%, sys=1.73%, ctx=71, majf=0, minf=1633 00:43:20.519 IO depths : 1=6.2%, 
2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:43:20.519 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:20.519 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:20.519 issued rwts: total=3536,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:20.519 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:20.519 filename1: (groupid=0, jobs=1): err= 0: pid=3212084: Sun Nov 17 09:43:24 2024 00:43:20.519 read: IOPS=354, BW=1416KiB/s (1450kB/s)(13.9MiB/10031msec) 00:43:20.519 slat (nsec): min=5326, max=74463, avg=31914.51, stdev=8436.97 00:43:20.519 clat (usec): min=17749, max=91395, avg=44905.52, stdev=1676.04 00:43:20.519 lat (usec): min=17782, max=91415, avg=44937.44, stdev=1675.31 00:43:20.519 clat percentiles (usec): 00:43:20.519 | 1.00th=[42730], 5.00th=[44303], 10.00th=[44303], 20.00th=[44303], 00:43:20.519 | 30.00th=[44303], 40.00th=[44827], 50.00th=[44827], 60.00th=[44827], 00:43:20.519 | 70.00th=[45351], 80.00th=[45351], 90.00th=[45876], 95.00th=[46400], 00:43:20.519 | 99.00th=[46924], 99.50th=[46924], 99.90th=[55837], 99.95th=[91751], 00:43:20.519 | 99.99th=[91751] 00:43:20.519 bw ( KiB/s): min= 1280, max= 1536, per=4.19%, avg=1414.40, stdev=50.44, samples=20 00:43:20.519 iops : min= 320, max= 384, avg=353.60, stdev=12.61, samples=20 00:43:20.519 lat (msec) : 20=0.06%, 50=99.49%, 100=0.45% 00:43:20.519 cpu : usr=98.25%, sys=1.29%, ctx=28, majf=0, minf=1634 00:43:20.519 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:43:20.519 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:20.519 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:20.519 issued rwts: total=3552,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:20.519 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:20.520 filename1: (groupid=0, jobs=1): err= 0: pid=3212085: Sun Nov 17 09:43:24 2024 00:43:20.520 read: IOPS=351, BW=1407KiB/s (1441kB/s)(13.8MiB/10007msec) 00:43:20.520 slat (usec): min=11, max=119, avg=61.76, stdev=10.40 00:43:20.520 clat (msec): min=28, max=119, avg=44.92, stdev= 5.18 00:43:20.520 lat (msec): min=28, max=119, avg=44.98, stdev= 5.17 00:43:20.520 clat percentiles (msec): 00:43:20.520 | 1.00th=[ 44], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 45], 00:43:20.520 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 45], 00:43:20.520 | 70.00th=[ 45], 80.00th=[ 45], 90.00th=[ 46], 95.00th=[ 46], 00:43:20.520 | 99.00th=[ 47], 99.50th=[ 60], 99.90th=[ 120], 99.95th=[ 121], 00:43:20.520 | 99.99th=[ 121] 00:43:20.520 bw ( KiB/s): min= 1154, max= 1536, per=4.15%, avg=1401.37, stdev=79.17, samples=19 00:43:20.520 iops : min= 288, max= 384, avg=350.32, stdev=19.88, samples=19 00:43:20.520 lat (msec) : 50=99.43%, 100=0.11%, 250=0.45% 00:43:20.520 cpu : usr=97.60%, sys=1.68%, ctx=65, majf=0, minf=1633 00:43:20.520 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:43:20.520 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:20.520 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:20.520 issued rwts: total=3520,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:20.520 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:20.520 filename1: (groupid=0, jobs=1): err= 0: pid=3212086: Sun Nov 17 09:43:24 2024 00:43:20.520 read: IOPS=351, BW=1407KiB/s (1440kB/s)(13.8MiB/10036msec) 00:43:20.520 slat (usec): min=12, max=110, avg=61.58, stdev= 9.75 00:43:20.520 clat (msec): min=34, 
max=103, avg=44.87, stdev= 4.05 00:43:20.520 lat (msec): min=34, max=103, avg=44.93, stdev= 4.05 00:43:20.520 clat percentiles (msec): 00:43:20.520 | 1.00th=[ 44], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 45], 00:43:20.520 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 45], 00:43:20.520 | 70.00th=[ 45], 80.00th=[ 46], 90.00th=[ 46], 95.00th=[ 46], 00:43:20.520 | 99.00th=[ 47], 99.50th=[ 55], 99.90th=[ 104], 99.95th=[ 104], 00:43:20.520 | 99.99th=[ 104] 00:43:20.520 bw ( KiB/s): min= 1152, max= 1536, per=4.17%, avg=1406.85, stdev=72.11, samples=20 00:43:20.520 iops : min= 288, max= 384, avg=351.70, stdev=18.03, samples=20 00:43:20.520 lat (msec) : 50=99.43%, 100=0.11%, 250=0.45% 00:43:20.520 cpu : usr=95.58%, sys=2.43%, ctx=131, majf=0, minf=1632 00:43:20.520 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:43:20.520 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:20.520 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:20.520 issued rwts: total=3529,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:20.520 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:20.520 filename1: (groupid=0, jobs=1): err= 0: pid=3212087: Sun Nov 17 09:43:24 2024 00:43:20.520 read: IOPS=352, BW=1410KiB/s (1444kB/s)(13.9MiB/10078msec) 00:43:20.520 slat (usec): min=7, max=105, avg=43.21, stdev=23.23 00:43:20.520 clat (usec): min=16408, max=77801, avg=44672.79, stdev=2485.27 00:43:20.520 lat (usec): min=16426, max=77829, avg=44716.01, stdev=2487.78 00:43:20.520 clat percentiles (usec): 00:43:20.520 | 1.00th=[41157], 5.00th=[44303], 10.00th=[44303], 20.00th=[44303], 00:43:20.520 | 30.00th=[44303], 40.00th=[44827], 50.00th=[44827], 60.00th=[44827], 00:43:20.520 | 70.00th=[44827], 80.00th=[45351], 90.00th=[45351], 95.00th=[45876], 00:43:20.520 | 99.00th=[46924], 99.50th=[46924], 99.90th=[60556], 99.95th=[60556], 00:43:20.520 | 99.99th=[78119] 00:43:20.520 bw ( KiB/s): min= 1408, max= 1536, per=4.21%, avg=1420.80, stdev=39.40, samples=20 00:43:20.520 iops : min= 352, max= 384, avg=355.20, stdev= 9.85, samples=20 00:43:20.520 lat (msec) : 20=0.45%, 50=99.07%, 100=0.48% 00:43:20.520 cpu : usr=97.09%, sys=1.87%, ctx=137, majf=0, minf=1634 00:43:20.520 IO depths : 1=6.3%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:43:20.520 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:20.520 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:20.520 issued rwts: total=3553,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:20.520 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:20.520 filename1: (groupid=0, jobs=1): err= 0: pid=3212088: Sun Nov 17 09:43:24 2024 00:43:20.520 read: IOPS=352, BW=1410KiB/s (1443kB/s)(13.8MiB/10006msec) 00:43:20.520 slat (usec): min=10, max=107, avg=24.49, stdev= 9.48 00:43:20.520 clat (msec): min=27, max=128, avg=45.21, stdev= 5.33 00:43:20.520 lat (msec): min=28, max=128, avg=45.23, stdev= 5.33 00:43:20.520 clat percentiles (msec): 00:43:20.520 | 1.00th=[ 30], 5.00th=[ 45], 10.00th=[ 45], 20.00th=[ 45], 00:43:20.520 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 46], 00:43:20.520 | 70.00th=[ 46], 80.00th=[ 46], 90.00th=[ 46], 95.00th=[ 47], 00:43:20.520 | 99.00th=[ 62], 99.50th=[ 70], 99.90th=[ 111], 99.95th=[ 129], 00:43:20.520 | 99.99th=[ 129] 00:43:20.520 bw ( KiB/s): min= 1138, max= 1536, per=4.16%, avg=1403.89, stdev=75.33, samples=19 00:43:20.520 iops : min= 284, max= 384, avg=350.95, stdev=18.93, samples=19 
00:43:20.520 lat (msec) : 50=98.47%, 100=1.08%, 250=0.45% 00:43:20.520 cpu : usr=97.96%, sys=1.51%, ctx=20, majf=0, minf=1633 00:43:20.520 IO depths : 1=3.8%, 2=10.0%, 4=24.6%, 8=53.0%, 16=8.7%, 32=0.0%, >=64=0.0% 00:43:20.520 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:20.520 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:20.520 issued rwts: total=3526,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:20.520 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:20.520 filename1: (groupid=0, jobs=1): err= 0: pid=3212089: Sun Nov 17 09:43:24 2024 00:43:20.520 read: IOPS=352, BW=1410KiB/s (1444kB/s)(13.8MiB/10029msec) 00:43:20.520 slat (nsec): min=7877, max=71009, avg=30812.39, stdev=8575.82 00:43:20.520 clat (msec): min=26, max=103, avg=45.09, stdev= 4.16 00:43:20.520 lat (msec): min=26, max=103, avg=45.12, stdev= 4.16 00:43:20.520 clat percentiles (msec): 00:43:20.520 | 1.00th=[ 44], 5.00th=[ 45], 10.00th=[ 45], 20.00th=[ 45], 00:43:20.520 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 45], 00:43:20.520 | 70.00th=[ 46], 80.00th=[ 46], 90.00th=[ 46], 95.00th=[ 47], 00:43:20.520 | 99.00th=[ 47], 99.50th=[ 47], 99.90th=[ 104], 99.95th=[ 104], 00:43:20.520 | 99.99th=[ 104] 00:43:20.520 bw ( KiB/s): min= 1152, max= 1536, per=4.18%, avg=1408.00, stdev=73.90, samples=19 00:43:20.520 iops : min= 288, max= 384, avg=352.00, stdev=18.48, samples=19 00:43:20.520 lat (msec) : 50=99.55%, 250=0.45% 00:43:20.520 cpu : usr=98.13%, sys=1.39%, ctx=36, majf=0, minf=1631 00:43:20.520 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:43:20.520 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:20.520 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:20.520 issued rwts: total=3536,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:20.520 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:20.520 filename2: (groupid=0, jobs=1): err= 0: pid=3212090: Sun Nov 17 09:43:24 2024 00:43:20.520 read: IOPS=353, BW=1414KiB/s (1448kB/s)(13.8MiB/10005msec) 00:43:20.520 slat (usec): min=7, max=107, avg=62.58, stdev=10.67 00:43:20.520 clat (usec): min=39830, max=70661, avg=44704.99, stdev=1891.91 00:43:20.520 lat (usec): min=39875, max=70687, avg=44767.57, stdev=1889.05 00:43:20.520 clat percentiles (usec): 00:43:20.520 | 1.00th=[43254], 5.00th=[43779], 10.00th=[43779], 20.00th=[44303], 00:43:20.520 | 30.00th=[44303], 40.00th=[44303], 50.00th=[44303], 60.00th=[44827], 00:43:20.520 | 70.00th=[44827], 80.00th=[45351], 90.00th=[45351], 95.00th=[45876], 00:43:20.520 | 99.00th=[46924], 99.50th=[46924], 99.90th=[70779], 99.95th=[70779], 00:43:20.520 | 99.99th=[70779] 00:43:20.520 bw ( KiB/s): min= 1280, max= 1536, per=4.18%, avg=1408.00, stdev=42.67, samples=19 00:43:20.520 iops : min= 320, max= 384, avg=352.00, stdev=10.67, samples=19 00:43:20.520 lat (msec) : 50=99.55%, 100=0.45% 00:43:20.520 cpu : usr=95.46%, sys=2.59%, ctx=167, majf=0, minf=1633 00:43:20.520 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:43:20.520 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:20.520 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:20.520 issued rwts: total=3536,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:20.520 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:20.520 filename2: (groupid=0, jobs=1): err= 0: pid=3212091: Sun Nov 17 09:43:24 2024 00:43:20.520 read: IOPS=351, 
BW=1406KiB/s (1440kB/s)(13.8MiB/10013msec) 00:43:20.520 slat (nsec): min=13351, max=92754, avg=38915.19, stdev=17247.87 00:43:20.520 clat (msec): min=28, max=127, avg=45.12, stdev= 5.68 00:43:20.521 lat (msec): min=28, max=127, avg=45.16, stdev= 5.68 00:43:20.521 clat percentiles (msec): 00:43:20.521 | 1.00th=[ 44], 5.00th=[ 45], 10.00th=[ 45], 20.00th=[ 45], 00:43:20.521 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 45], 00:43:20.521 | 70.00th=[ 45], 80.00th=[ 46], 90.00th=[ 46], 95.00th=[ 46], 00:43:20.521 | 99.00th=[ 47], 99.50th=[ 47], 99.90th=[ 128], 99.95th=[ 128], 00:43:20.521 | 99.99th=[ 128] 00:43:20.521 bw ( KiB/s): min= 1024, max= 1536, per=4.15%, avg=1401.26, stdev=99.82, samples=19 00:43:20.521 iops : min= 256, max= 384, avg=350.32, stdev=24.96, samples=19 00:43:20.521 lat (msec) : 50=99.55%, 250=0.45% 00:43:20.521 cpu : usr=97.11%, sys=1.81%, ctx=60, majf=0, minf=1631 00:43:20.521 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:43:20.521 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:20.521 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:20.521 issued rwts: total=3520,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:20.521 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:20.521 filename2: (groupid=0, jobs=1): err= 0: pid=3212092: Sun Nov 17 09:43:24 2024 00:43:20.521 read: IOPS=353, BW=1413KiB/s (1447kB/s)(13.8MiB/10012msec) 00:43:20.521 slat (nsec): min=13341, max=99906, avg=50633.44, stdev=16646.49 00:43:20.521 clat (usec): min=27053, max=77949, avg=44847.80, stdev=2447.98 00:43:20.521 lat (usec): min=27072, max=77978, avg=44898.43, stdev=2447.63 00:43:20.521 clat percentiles (usec): 00:43:20.521 | 1.00th=[43254], 5.00th=[43779], 10.00th=[44303], 20.00th=[44303], 00:43:20.521 | 30.00th=[44303], 40.00th=[44303], 50.00th=[44827], 60.00th=[44827], 00:43:20.521 | 70.00th=[44827], 80.00th=[45351], 90.00th=[45351], 95.00th=[45876], 00:43:20.521 | 99.00th=[46924], 99.50th=[62129], 99.90th=[78119], 99.95th=[78119], 00:43:20.521 | 99.99th=[78119] 00:43:20.521 bw ( KiB/s): min= 1280, max= 1536, per=4.18%, avg=1408.00, stdev=73.90, samples=19 00:43:20.521 iops : min= 320, max= 384, avg=352.00, stdev=18.48, samples=19 00:43:20.521 lat (msec) : 50=99.43%, 100=0.57% 00:43:20.521 cpu : usr=96.24%, sys=2.28%, ctx=172, majf=0, minf=1633 00:43:20.521 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:43:20.521 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:20.521 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:20.521 issued rwts: total=3536,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:20.521 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:20.521 filename2: (groupid=0, jobs=1): err= 0: pid=3212093: Sun Nov 17 09:43:24 2024 00:43:20.521 read: IOPS=351, BW=1408KiB/s (1442kB/s)(13.8MiB/10001msec) 00:43:20.521 slat (usec): min=10, max=102, avg=55.23, stdev=16.22 00:43:20.521 clat (msec): min=35, max=114, avg=44.96, stdev= 4.77 00:43:20.521 lat (msec): min=35, max=114, avg=45.01, stdev= 4.76 00:43:20.521 clat percentiles (msec): 00:43:20.521 | 1.00th=[ 44], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 45], 00:43:20.521 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 45], 00:43:20.521 | 70.00th=[ 45], 80.00th=[ 46], 90.00th=[ 46], 95.00th=[ 46], 00:43:20.521 | 99.00th=[ 47], 99.50th=[ 48], 99.90th=[ 115], 99.95th=[ 115], 00:43:20.521 | 99.99th=[ 115] 00:43:20.521 bw ( KiB/s): min= 
1152, max= 1536, per=4.15%, avg=1401.79, stdev=67.17, samples=19 00:43:20.521 iops : min= 288, max= 384, avg=350.32, stdev=16.78, samples=19 00:43:20.521 lat (msec) : 50=99.55%, 250=0.45% 00:43:20.521 cpu : usr=96.95%, sys=2.09%, ctx=96, majf=0, minf=1631 00:43:20.521 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:43:20.521 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:20.521 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:20.521 issued rwts: total=3520,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:20.521 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:20.521 filename2: (groupid=0, jobs=1): err= 0: pid=3212094: Sun Nov 17 09:43:24 2024 00:43:20.521 read: IOPS=355, BW=1422KiB/s (1457kB/s)(13.9MiB/10034msec) 00:43:20.521 slat (nsec): min=5283, max=73919, avg=32217.67, stdev=9571.49 00:43:20.521 clat (usec): min=16974, max=60012, avg=44710.68, stdev=2587.09 00:43:20.521 lat (usec): min=17004, max=60086, avg=44742.90, stdev=2587.18 00:43:20.521 clat percentiles (usec): 00:43:20.521 | 1.00th=[30278], 5.00th=[43779], 10.00th=[44303], 20.00th=[44303], 00:43:20.521 | 30.00th=[44303], 40.00th=[44827], 50.00th=[44827], 60.00th=[44827], 00:43:20.521 | 70.00th=[45351], 80.00th=[45351], 90.00th=[45876], 95.00th=[46400], 00:43:20.521 | 99.00th=[46924], 99.50th=[47449], 99.90th=[60031], 99.95th=[60031], 00:43:20.521 | 99.99th=[60031] 00:43:20.521 bw ( KiB/s): min= 1408, max= 1536, per=4.21%, avg=1420.80, stdev=39.40, samples=20 00:43:20.521 iops : min= 352, max= 384, avg=355.20, stdev= 9.85, samples=20 00:43:20.521 lat (msec) : 20=0.45%, 50=99.10%, 100=0.45% 00:43:20.521 cpu : usr=98.12%, sys=1.46%, ctx=16, majf=0, minf=1634 00:43:20.521 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:43:20.521 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:20.521 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:20.521 issued rwts: total=3568,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:20.521 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:20.521 filename2: (groupid=0, jobs=1): err= 0: pid=3212095: Sun Nov 17 09:43:24 2024 00:43:20.521 read: IOPS=355, BW=1422KiB/s (1456kB/s)(13.9MiB/10036msec) 00:43:20.521 slat (nsec): min=5088, max=75362, avg=30897.09, stdev=9497.42 00:43:20.521 clat (usec): min=16834, max=60011, avg=44730.92, stdev=2556.88 00:43:20.521 lat (usec): min=16872, max=60086, avg=44761.82, stdev=2556.76 00:43:20.521 clat percentiles (usec): 00:43:20.521 | 1.00th=[32375], 5.00th=[44303], 10.00th=[44303], 20.00th=[44303], 00:43:20.521 | 30.00th=[44303], 40.00th=[44827], 50.00th=[44827], 60.00th=[44827], 00:43:20.521 | 70.00th=[45351], 80.00th=[45351], 90.00th=[45876], 95.00th=[46400], 00:43:20.521 | 99.00th=[46924], 99.50th=[47449], 99.90th=[60031], 99.95th=[60031], 00:43:20.521 | 99.99th=[60031] 00:43:20.521 bw ( KiB/s): min= 1408, max= 1536, per=4.21%, avg=1420.80, stdev=39.40, samples=20 00:43:20.521 iops : min= 352, max= 384, avg=355.20, stdev= 9.85, samples=20 00:43:20.521 lat (msec) : 20=0.45%, 50=99.10%, 100=0.45% 00:43:20.521 cpu : usr=98.25%, sys=1.30%, ctx=19, majf=0, minf=1634 00:43:20.521 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:43:20.521 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:20.521 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:20.521 issued rwts: total=3568,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:43:20.521 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:20.521 filename2: (groupid=0, jobs=1): err= 0: pid=3212096: Sun Nov 17 09:43:24 2024 00:43:20.521 read: IOPS=350, BW=1402KiB/s (1436kB/s)(13.7MiB/10012msec) 00:43:20.521 slat (nsec): min=11233, max=91978, avg=32003.70, stdev=12370.02 00:43:20.521 clat (msec): min=26, max=138, avg=45.30, stdev= 6.43 00:43:20.521 lat (msec): min=26, max=138, avg=45.33, stdev= 6.43 00:43:20.521 clat percentiles (msec): 00:43:20.521 | 1.00th=[ 44], 5.00th=[ 45], 10.00th=[ 45], 20.00th=[ 45], 00:43:20.521 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 45], 00:43:20.521 | 70.00th=[ 46], 80.00th=[ 46], 90.00th=[ 46], 95.00th=[ 46], 00:43:20.521 | 99.00th=[ 47], 99.50th=[ 71], 99.90th=[ 138], 99.95th=[ 138], 00:43:20.521 | 99.99th=[ 138] 00:43:20.521 bw ( KiB/s): min= 1024, max= 1536, per=4.15%, avg=1401.26, stdev=99.82, samples=19 00:43:20.521 iops : min= 256, max= 384, avg=350.32, stdev=24.96, samples=19 00:43:20.521 lat (msec) : 50=99.26%, 100=0.28%, 250=0.46% 00:43:20.521 cpu : usr=97.89%, sys=1.37%, ctx=72, majf=0, minf=1633 00:43:20.521 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:43:20.521 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:20.521 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:20.521 issued rwts: total=3510,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:20.521 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:20.521 filename2: (groupid=0, jobs=1): err= 0: pid=3212098: Sun Nov 17 09:43:24 2024 00:43:20.521 read: IOPS=356, BW=1428KiB/s (1462kB/s)(14.0MiB/10041msec) 00:43:20.521 slat (usec): min=7, max=143, avg=62.11, stdev=11.15 00:43:20.521 clat (usec): min=13826, max=63681, avg=44267.81, stdev=3582.93 00:43:20.521 lat (usec): min=13858, max=63727, avg=44329.92, stdev=3586.05 00:43:20.521 clat percentiles (usec): 00:43:20.521 | 1.00th=[20841], 5.00th=[43779], 10.00th=[43779], 20.00th=[43779], 00:43:20.521 | 30.00th=[44303], 40.00th=[44303], 50.00th=[44303], 60.00th=[44827], 00:43:20.521 | 70.00th=[44827], 80.00th=[45351], 90.00th=[45351], 95.00th=[45876], 00:43:20.521 | 99.00th=[46924], 99.50th=[57410], 99.90th=[63701], 99.95th=[63701], 00:43:20.521 | 99.99th=[63701] 00:43:20.521 bw ( KiB/s): min= 1408, max= 1536, per=4.23%, avg=1427.20, stdev=46.89, samples=20 00:43:20.521 iops : min= 352, max= 384, avg=356.80, stdev=11.72, samples=20 00:43:20.521 lat (msec) : 20=0.89%, 50=98.60%, 100=0.50% 00:43:20.521 cpu : usr=94.08%, sys=3.34%, ctx=234, majf=0, minf=1635 00:43:20.521 IO depths : 1=6.1%, 2=12.3%, 4=24.9%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:43:20.521 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:20.521 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:20.521 issued rwts: total=3584,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:20.521 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:20.522 00:43:20.522 Run status group 0 (all jobs): 00:43:20.522 READ: bw=32.9MiB/s (34.5MB/s), 1402KiB/s-1428KiB/s (1436kB/s-1462kB/s), io=332MiB (348MB), run=10001-10078msec 00:43:20.781 ----------------------------------------------------- 00:43:20.781 Suppressions used: 00:43:20.782 count bytes template 00:43:20.782 45 402 /usr/src/fio/parse.c 00:43:20.782 1 8 libtcmalloc_minimal.so 00:43:20.782 1 904 libcrypto.so 00:43:20.782 ----------------------------------------------------- 00:43:20.782 00:43:20.782 09:43:25 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:43:20.782 09:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:43:20.782 09:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:43:20.782 09:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:43:20.782 09:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:43:20.782 09:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:43:20.782 09:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:20.782 09:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:20.782 09:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:20.782 09:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:43:20.782 09:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:20.782 09:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:21.042 09:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:21.042 09:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:43:21.042 09:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:43:21.042 09:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:43:21.042 09:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:43:21.042 09:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:21.042 09:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:21.042 09:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:21.042 09:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:43:21.042 09:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:21.042 09:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:21.042 09:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:21.042 09:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:43:21.042 09:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:43:21.042 09:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:43:21.042 09:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:43:21.042 09:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:21.042 09:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:21.042 09:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:21.042 09:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:43:21.042 09:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:21.042 09:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:21.042 09:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
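[Editor's reference sketch] The xtrace above finishes tearing down the three null-bdev subsystems used by the 24-thread random-parameters run, and the entries that follow recreate subsystems 0 and 1 with 16-byte metadata and DIF type 1 before the 8k/16k/128k job starts. As a reference only, the same RPC sequence could be replayed by hand with SPDK's scripts/rpc.py roughly as sketched below; the NQNs, bdev parameters, target address and port are copied from the trace, while the RPC socket default and the absence of error handling are assumptions of the sketch.

# Sketch only: replays the RPC sequence visible in the trace via scripts/rpc.py,
# assuming an already running SPDK nvmf target on the default RPC socket.
RPC=./scripts/rpc.py

# Tear down one subsystem and its backing null bdev (dif.sh loops this over 0 1 2).
$RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
$RPC bdev_null_delete bdev_null0

# Recreate a 64 MB null bdev with 512-byte blocks, 16-byte metadata and DIF type 1,
# expose it through a fresh subsystem and add the TCP listener seen in the trace.
$RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420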
00:43:21.042 09:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:43:21.042 09:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:43:21.042 09:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:43:21.042 09:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:43:21.042 09:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:43:21.042 09:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:43:21.042 09:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:43:21.042 09:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:43:21.042 09:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:43:21.042 09:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:43:21.042 09:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:43:21.042 09:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:43:21.042 09:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:21.042 09:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:21.042 bdev_null0 00:43:21.042 09:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:21.042 09:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:43:21.042 09:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:21.042 09:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:21.042 09:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:21.042 09:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:43:21.042 09:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:21.042 09:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:21.042 09:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:21.042 09:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:43:21.042 09:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:21.042 09:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:21.042 [2024-11-17 09:43:25.858008] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:21.042 09:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:21.042 09:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:43:21.042 09:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:43:21.042 09:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:43:21.042 09:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:43:21.042 09:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:21.042 09:43:25 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:21.042 bdev_null1 00:43:21.042 09:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:21.042 09:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:43:21.042 09:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:21.042 09:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:21.042 09:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:21.042 09:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:43:21.042 09:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:21.042 09:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:21.042 09:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:21.042 09:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:21.042 09:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:21.042 09:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:21.042 09:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:21.042 09:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:43:21.042 09:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:43:21.042 09:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:43:21.042 09:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:21.042 09:43:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:43:21.042 09:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:43:21.042 09:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:21.042 09:43:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:43:21.042 09:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:43:21.042 09:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:43:21.042 09:43:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:21.042 09:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:21.042 09:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:43:21.042 09:43:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:21.042 { 00:43:21.042 "params": { 00:43:21.042 "name": "Nvme$subsystem", 00:43:21.042 "trtype": "$TEST_TRANSPORT", 00:43:21.042 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:21.042 "adrfam": "ipv4", 00:43:21.042 "trsvcid": "$NVMF_PORT", 00:43:21.042 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:21.042 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:21.042 "hdgst": 
${hdgst:-false}, 00:43:21.042 "ddgst": ${ddgst:-false} 00:43:21.042 }, 00:43:21.042 "method": "bdev_nvme_attach_controller" 00:43:21.042 } 00:43:21.042 EOF 00:43:21.042 )") 00:43:21.043 09:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:43:21.043 09:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:21.043 09:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:43:21.043 09:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:43:21.043 09:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:21.043 09:43:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:43:21.043 09:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:21.043 09:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:43:21.043 09:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:43:21.043 09:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:43:21.043 09:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:43:21.043 09:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:21.043 09:43:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:21.043 09:43:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:21.043 { 00:43:21.043 "params": { 00:43:21.043 "name": "Nvme$subsystem", 00:43:21.043 "trtype": "$TEST_TRANSPORT", 00:43:21.043 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:21.043 "adrfam": "ipv4", 00:43:21.043 "trsvcid": "$NVMF_PORT", 00:43:21.043 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:21.043 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:21.043 "hdgst": ${hdgst:-false}, 00:43:21.043 "ddgst": ${ddgst:-false} 00:43:21.043 }, 00:43:21.043 "method": "bdev_nvme_attach_controller" 00:43:21.043 } 00:43:21.043 EOF 00:43:21.043 )") 00:43:21.043 09:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:43:21.043 09:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:43:21.043 09:43:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:43:21.043 09:43:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:43:21.043 09:43:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:43:21.043 09:43:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:43:21.043 "params": { 00:43:21.043 "name": "Nvme0", 00:43:21.043 "trtype": "tcp", 00:43:21.043 "traddr": "10.0.0.2", 00:43:21.043 "adrfam": "ipv4", 00:43:21.043 "trsvcid": "4420", 00:43:21.043 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:21.043 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:21.043 "hdgst": false, 00:43:21.043 "ddgst": false 00:43:21.043 }, 00:43:21.043 "method": "bdev_nvme_attach_controller" 00:43:21.043 },{ 00:43:21.043 "params": { 00:43:21.043 "name": "Nvme1", 00:43:21.043 "trtype": "tcp", 00:43:21.043 "traddr": "10.0.0.2", 00:43:21.043 "adrfam": "ipv4", 00:43:21.043 "trsvcid": "4420", 00:43:21.043 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:43:21.043 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:43:21.043 "hdgst": false, 00:43:21.043 "ddgst": false 00:43:21.043 }, 00:43:21.043 "method": "bdev_nvme_attach_controller" 00:43:21.043 }' 00:43:21.043 09:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:43:21.043 09:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:43:21.043 09:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # break 00:43:21.043 09:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:43:21.043 09:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:21.301 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:43:21.301 ... 00:43:21.301 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:43:21.301 ... 
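[Editor's reference sketch] The printf entries above show the two bdev_nvme_attach_controller fragments that gen_nvmf_target_json hands to fio's spdk_bdev ioengine on /dev/fd/62, with the generated job file arriving on /dev/fd/61 and LD_PRELOAD loading libasan together with build/fio/spdk_bdev. A standalone equivalent, offered only as a hedged sketch, would look roughly like the following; the "subsystems" envelope around the fragment, the file names, the single-controller job, and any option not visible in the trace (for example time_based) are assumptions here.

# Sketch only: run the same randread workload outside the test harness.
# bdev.json wraps one attach-controller fragment from the trace in the usual
# SPDK "subsystems" envelope (the harness builds this JSON on the fly).
cat > bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                      "adrfam": "ipv4", "trsvcid": "4420",
                      "subnqn": "nqn.2016-06.io.spdk:cnode0",
                      "hostnqn": "nqn.2016-06.io.spdk:host0",
                      "hdgst": false, "ddgst": false }
        }
      ]
    }
  ]
}
EOF

# Minimal job file mirroring the bs/iodepth/rw values reported by fio above.
cat > dif.fio <<'EOF'
[global]
ioengine=spdk_bdev
spdk_json_conf=bdev.json
thread=1
[filename0]
filename=Nvme0n1
rw=randread
bs=8k,16k,128k
iodepth=8
runtime=5
time_based=1
EOF

# The fio plugin is loaded via LD_PRELOAD, just as the harness does (ASan omitted).
LD_PRELOAD=./build/fio/spdk_bdev /usr/src/fio/fio dif.fio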
00:43:21.301 fio-3.35 00:43:21.301 Starting 4 threads 00:43:27.862 00:43:27.862 filename0: (groupid=0, jobs=1): err= 0: pid=3213717: Sun Nov 17 09:43:32 2024 00:43:27.862 read: IOPS=1409, BW=11.0MiB/s (11.5MB/s)(55.1MiB/5002msec) 00:43:27.862 slat (nsec): min=6814, max=63647, avg=20773.08, stdev=7236.95 00:43:27.862 clat (usec): min=1069, max=11897, avg=5592.53, stdev=736.06 00:43:27.862 lat (usec): min=1088, max=11918, avg=5613.31, stdev=736.16 00:43:27.862 clat percentiles (usec): 00:43:27.862 | 1.00th=[ 3195], 5.00th=[ 4883], 10.00th=[ 5145], 20.00th=[ 5276], 00:43:27.862 | 30.00th=[ 5407], 40.00th=[ 5473], 50.00th=[ 5538], 60.00th=[ 5669], 00:43:27.862 | 70.00th=[ 5735], 80.00th=[ 5866], 90.00th=[ 5997], 95.00th=[ 6325], 00:43:27.862 | 99.00th=[ 8848], 99.50th=[ 9503], 99.90th=[11600], 99.95th=[11600], 00:43:27.862 | 99.99th=[11863] 00:43:27.862 bw ( KiB/s): min=10752, max=11568, per=25.06%, avg=11299.56, stdev=279.97, samples=9 00:43:27.862 iops : min= 1344, max= 1446, avg=1412.44, stdev=35.00, samples=9 00:43:27.862 lat (msec) : 2=0.40%, 4=1.01%, 10=98.33%, 20=0.27% 00:43:27.862 cpu : usr=94.88%, sys=4.48%, ctx=9, majf=0, minf=1634 00:43:27.862 IO depths : 1=0.9%, 2=20.8%, 4=53.2%, 8=25.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:27.862 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:27.862 complete : 0=0.0%, 4=90.8%, 8=9.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:27.862 issued rwts: total=7052,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:27.862 latency : target=0, window=0, percentile=100.00%, depth=8 00:43:27.862 filename0: (groupid=0, jobs=1): err= 0: pid=3213718: Sun Nov 17 09:43:32 2024 00:43:27.862 read: IOPS=1427, BW=11.2MiB/s (11.7MB/s)(55.8MiB/5004msec) 00:43:27.862 slat (nsec): min=6655, max=66633, avg=16675.31, stdev=6561.35 00:43:27.862 clat (usec): min=1142, max=10228, avg=5543.88, stdev=602.79 00:43:27.862 lat (usec): min=1160, max=10240, avg=5560.56, stdev=602.86 00:43:27.862 clat percentiles (usec): 00:43:27.862 | 1.00th=[ 2999], 5.00th=[ 4817], 10.00th=[ 5080], 20.00th=[ 5342], 00:43:27.862 | 30.00th=[ 5407], 40.00th=[ 5473], 50.00th=[ 5538], 60.00th=[ 5604], 00:43:27.862 | 70.00th=[ 5735], 80.00th=[ 5800], 90.00th=[ 5997], 95.00th=[ 6194], 00:43:27.862 | 99.00th=[ 7111], 99.50th=[ 8225], 99.90th=[ 9765], 99.95th=[10028], 00:43:27.862 | 99.99th=[10290] 00:43:27.862 bw ( KiB/s): min=10992, max=12000, per=25.33%, avg=11424.00, stdev=304.51, samples=10 00:43:27.862 iops : min= 1374, max= 1500, avg=1428.00, stdev=38.06, samples=10 00:43:27.862 lat (msec) : 2=0.32%, 4=1.47%, 10=98.15%, 20=0.06% 00:43:27.862 cpu : usr=93.44%, sys=5.94%, ctx=10, majf=0, minf=1639 00:43:27.862 IO depths : 1=1.2%, 2=12.2%, 4=61.3%, 8=25.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:27.862 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:27.862 complete : 0=0.0%, 4=91.1%, 8=8.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:27.862 issued rwts: total=7142,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:27.862 latency : target=0, window=0, percentile=100.00%, depth=8 00:43:27.862 filename1: (groupid=0, jobs=1): err= 0: pid=3213719: Sun Nov 17 09:43:32 2024 00:43:27.862 read: IOPS=1402, BW=11.0MiB/s (11.5MB/s)(54.8MiB/5003msec) 00:43:27.862 slat (nsec): min=6840, max=95851, avg=20931.27, stdev=8943.27 00:43:27.862 clat (usec): min=1084, max=13014, avg=5617.43, stdev=967.38 00:43:27.862 lat (usec): min=1103, max=13036, avg=5638.36, stdev=967.34 00:43:27.862 clat percentiles (usec): 00:43:27.862 | 1.00th=[ 1942], 5.00th=[ 4752], 10.00th=[ 5145], 20.00th=[ 
5342], 00:43:27.862 | 30.00th=[ 5407], 40.00th=[ 5473], 50.00th=[ 5538], 60.00th=[ 5669], 00:43:27.862 | 70.00th=[ 5735], 80.00th=[ 5866], 90.00th=[ 6063], 95.00th=[ 6652], 00:43:27.862 | 99.00th=[ 9765], 99.50th=[10028], 99.90th=[12649], 99.95th=[12649], 00:43:27.862 | 99.99th=[13042] 00:43:27.862 bw ( KiB/s): min=10528, max=11552, per=24.88%, avg=11222.90, stdev=333.02, samples=10 00:43:27.862 iops : min= 1316, max= 1444, avg=1402.80, stdev=41.69, samples=10 00:43:27.862 lat (msec) : 2=1.08%, 4=1.50%, 10=96.88%, 20=0.54% 00:43:27.862 cpu : usr=94.78%, sys=4.62%, ctx=8, majf=0, minf=1636 00:43:27.862 IO depths : 1=1.2%, 2=20.8%, 4=53.0%, 8=24.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:27.862 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:27.862 complete : 0=0.0%, 4=90.9%, 8=9.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:27.862 issued rwts: total=7019,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:27.862 latency : target=0, window=0, percentile=100.00%, depth=8 00:43:27.862 filename1: (groupid=0, jobs=1): err= 0: pid=3213720: Sun Nov 17 09:43:32 2024 00:43:27.862 read: IOPS=1398, BW=10.9MiB/s (11.5MB/s)(54.6MiB/5002msec) 00:43:27.862 slat (nsec): min=7370, max=92892, avg=20496.95, stdev=9010.54 00:43:27.862 clat (usec): min=1098, max=13766, avg=5642.85, stdev=771.85 00:43:27.862 lat (usec): min=1117, max=13789, avg=5663.34, stdev=771.57 00:43:27.862 clat percentiles (usec): 00:43:27.862 | 1.00th=[ 3392], 5.00th=[ 4883], 10.00th=[ 5211], 20.00th=[ 5342], 00:43:27.862 | 30.00th=[ 5407], 40.00th=[ 5473], 50.00th=[ 5604], 60.00th=[ 5669], 00:43:27.862 | 70.00th=[ 5735], 80.00th=[ 5866], 90.00th=[ 6128], 95.00th=[ 6587], 00:43:27.862 | 99.00th=[ 8979], 99.50th=[ 9634], 99.90th=[13435], 99.95th=[13435], 00:43:27.862 | 99.99th=[13829] 00:43:27.862 bw ( KiB/s): min=10560, max=11520, per=24.91%, avg=11235.56, stdev=345.83, samples=9 00:43:27.862 iops : min= 1320, max= 1440, avg=1404.44, stdev=43.23, samples=9 00:43:27.862 lat (msec) : 2=0.26%, 4=1.19%, 10=98.28%, 20=0.27% 00:43:27.862 cpu : usr=93.98%, sys=5.44%, ctx=8, majf=0, minf=1636 00:43:27.862 IO depths : 1=1.0%, 2=19.6%, 4=54.4%, 8=25.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:27.862 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:27.862 complete : 0=0.0%, 4=90.8%, 8=9.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:27.862 issued rwts: total=6995,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:27.862 latency : target=0, window=0, percentile=100.00%, depth=8 00:43:27.862 00:43:27.862 Run status group 0 (all jobs): 00:43:27.862 READ: bw=44.0MiB/s (46.2MB/s), 10.9MiB/s-11.2MiB/s (11.5MB/s-11.7MB/s), io=220MiB (231MB), run=5002-5004msec 00:43:28.797 ----------------------------------------------------- 00:43:28.797 Suppressions used: 00:43:28.797 count bytes template 00:43:28.797 6 52 /usr/src/fio/parse.c 00:43:28.797 1 8 libtcmalloc_minimal.so 00:43:28.797 1 904 libcrypto.so 00:43:28.797 ----------------------------------------------------- 00:43:28.797 00:43:28.797 09:43:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:43:28.798 09:43:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:43:28.798 09:43:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:43:28.798 09:43:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:43:28.798 09:43:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:43:28.798 09:43:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:43:28.798 09:43:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:28.798 09:43:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:28.798 09:43:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:28.798 09:43:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:43:28.798 09:43:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:28.798 09:43:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:28.798 09:43:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:28.798 09:43:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:43:28.798 09:43:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:43:28.798 09:43:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:43:28.798 09:43:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:43:28.798 09:43:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:28.798 09:43:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:28.798 09:43:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:28.798 09:43:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:43:28.798 09:43:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:28.798 09:43:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:28.798 09:43:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:28.798 00:43:28.798 real 0m28.128s 00:43:28.798 user 4m33.970s 00:43:28.798 sys 0m8.412s 00:43:28.798 09:43:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:28.798 09:43:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:28.798 ************************************ 00:43:28.798 END TEST fio_dif_rand_params 00:43:28.798 ************************************ 00:43:28.798 09:43:33 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:43:28.798 09:43:33 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:43:28.798 09:43:33 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:28.798 09:43:33 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:28.798 ************************************ 00:43:28.798 START TEST fio_dif_digest 00:43:28.798 ************************************ 00:43:28.798 09:43:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:43:28.798 09:43:33 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:43:28.798 09:43:33 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:43:28.798 09:43:33 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:43:28.798 09:43:33 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:43:28.798 09:43:33 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:43:28.798 09:43:33 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:43:28.798 09:43:33 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:43:28.798 09:43:33 
nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:43:28.798 09:43:33 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:43:28.798 09:43:33 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:43:28.798 09:43:33 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:43:28.798 09:43:33 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:43:28.798 09:43:33 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:43:28.798 09:43:33 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:43:28.798 09:43:33 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:43:28.798 09:43:33 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:43:28.798 09:43:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:28.798 09:43:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:43:28.798 bdev_null0 00:43:28.798 09:43:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:28.798 09:43:33 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:43:28.798 09:43:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:28.798 09:43:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:43:28.798 09:43:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:28.798 09:43:33 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:43:28.798 09:43:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:28.798 09:43:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:43:28.798 09:43:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:28.798 09:43:33 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:43:28.798 09:43:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:28.798 09:43:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:43:28.798 [2024-11-17 09:43:33.608304] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:28.798 09:43:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:28.798 09:43:33 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:43:28.798 09:43:33 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:43:28.798 09:43:33 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:43:28.798 09:43:33 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:43:28.798 09:43:33 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:28.798 09:43:33 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:43:28.798 09:43:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:28.798 09:43:33 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:43:28.798 09:43:33 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- 
# for subsystem in "${@:-1}" 00:43:28.798 09:43:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:43:28.798 09:43:33 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:43:28.798 09:43:33 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:28.798 { 00:43:28.798 "params": { 00:43:28.798 "name": "Nvme$subsystem", 00:43:28.798 "trtype": "$TEST_TRANSPORT", 00:43:28.798 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:28.798 "adrfam": "ipv4", 00:43:28.798 "trsvcid": "$NVMF_PORT", 00:43:28.798 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:28.798 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:28.798 "hdgst": ${hdgst:-false}, 00:43:28.798 "ddgst": ${ddgst:-false} 00:43:28.798 }, 00:43:28.798 "method": "bdev_nvme_attach_controller" 00:43:28.798 } 00:43:28.798 EOF 00:43:28.798 )") 00:43:28.798 09:43:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:28.798 09:43:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:43:28.798 09:43:33 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:43:28.798 09:43:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:28.798 09:43:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:43:28.798 09:43:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:43:28.798 09:43:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:28.798 09:43:33 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:43:28.798 09:43:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:28.798 09:43:33 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:43:28.798 09:43:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:43:28.798 09:43:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:28.798 09:43:33 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:43:28.798 09:43:33 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
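Note on the fio invocation traced here: the job file handed to fio on /dev/fd/61 comes from gen_fio_conf and is never echoed into this log; only the generated bdev JSON (printed just below) is. As a minimal sketch only, assuming a bdev named Nvme0n1 from the attached Nvme0 controller and the job parameters fio itself reports further down (randread, 128k blocks, iodepth 3, 3 threads, 10s runtime), a standalone equivalent would look roughly like this, with /tmp/bdev.json standing in for the JSON the script pipes in on /dev/fd/62:

  # Hypothetical reconstruction of the generated job file; names and paths are assumptions.
  cat > /tmp/dif_digest.fio <<'EOF'
  [global]
  ioengine=spdk_bdev
  thread=1
  rw=randread
  bs=128k
  iodepth=3
  time_based=1
  runtime=10
  numjobs=3
  [filename0]
  filename=Nvme0n1
  EOF
  # The SPDK fio bdev plugin is LD_PRELOADed, exactly as the trace below shows.
  LD_PRELOAD=./build/fio/spdk_bdev /usr/src/fio/fio --ioengine=spdk_bdev \
      --spdk_json_conf=/tmp/bdev.json /tmp/dif_digest.fio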
00:43:28.798 09:43:33 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:43:28.798 09:43:33 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:43:28.798 "params": { 00:43:28.798 "name": "Nvme0", 00:43:28.798 "trtype": "tcp", 00:43:28.798 "traddr": "10.0.0.2", 00:43:28.798 "adrfam": "ipv4", 00:43:28.798 "trsvcid": "4420", 00:43:28.798 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:28.798 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:28.798 "hdgst": true, 00:43:28.798 "ddgst": true 00:43:28.798 }, 00:43:28.798 "method": "bdev_nvme_attach_controller" 00:43:28.798 }' 00:43:28.798 09:43:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:43:28.798 09:43:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:43:28.798 09:43:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1351 -- # break 00:43:28.798 09:43:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:43:28.798 09:43:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:29.059 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:43:29.059 ... 00:43:29.059 fio-3.35 00:43:29.059 Starting 3 threads 00:43:41.280 00:43:41.280 filename0: (groupid=0, jobs=1): err= 0: pid=3214617: Sun Nov 17 09:43:44 2024 00:43:41.280 read: IOPS=164, BW=20.6MiB/s (21.6MB/s)(207MiB/10049msec) 00:43:41.280 slat (nsec): min=7268, max=55806, avg=22262.44, stdev=5231.40 00:43:41.280 clat (usec): min=14186, max=55321, avg=18128.69, stdev=1770.46 00:43:41.280 lat (usec): min=14219, max=55342, avg=18150.95, stdev=1770.48 00:43:41.280 clat percentiles (usec): 00:43:41.280 | 1.00th=[15270], 5.00th=[16188], 10.00th=[16581], 20.00th=[17171], 00:43:41.280 | 30.00th=[17433], 40.00th=[17695], 50.00th=[17957], 60.00th=[18482], 00:43:41.280 | 70.00th=[18744], 80.00th=[19006], 90.00th=[19530], 95.00th=[20317], 00:43:41.280 | 99.00th=[21365], 99.50th=[22414], 99.90th=[50594], 99.95th=[55313], 00:43:41.280 | 99.99th=[55313] 00:43:41.280 bw ( KiB/s): min=19494, max=22528, per=33.84%, avg=21198.70, stdev=697.89, samples=20 00:43:41.280 iops : min= 152, max= 176, avg=165.60, stdev= 5.49, samples=20 00:43:41.280 lat (msec) : 20=93.12%, 50=6.76%, 100=0.12% 00:43:41.280 cpu : usr=93.51%, sys=5.93%, ctx=15, majf=0, minf=1637 00:43:41.280 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:41.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:41.280 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:41.280 issued rwts: total=1658,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:41.280 latency : target=0, window=0, percentile=100.00%, depth=3 00:43:41.280 filename0: (groupid=0, jobs=1): err= 0: pid=3214618: Sun Nov 17 09:43:44 2024 00:43:41.280 read: IOPS=165, BW=20.7MiB/s (21.7MB/s)(208MiB/10048msec) 00:43:41.280 slat (nsec): min=7373, max=57716, avg=25206.56, stdev=6175.60 00:43:41.280 clat (usec): min=13198, max=58803, avg=18092.69, stdev=1836.88 00:43:41.280 lat (usec): min=13224, max=58831, avg=18117.89, stdev=1837.54 00:43:41.280 clat percentiles (usec): 00:43:41.280 | 1.00th=[15008], 5.00th=[15926], 10.00th=[16450], 20.00th=[16909], 00:43:41.280 | 30.00th=[17433], 40.00th=[17695], 
50.00th=[17957], 60.00th=[18482], 00:43:41.280 | 70.00th=[18744], 80.00th=[19006], 90.00th=[19530], 95.00th=[20317], 00:43:41.281 | 99.00th=[21365], 99.50th=[21890], 99.90th=[51643], 99.95th=[58983], 00:43:41.281 | 99.99th=[58983] 00:43:41.281 bw ( KiB/s): min=20224, max=22784, per=33.90%, avg=21235.20, stdev=656.49, samples=20 00:43:41.281 iops : min= 158, max= 178, avg=165.90, stdev= 5.13, samples=20 00:43:41.281 lat (msec) : 20=92.47%, 50=7.41%, 100=0.12% 00:43:41.281 cpu : usr=94.33%, sys=5.08%, ctx=22, majf=0, minf=1634 00:43:41.281 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:41.281 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:41.281 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:41.281 issued rwts: total=1661,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:41.281 latency : target=0, window=0, percentile=100.00%, depth=3 00:43:41.281 filename0: (groupid=0, jobs=1): err= 0: pid=3214619: Sun Nov 17 09:43:44 2024 00:43:41.281 read: IOPS=159, BW=19.9MiB/s (20.9MB/s)(200MiB/10047msec) 00:43:41.281 slat (nsec): min=7460, max=85643, avg=21392.41, stdev=5319.67 00:43:41.281 clat (usec): min=14669, max=54258, avg=18798.43, stdev=1736.48 00:43:41.281 lat (usec): min=14688, max=54276, avg=18819.82, stdev=1736.65 00:43:41.281 clat percentiles (usec): 00:43:41.281 | 1.00th=[15926], 5.00th=[16712], 10.00th=[17171], 20.00th=[17695], 00:43:41.281 | 30.00th=[18220], 40.00th=[18482], 50.00th=[18744], 60.00th=[19006], 00:43:41.281 | 70.00th=[19268], 80.00th=[19792], 90.00th=[20317], 95.00th=[20841], 00:43:41.281 | 99.00th=[21890], 99.50th=[22676], 99.90th=[47973], 99.95th=[54264], 00:43:41.281 | 99.99th=[54264] 00:43:41.281 bw ( KiB/s): min=19200, max=21504, per=32.61%, avg=20428.80, stdev=624.87, samples=20 00:43:41.281 iops : min= 150, max= 168, avg=159.60, stdev= 4.88, samples=20 00:43:41.281 lat (msec) : 20=85.24%, 50=14.70%, 100=0.06% 00:43:41.281 cpu : usr=93.92%, sys=5.54%, ctx=15, majf=0, minf=1634 00:43:41.281 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:41.281 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:41.281 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:41.281 issued rwts: total=1599,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:41.281 latency : target=0, window=0, percentile=100.00%, depth=3 00:43:41.281 00:43:41.281 Run status group 0 (all jobs): 00:43:41.281 READ: bw=61.2MiB/s (64.1MB/s), 19.9MiB/s-20.7MiB/s (20.9MB/s-21.7MB/s), io=615MiB (645MB), run=10047-10049msec 00:43:41.281 ----------------------------------------------------- 00:43:41.281 Suppressions used: 00:43:41.281 count bytes template 00:43:41.281 5 44 /usr/src/fio/parse.c 00:43:41.281 1 8 libtcmalloc_minimal.so 00:43:41.281 1 904 libcrypto.so 00:43:41.281 ----------------------------------------------------- 00:43:41.281 00:43:41.281 09:43:45 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:43:41.281 09:43:45 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:43:41.281 09:43:45 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:43:41.281 09:43:45 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:43:41.281 09:43:45 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:43:41.281 09:43:45 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:43:41.281 09:43:45 nvmf_dif.fio_dif_digest -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:43:41.281 09:43:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:43:41.281 09:43:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:41.281 09:43:45 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:43:41.281 09:43:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:41.281 09:43:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:43:41.281 09:43:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:41.281 00:43:41.281 real 0m12.427s 00:43:41.281 user 0m30.569s 00:43:41.281 sys 0m2.104s 00:43:41.281 09:43:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:41.281 09:43:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:43:41.281 ************************************ 00:43:41.281 END TEST fio_dif_digest 00:43:41.281 ************************************ 00:43:41.281 09:43:46 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:43:41.281 09:43:46 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:43:41.281 09:43:46 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:43:41.281 09:43:46 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:43:41.281 09:43:46 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:41.281 09:43:46 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:43:41.281 09:43:46 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:41.281 09:43:46 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:41.281 rmmod nvme_tcp 00:43:41.281 rmmod nvme_fabrics 00:43:41.281 rmmod nvme_keyring 00:43:41.281 09:43:46 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:43:41.281 09:43:46 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:43:41.281 09:43:46 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:43:41.281 09:43:46 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 3207691 ']' 00:43:41.281 09:43:46 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 3207691 00:43:41.281 09:43:46 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 3207691 ']' 00:43:41.281 09:43:46 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 3207691 00:43:41.281 09:43:46 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:43:41.281 09:43:46 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:41.281 09:43:46 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3207691 00:43:41.281 09:43:46 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:43:41.281 09:43:46 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:43:41.281 09:43:46 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3207691' 00:43:41.281 killing process with pid 3207691 00:43:41.281 09:43:46 nvmf_dif -- common/autotest_common.sh@973 -- # kill 3207691 00:43:41.281 09:43:46 nvmf_dif -- common/autotest_common.sh@978 -- # wait 3207691 00:43:42.658 09:43:47 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:43:42.658 09:43:47 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:43:43.591 Waiting for block devices as requested 00:43:43.591 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:43:43.591 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:43:43.849 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:43:43.849 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:43:43.849 
0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:43:44.109 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:43:44.109 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:43:44.109 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:43:44.109 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:43:44.109 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:43:44.374 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:43:44.375 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:43:44.375 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:43:44.375 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:43:44.375 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:43:44.637 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:43:44.637 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:43:44.637 09:43:49 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:43:44.637 09:43:49 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:43:44.637 09:43:49 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:43:44.637 09:43:49 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:43:44.637 09:43:49 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:43:44.637 09:43:49 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:43:44.637 09:43:49 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:44.637 09:43:49 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:43:44.637 09:43:49 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:44.637 09:43:49 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:43:44.637 09:43:49 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:47.177 09:43:51 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:43:47.177 00:43:47.177 real 1m16.508s 00:43:47.177 user 6m44.815s 00:43:47.177 sys 0m19.459s 00:43:47.177 09:43:51 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:47.177 09:43:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:47.177 ************************************ 00:43:47.177 END TEST nvmf_dif 00:43:47.177 ************************************ 00:43:47.177 09:43:51 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:43:47.177 09:43:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:43:47.177 09:43:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:47.177 09:43:51 -- common/autotest_common.sh@10 -- # set +x 00:43:47.177 ************************************ 00:43:47.177 START TEST nvmf_abort_qd_sizes 00:43:47.177 ************************************ 00:43:47.177 09:43:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:43:47.177 * Looking for test storage... 
00:43:47.177 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:43:47.177 09:43:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:43:47.177 09:43:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:43:47.177 09:43:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:43:47.177 09:43:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:43:47.177 09:43:51 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:47.177 09:43:51 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:47.177 09:43:51 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:47.177 09:43:51 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:43:47.177 09:43:51 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:43:47.177 09:43:51 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:43:47.177 09:43:51 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:43:47.177 09:43:51 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:43:47.177 09:43:51 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:43:47.177 09:43:51 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:43:47.177 09:43:51 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:47.177 09:43:51 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:43:47.177 09:43:51 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:43:47.177 09:43:51 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:47.177 09:43:51 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:43:47.177 09:43:51 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:43:47.177 09:43:51 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:43:47.177 09:43:51 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:47.177 09:43:51 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:43:47.177 09:43:51 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:43:47.177 09:43:51 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:43:47.177 09:43:51 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:43:47.177 09:43:51 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:47.177 09:43:51 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:43:47.177 09:43:51 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:43:47.177 09:43:51 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:47.177 09:43:51 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:47.177 09:43:51 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:43:47.177 09:43:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:47.177 09:43:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:43:47.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:47.177 --rc genhtml_branch_coverage=1 00:43:47.177 --rc genhtml_function_coverage=1 00:43:47.177 --rc genhtml_legend=1 00:43:47.177 --rc geninfo_all_blocks=1 00:43:47.177 --rc geninfo_unexecuted_blocks=1 00:43:47.177 00:43:47.177 ' 00:43:47.177 09:43:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # 
LCOV_OPTS=' 00:43:47.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:47.177 --rc genhtml_branch_coverage=1 00:43:47.177 --rc genhtml_function_coverage=1 00:43:47.177 --rc genhtml_legend=1 00:43:47.177 --rc geninfo_all_blocks=1 00:43:47.177 --rc geninfo_unexecuted_blocks=1 00:43:47.177 00:43:47.177 ' 00:43:47.177 09:43:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:43:47.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:47.177 --rc genhtml_branch_coverage=1 00:43:47.177 --rc genhtml_function_coverage=1 00:43:47.178 --rc genhtml_legend=1 00:43:47.178 --rc geninfo_all_blocks=1 00:43:47.178 --rc geninfo_unexecuted_blocks=1 00:43:47.178 00:43:47.178 ' 00:43:47.178 09:43:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:43:47.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:47.178 --rc genhtml_branch_coverage=1 00:43:47.178 --rc genhtml_function_coverage=1 00:43:47.178 --rc genhtml_legend=1 00:43:47.178 --rc geninfo_all_blocks=1 00:43:47.178 --rc geninfo_unexecuted_blocks=1 00:43:47.178 00:43:47.178 ' 00:43:47.178 09:43:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:47.178 09:43:51 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:43:47.178 09:43:51 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:47.178 09:43:51 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:47.178 09:43:51 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:47.178 09:43:51 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:47.178 09:43:51 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:47.178 09:43:51 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:47.178 09:43:51 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:47.178 09:43:51 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:47.178 09:43:51 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:47.178 09:43:51 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:47.178 09:43:51 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:43:47.178 09:43:51 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:43:47.178 09:43:51 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:47.178 09:43:51 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:47.178 09:43:51 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:47.178 09:43:51 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:47.178 09:43:51 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:47.178 09:43:51 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:43:47.178 09:43:51 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:47.178 09:43:51 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:47.178 09:43:51 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:47.178 09:43:51 
nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:47.178 09:43:51 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:47.178 09:43:51 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:47.178 09:43:51 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:43:47.178 09:43:51 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:47.178 09:43:51 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:43:47.178 09:43:51 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:47.178 09:43:51 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:47.178 09:43:51 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:47.178 09:43:51 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:47.178 09:43:51 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:47.178 09:43:51 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:43:47.178 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:43:47.178 09:43:51 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:47.178 09:43:51 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:47.178 09:43:51 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:47.178 09:43:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:43:47.178 09:43:51 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:43:47.178 09:43:51 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:47.178 09:43:51 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:43:47.178 09:43:51 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:43:47.178 09:43:51 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:43:47.178 09:43:51 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:43:47.178 09:43:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:43:47.178 09:43:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:47.178 09:43:51 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:43:47.178 09:43:51 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:43:47.178 09:43:51 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:43:47.178 09:43:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:49.081 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:49.081 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:43:49.081 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:43:49.081 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:43:49.081 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:43:49.081 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:43:49.081 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:43:49.081 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:43:49.081 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:43:49.081 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:43:49.081 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:43:49.081 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:43:49.081 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:43:49.081 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:43:49.081 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:43:49.081 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:49.081 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:49.081 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:49.081 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:49.081 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:49.081 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:49.081 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:49.081 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:43:49.081 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:49.081 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:49.081 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:49.081 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:49.081 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:43:49.081 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:43:49.081 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:43:49.081 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- 
# [[ e810 == e810 ]] 00:43:49.081 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:43:49.081 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:43:49.081 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:49.081 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:43:49.081 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:43:49.081 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:49.081 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:49.081 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:49.081 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:49.081 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:49.081 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:49.081 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:43:49.081 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:43:49.081 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:49.081 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:49.081 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:49.081 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:49.081 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:49.081 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:43:49.081 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:43:49.081 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:43:49.081 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:49.081 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:49.081 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:49.081 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:49.081 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:49.081 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:49.081 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:49.081 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:43:49.081 Found net devices under 0000:0a:00.0: cvl_0_0 00:43:49.081 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:49.081 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:49.081 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:49.081 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:49.081 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:49.081 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:49.081 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:49.081 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:43:49.081 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:43:49.081 Found net devices under 0000:0a:00.1: cvl_0_1 00:43:49.081 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:49.081 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:43:49.081 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:43:49.081 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:43:49.081 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:43:49.081 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:43:49.081 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:49.081 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:49.081 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:49.081 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:49.081 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:43:49.081 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:49.081 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:49.081 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:43:49.081 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:43:49.081 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:49.081 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:49.081 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:43:49.081 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:43:49.081 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:43:49.081 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:49.081 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:49.082 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:49.082 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:43:49.082 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:49.082 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:49.082 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:49.082 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:43:49.082 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:43:49.082 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:43:49.082 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.244 ms 00:43:49.082 00:43:49.082 --- 10.0.0.2 ping statistics --- 00:43:49.082 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:49.082 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:43:49.082 09:43:53 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:49.082 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:43:49.082 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:43:49.082 00:43:49.082 --- 10.0.0.1 ping statistics --- 00:43:49.082 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:49.082 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:43:49.082 09:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:49.082 09:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:43:49.082 09:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:43:49.082 09:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:43:50.459 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:43:50.459 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:43:50.459 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:43:50.459 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:43:50.459 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:43:50.459 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:43:50.459 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:43:50.459 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:43:50.459 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:43:50.459 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:43:50.459 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:43:50.459 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:43:50.459 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:43:50.459 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:43:50.459 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:43:50.459 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:43:51.395 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:43:51.395 09:43:56 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:51.395 09:43:56 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:43:51.395 09:43:56 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:43:51.395 09:43:56 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:51.395 09:43:56 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:43:51.395 09:43:56 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:43:51.395 09:43:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:43:51.395 09:43:56 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:43:51.395 09:43:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:51.395 09:43:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:51.395 09:43:56 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=3219669 00:43:51.395 09:43:56 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:43:51.395 09:43:56 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 3219669 00:43:51.395 09:43:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 3219669 ']' 00:43:51.395 09:43:56 
nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:51.395 09:43:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:51.395 09:43:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:51.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:51.395 09:43:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:51.395 09:43:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:51.653 [2024-11-17 09:43:56.442497] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:43:51.653 [2024-11-17 09:43:56.442660] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:51.653 [2024-11-17 09:43:56.590987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:43:51.911 [2024-11-17 09:43:56.733934] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:51.911 [2024-11-17 09:43:56.734015] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:51.911 [2024-11-17 09:43:56.734049] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:51.911 [2024-11-17 09:43:56.734072] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:51.911 [2024-11-17 09:43:56.734092] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
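For orientation, the target app started just above and the rpc_cmd calls traced below amount to the following standalone sequence. This is a sketch only, assuming a default SPDK checkout (rpc.py under scripts/, binaries under build/) and the same NVMe device and addressing used in this run:

  # Start the target inside the test's network namespace, as this run does.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf &
  # Attach the local PCIe NVMe device; its namespace shows up as the bdev spdk_targetn1.
  ./scripts/rpc.py bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420

After that, the abort example is pointed at the 10.0.0.2:4420 listener and swept across queue depths 4, 24 and 64, which is what the three result blocks further down report.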
00:43:51.911 [2024-11-17 09:43:56.736859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:43:51.911 [2024-11-17 09:43:56.736929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:43:51.911 [2024-11-17 09:43:56.737016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:51.911 [2024-11-17 09:43:56.737022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:43:52.479 09:43:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:52.479 09:43:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:43:52.479 09:43:57 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:43:52.479 09:43:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:52.479 09:43:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:52.479 09:43:57 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:52.479 09:43:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:43:52.479 09:43:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:43:52.479 09:43:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:43:52.479 09:43:57 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:43:52.479 09:43:57 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:43:52.479 09:43:57 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:88:00.0 ]] 00:43:52.479 09:43:57 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:43:52.479 09:43:57 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:43:52.479 09:43:57 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 00:43:52.479 09:43:57 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:43:52.479 09:43:57 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:43:52.479 09:43:57 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:43:52.479 09:43:57 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:43:52.479 09:43:57 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:88:00.0 00:43:52.479 09:43:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:43:52.479 09:43:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:43:52.479 09:43:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:43:52.479 09:43:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:43:52.479 09:43:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:52.479 09:43:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:52.479 ************************************ 00:43:52.479 START TEST spdk_target_abort 00:43:52.479 ************************************ 00:43:52.479 09:43:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:43:52.479 09:43:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:43:52.479 09:43:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b 
spdk_target 00:43:52.479 09:43:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:52.479 09:43:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:55.767 spdk_targetn1 00:43:55.767 09:44:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:55.767 09:44:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:43:55.767 09:44:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:55.767 09:44:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:55.767 [2024-11-17 09:44:00.350451] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:55.767 09:44:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:55.767 09:44:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:43:55.767 09:44:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:55.767 09:44:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:55.767 09:44:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:55.767 09:44:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:43:55.767 09:44:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:55.767 09:44:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:55.767 09:44:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:55.767 09:44:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:43:55.767 09:44:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:55.767 09:44:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:55.767 [2024-11-17 09:44:00.397585] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:55.767 09:44:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:55.767 09:44:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:43:55.767 09:44:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:43:55.767 09:44:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:43:55.767 09:44:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:43:55.767 09:44:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:43:55.767 09:44:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:43:55.767 09:44:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:43:55.767 09:44:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 
-- # local target r 00:43:55.767 09:44:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:43:55.767 09:44:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:55.767 09:44:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:43:55.767 09:44:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:55.767 09:44:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:43:55.767 09:44:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:55.767 09:44:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:43:55.767 09:44:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:55.767 09:44:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:43:55.767 09:44:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:55.767 09:44:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:55.767 09:44:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:43:55.767 09:44:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:59.053 Initializing NVMe Controllers 00:43:59.053 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:43:59.053 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:43:59.053 Initialization complete. Launching workers. 00:43:59.053 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9313, failed: 0 00:43:59.053 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1162, failed to submit 8151 00:43:59.053 success 708, unsuccessful 454, failed 0 00:43:59.053 09:44:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:43:59.053 09:44:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:02.342 Initializing NVMe Controllers 00:44:02.342 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:44:02.342 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:44:02.342 Initialization complete. Launching workers. 
00:44:02.342 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8550, failed: 0 00:44:02.342 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1231, failed to submit 7319 00:44:02.342 success 324, unsuccessful 907, failed 0 00:44:02.342 09:44:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:44:02.342 09:44:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:05.629 Initializing NVMe Controllers 00:44:05.629 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:44:05.629 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:44:05.629 Initialization complete. Launching workers. 00:44:05.629 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 27632, failed: 0 00:44:05.629 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2728, failed to submit 24904 00:44:05.629 success 227, unsuccessful 2501, failed 0 00:44:05.629 09:44:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:44:05.629 09:44:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:05.629 09:44:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:05.629 09:44:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:05.629 09:44:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:44:05.629 09:44:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:05.629 09:44:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:07.007 09:44:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:07.007 09:44:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3219669 00:44:07.007 09:44:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 3219669 ']' 00:44:07.007 09:44:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 3219669 00:44:07.007 09:44:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:44:07.007 09:44:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:07.007 09:44:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3219669 00:44:07.007 09:44:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:44:07.007 09:44:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:44:07.007 09:44:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3219669' 00:44:07.007 killing process with pid 3219669 00:44:07.007 09:44:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 3219669 00:44:07.007 09:44:11 nvmf_abort_qd_sizes.spdk_target_abort -- 
common/autotest_common.sh@978 -- # wait 3219669 00:44:07.944 00:44:07.944 real 0m15.323s 00:44:07.944 user 0m59.997s 00:44:07.944 sys 0m2.735s 00:44:07.944 09:44:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:07.944 09:44:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:07.944 ************************************ 00:44:07.944 END TEST spdk_target_abort 00:44:07.944 ************************************ 00:44:07.944 09:44:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:44:07.944 09:44:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:44:07.944 09:44:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:07.944 09:44:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:44:07.944 ************************************ 00:44:07.944 START TEST kernel_target_abort 00:44:07.944 ************************************ 00:44:07.944 09:44:12 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:44:07.944 09:44:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:44:07.944 09:44:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:44:07.944 09:44:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:44:07.944 09:44:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:44:07.944 09:44:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:44:07.944 09:44:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:44:07.944 09:44:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:44:07.944 09:44:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:44:07.944 09:44:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:44:07.944 09:44:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:44:07.944 09:44:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:44:07.944 09:44:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:44:07.944 09:44:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:44:07.944 09:44:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:44:07.944 09:44:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:44:07.944 09:44:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:44:07.944 09:44:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:44:07.944 09:44:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:44:07.944 09:44:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:44:07.944 09:44:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:44:07.944 09:44:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:44:07.944 09:44:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:44:08.881 Waiting for block devices as requested 00:44:08.881 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:44:09.140 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:44:09.140 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:44:09.400 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:44:09.400 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:44:09.400 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:44:09.400 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:44:09.659 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:44:09.659 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:44:09.659 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:44:09.659 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:44:09.918 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:44:09.918 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:44:09.918 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:44:09.918 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:44:10.176 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:44:10.176 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:44:10.434 09:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:44:10.434 09:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:44:10.434 09:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:44:10.434 09:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:44:10.434 09:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:44:10.434 09:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:44:10.434 09:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:44:10.434 09:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:44:10.434 09:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:44:10.693 No valid GPT data, bailing 00:44:10.693 09:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:44:10.693 09:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:44:10.693 09:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:44:10.693 09:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:44:10.693 09:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:44:10.693 09:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:44:10.693 09:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:44:10.693 09:44:15 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:44:10.693 09:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:44:10.693 09:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:44:10.693 09:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:44:10.693 09:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:44:10.693 09:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:44:10.693 09:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:44:10.693 09:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:44:10.693 09:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:44:10.693 09:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:44:10.693 09:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:44:10.693 00:44:10.693 Discovery Log Number of Records 2, Generation counter 2 00:44:10.693 =====Discovery Log Entry 0====== 00:44:10.693 trtype: tcp 00:44:10.693 adrfam: ipv4 00:44:10.693 subtype: current discovery subsystem 00:44:10.693 treq: not specified, sq flow control disable supported 00:44:10.693 portid: 1 00:44:10.693 trsvcid: 4420 00:44:10.693 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:44:10.693 traddr: 10.0.0.1 00:44:10.693 eflags: none 00:44:10.693 sectype: none 00:44:10.693 =====Discovery Log Entry 1====== 00:44:10.693 trtype: tcp 00:44:10.693 adrfam: ipv4 00:44:10.693 subtype: nvme subsystem 00:44:10.693 treq: not specified, sq flow control disable supported 00:44:10.693 portid: 1 00:44:10.693 trsvcid: 4420 00:44:10.693 subnqn: nqn.2016-06.io.spdk:testnqn 00:44:10.693 traddr: 10.0.0.1 00:44:10.693 eflags: none 00:44:10.693 sectype: none 00:44:10.693 09:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:44:10.693 09:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:44:10.693 09:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:44:10.693 09:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:44:10.693 09:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:44:10.693 09:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:44:10.693 09:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:44:10.693 09:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:44:10.693 09:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:44:10.693 09:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:10.693 09:44:15 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:44:10.693 09:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:10.693 09:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:44:10.693 09:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:10.693 09:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:44:10.693 09:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:10.693 09:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:44:10.693 09:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:10.693 09:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:10.693 09:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:44:10.693 09:44:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:13.976 Initializing NVMe Controllers 00:44:13.976 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:44:13.976 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:44:13.976 Initialization complete. Launching workers. 00:44:13.976 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 38030, failed: 0 00:44:13.976 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 38030, failed to submit 0 00:44:13.976 success 0, unsuccessful 38030, failed 0 00:44:13.976 09:44:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:44:13.976 09:44:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:17.258 Initializing NVMe Controllers 00:44:17.258 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:44:17.258 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:44:17.258 Initialization complete. Launching workers. 
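The configure_kernel_target steps traced above amount to standing up a Linux-kernel NVMe-oF TCP target through configfs and then confirming it with nvme discover. A condensed, hedged sketch of those steps follows; the echoed values and the NQN come straight from the trace, but bash xtrace does not print redirections, so the configfs attribute paths on the right are filled in from the standard nvmet layout rather than copied from the script, and the existence checks of the real nvmf/common.sh helper are omitted.

# condensed configure_kernel_target sketch (attribute paths are assumptions, see note above)
kernel_name=nqn.2016-06.io.spdk:testnqn
subs=/sys/kernel/config/nvmet/subsystems/$kernel_name
port=/sys/kernel/config/nvmet/ports/1
modprobe nvmet
mkdir "$subs" "$subs/namespaces/1" "$port"
echo "SPDK-$kernel_name" > "$subs/attr_model"           # traced: echo SPDK-nqn.2016-06.io.spdk:testnqn
echo 1                   > "$subs/attr_allow_any_host"  # traced: echo 1
echo /dev/nvme0n1        > "$subs/namespaces/1/device_path"
echo 1                   > "$subs/namespaces/1/enable"
echo 10.0.0.1            > "$port/addr_traddr"
echo tcp                 > "$port/addr_trtype"
echo 4420                > "$port/addr_trsvcid"
echo ipv4                > "$port/addr_adrfam"
ln -s "$subs" "$port/subsystems/"                       # enable the listener for this subsystem
nvme discover -t tcp -a 10.0.0.1 -s 4420                # should show the two discovery log entries seen above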
00:44:17.258 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 70171, failed: 0 00:44:17.258 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 17682, failed to submit 52489 00:44:17.258 success 0, unsuccessful 17682, failed 0 00:44:17.258 09:44:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:44:17.258 09:44:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:20.539 Initializing NVMe Controllers 00:44:20.539 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:44:20.539 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:44:20.539 Initialization complete. Launching workers. 00:44:20.539 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 63199, failed: 0 00:44:20.539 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 15794, failed to submit 47405 00:44:20.539 success 0, unsuccessful 15794, failed 0 00:44:20.539 09:44:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:44:20.539 09:44:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:44:20.539 09:44:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:44:20.539 09:44:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:44:20.539 09:44:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:44:20.539 09:44:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:44:20.539 09:44:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:44:20.539 09:44:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:44:20.539 09:44:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:44:20.539 09:44:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:44:21.474 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:44:21.474 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:44:21.474 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:44:21.474 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:44:21.474 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:44:21.474 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:44:21.474 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:44:21.474 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:44:21.474 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:44:21.474 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:44:21.474 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:44:21.474 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:44:21.474 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:44:21.474 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:44:21.474 0000:80:04.1 (8086 0e21): ioatdma -> 
vfio-pci 00:44:21.474 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:44:22.410 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:44:22.668 00:44:22.668 real 0m14.634s 00:44:22.668 user 0m7.172s 00:44:22.668 sys 0m3.294s 00:44:22.668 09:44:27 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:22.668 09:44:27 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:22.668 ************************************ 00:44:22.668 END TEST kernel_target_abort 00:44:22.668 ************************************ 00:44:22.668 09:44:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:44:22.668 09:44:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:44:22.668 09:44:27 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:44:22.668 09:44:27 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:44:22.668 09:44:27 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:44:22.668 09:44:27 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:44:22.668 09:44:27 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:44:22.668 09:44:27 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:44:22.668 rmmod nvme_tcp 00:44:22.668 rmmod nvme_fabrics 00:44:22.668 rmmod nvme_keyring 00:44:22.668 09:44:27 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:44:22.668 09:44:27 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:44:22.668 09:44:27 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:44:22.668 09:44:27 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 3219669 ']' 00:44:22.668 09:44:27 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 3219669 00:44:22.668 09:44:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 3219669 ']' 00:44:22.668 09:44:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 3219669 00:44:22.668 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3219669) - No such process 00:44:22.668 09:44:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 3219669 is not found' 00:44:22.668 Process with pid 3219669 is not found 00:44:22.668 09:44:27 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:44:22.668 09:44:27 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:44:24.044 Waiting for block devices as requested 00:44:24.044 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:44:24.044 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:44:24.044 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:44:24.044 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:44:24.302 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:44:24.302 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:44:24.302 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:44:24.302 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:44:24.559 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:44:24.559 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:44:24.559 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:44:24.559 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:44:24.816 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:44:24.816 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:44:24.816 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:44:25.074 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:44:25.074 0000:80:04.0 
(8086 0e20): vfio-pci -> ioatdma 00:44:25.074 09:44:30 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:44:25.074 09:44:30 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:44:25.075 09:44:30 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:44:25.075 09:44:30 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:44:25.075 09:44:30 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:44:25.075 09:44:30 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:44:25.075 09:44:30 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:44:25.075 09:44:30 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:44:25.075 09:44:30 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:25.075 09:44:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:44:25.075 09:44:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:27.069 09:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:44:27.069 00:44:27.069 real 0m40.357s 00:44:27.069 user 1m9.692s 00:44:27.069 sys 0m9.582s 00:44:27.069 09:44:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:27.069 09:44:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:44:27.069 ************************************ 00:44:27.069 END TEST nvmf_abort_qd_sizes 00:44:27.069 ************************************ 00:44:27.328 09:44:32 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:44:27.328 09:44:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:44:27.328 09:44:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:27.328 09:44:32 -- common/autotest_common.sh@10 -- # set +x 00:44:27.328 ************************************ 00:44:27.328 START TEST keyring_file 00:44:27.328 ************************************ 00:44:27.328 09:44:32 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:44:27.328 * Looking for test storage... 
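For symmetry, the teardown traced above (clean_kernel_target followed by nvmftestfini) condenses to the sketch below. As with the setup, xtrace does not show redirection targets, so the enable-file path is an assumption; the _remove_spdk_ns helper body is elided in the log and is only noted in a comment rather than reproduced.

# condensed clean_kernel_target + nvmftestfini sketch (hedged, see note above)
subs=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
port=/sys/kernel/config/nvmet/ports/1
echo 0 > "$subs/namespaces/1/enable"             # assumed target of the traced 'echo 0'
rm -f  "$port/subsystems/nqn.2016-06.io.spdk:testnqn"
rmdir  "$subs/namespaces/1" "$port" "$subs"
modprobe -r nvmet_tcp nvmet
# host side (nvmftestfini): the log shows nvme_tcp, nvme_fabrics and nvme_keyring being unloaded
modprobe -r nvme-tcp nvme-fabrics
# firewall/interface cleanup (iptr); SPDK network-namespace removal (_remove_spdk_ns) is not reproduced here
iptables-save | grep -v SPDK_NVMF | iptables-restore
ip -4 addr flush cvl_0_1                         # initiator-side test interface named in the log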
00:44:27.328 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:44:27.328 09:44:32 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:44:27.328 09:44:32 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:44:27.328 09:44:32 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:44:27.328 09:44:32 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:44:27.328 09:44:32 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:27.328 09:44:32 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:27.328 09:44:32 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:27.328 09:44:32 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:44:27.328 09:44:32 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:44:27.328 09:44:32 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:44:27.328 09:44:32 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:44:27.328 09:44:32 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:44:27.328 09:44:32 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:44:27.328 09:44:32 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:44:27.328 09:44:32 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:27.328 09:44:32 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:44:27.328 09:44:32 keyring_file -- scripts/common.sh@345 -- # : 1 00:44:27.328 09:44:32 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:27.328 09:44:32 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:44:27.328 09:44:32 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:44:27.328 09:44:32 keyring_file -- scripts/common.sh@353 -- # local d=1 00:44:27.328 09:44:32 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:27.328 09:44:32 keyring_file -- scripts/common.sh@355 -- # echo 1 00:44:27.328 09:44:32 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:44:27.328 09:44:32 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:44:27.328 09:44:32 keyring_file -- scripts/common.sh@353 -- # local d=2 00:44:27.328 09:44:32 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:27.328 09:44:32 keyring_file -- scripts/common.sh@355 -- # echo 2 00:44:27.328 09:44:32 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:44:27.328 09:44:32 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:27.328 09:44:32 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:27.328 09:44:32 keyring_file -- scripts/common.sh@368 -- # return 0 00:44:27.328 09:44:32 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:27.328 09:44:32 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:44:27.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:27.328 --rc genhtml_branch_coverage=1 00:44:27.328 --rc genhtml_function_coverage=1 00:44:27.328 --rc genhtml_legend=1 00:44:27.328 --rc geninfo_all_blocks=1 00:44:27.328 --rc geninfo_unexecuted_blocks=1 00:44:27.328 00:44:27.328 ' 00:44:27.328 09:44:32 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:44:27.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:27.328 --rc genhtml_branch_coverage=1 00:44:27.328 --rc genhtml_function_coverage=1 00:44:27.328 --rc genhtml_legend=1 00:44:27.328 --rc geninfo_all_blocks=1 
00:44:27.328 --rc geninfo_unexecuted_blocks=1 00:44:27.328 00:44:27.328 ' 00:44:27.328 09:44:32 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:44:27.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:27.328 --rc genhtml_branch_coverage=1 00:44:27.328 --rc genhtml_function_coverage=1 00:44:27.328 --rc genhtml_legend=1 00:44:27.328 --rc geninfo_all_blocks=1 00:44:27.328 --rc geninfo_unexecuted_blocks=1 00:44:27.328 00:44:27.328 ' 00:44:27.328 09:44:32 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:44:27.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:27.328 --rc genhtml_branch_coverage=1 00:44:27.328 --rc genhtml_function_coverage=1 00:44:27.328 --rc genhtml_legend=1 00:44:27.328 --rc geninfo_all_blocks=1 00:44:27.328 --rc geninfo_unexecuted_blocks=1 00:44:27.328 00:44:27.328 ' 00:44:27.328 09:44:32 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:44:27.328 09:44:32 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:44:27.328 09:44:32 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:44:27.328 09:44:32 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:27.328 09:44:32 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:27.328 09:44:32 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:27.328 09:44:32 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:27.328 09:44:32 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:27.328 09:44:32 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:27.328 09:44:32 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:27.328 09:44:32 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:27.329 09:44:32 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:27.329 09:44:32 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:27.329 09:44:32 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:44:27.329 09:44:32 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:44:27.329 09:44:32 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:27.329 09:44:32 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:27.329 09:44:32 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:44:27.329 09:44:32 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:27.329 09:44:32 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:27.329 09:44:32 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:44:27.329 09:44:32 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:27.329 09:44:32 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:27.329 09:44:32 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:27.329 09:44:32 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:27.329 09:44:32 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:27.329 09:44:32 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:27.329 09:44:32 keyring_file -- paths/export.sh@5 -- # export PATH 00:44:27.329 09:44:32 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:27.329 09:44:32 keyring_file -- nvmf/common.sh@51 -- # : 0 00:44:27.329 09:44:32 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:44:27.329 09:44:32 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:44:27.329 09:44:32 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:27.329 09:44:32 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:27.329 09:44:32 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:27.329 09:44:32 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:44:27.329 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:44:27.329 09:44:32 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:44:27.329 09:44:32 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:44:27.329 09:44:32 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:44:27.329 09:44:32 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:44:27.329 09:44:32 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:44:27.329 09:44:32 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:44:27.329 09:44:32 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:44:27.329 09:44:32 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:44:27.329 09:44:32 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:44:27.329 09:44:32 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:44:27.329 09:44:32 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
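The trace that follows steps through prep_key for key0: it mktemp's a file, writes the 16-byte hex key in the NVMe/TCP TLS PSK interchange format via an inline python snippet (whose body the log elides), and locks the file down to 0600, which the keyring_file backend requires (the chmod 0660 negative test later in this run confirms that). A hedged sketch, with the interchange encoding reconstructed from the usual NVMeTLSkey-1:<hash>:<base64(key||crc32)>: convention rather than copied from the script:

# prep_key sketch (hedged; the real helper lives in test/keyring/common.sh)
prep_key_sketch() {
    local name=$1 key=$2 digest=$3        # name is bookkeeping only in this sketch
    local path; path=$(mktemp)            # e.g. /tmp/tmp.XXXXXXXXXX as seen in the log
    python3 - "$key" "$digest" > "$path" <<'PY'
import base64, binascii, struct, sys
key = bytes.fromhex(sys.argv[1])
crc = struct.pack('<I', binascii.crc32(key) & 0xffffffff)   # assumed interchange layout
print(f"NVMeTLSkey-1:{int(sys.argv[2]):02d}:{base64.b64encode(key + crc).decode()}:")
PY
    chmod 0600 "$path"                    # looser modes are rejected by keyring_file
    echo "$path"
}
key0path=$(prep_key_sketch key0 00112233445566778899aabbccddeeff 0)
key1path=$(prep_key_sketch key1 112233445566778899aabbccddeeff00 0)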
00:44:27.329 09:44:32 keyring_file -- keyring/common.sh@17 -- # name=key0 00:44:27.329 09:44:32 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:44:27.329 09:44:32 keyring_file -- keyring/common.sh@17 -- # digest=0 00:44:27.329 09:44:32 keyring_file -- keyring/common.sh@18 -- # mktemp 00:44:27.329 09:44:32 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.4E0Vb1oEpt 00:44:27.329 09:44:32 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:44:27.329 09:44:32 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:44:27.329 09:44:32 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:44:27.329 09:44:32 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:44:27.329 09:44:32 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:44:27.329 09:44:32 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:44:27.329 09:44:32 keyring_file -- nvmf/common.sh@733 -- # python - 00:44:27.587 09:44:32 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.4E0Vb1oEpt 00:44:27.587 09:44:32 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.4E0Vb1oEpt 00:44:27.587 09:44:32 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.4E0Vb1oEpt 00:44:27.587 09:44:32 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:44:27.587 09:44:32 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:44:27.587 09:44:32 keyring_file -- keyring/common.sh@17 -- # name=key1 00:44:27.587 09:44:32 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:44:27.587 09:44:32 keyring_file -- keyring/common.sh@17 -- # digest=0 00:44:27.587 09:44:32 keyring_file -- keyring/common.sh@18 -- # mktemp 00:44:27.587 09:44:32 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.VNEPBo3X5Y 00:44:27.587 09:44:32 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:44:27.587 09:44:32 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:44:27.587 09:44:32 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:44:27.587 09:44:32 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:44:27.587 09:44:32 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:44:27.587 09:44:32 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:44:27.587 09:44:32 keyring_file -- nvmf/common.sh@733 -- # python - 00:44:27.587 09:44:32 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.VNEPBo3X5Y 00:44:27.587 09:44:32 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.VNEPBo3X5Y 00:44:27.587 09:44:32 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.VNEPBo3X5Y 00:44:27.587 09:44:32 keyring_file -- keyring/file.sh@30 -- # tgtpid=3225890 00:44:27.587 09:44:32 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:44:27.587 09:44:32 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3225890 00:44:27.587 09:44:32 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3225890 ']' 00:44:27.587 09:44:32 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:27.587 09:44:32 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:27.587 09:44:32 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:27.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:27.587 09:44:32 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:27.587 09:44:32 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:44:27.587 [2024-11-17 09:44:32.489230] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:44:27.587 [2024-11-17 09:44:32.489402] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3225890 ] 00:44:27.845 [2024-11-17 09:44:32.626386] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:27.845 [2024-11-17 09:44:32.745383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:28.781 09:44:33 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:28.781 09:44:33 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:44:28.781 09:44:33 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:44:28.781 09:44:33 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:28.781 09:44:33 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:44:28.781 [2024-11-17 09:44:33.592690] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:28.781 null0 00:44:28.781 [2024-11-17 09:44:33.624663] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:44:28.781 [2024-11-17 09:44:33.625189] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:44:28.781 09:44:33 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:28.781 09:44:33 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:44:28.781 09:44:33 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:44:28.781 09:44:33 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:44:28.781 09:44:33 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:44:28.781 09:44:33 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:28.781 09:44:33 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:44:28.781 09:44:33 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:28.781 09:44:33 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:44:28.781 09:44:33 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:28.781 09:44:33 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:44:28.781 [2024-11-17 09:44:33.652716] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:44:28.781 request: 00:44:28.781 { 00:44:28.781 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:44:28.781 "secure_channel": false, 00:44:28.781 "listen_address": { 00:44:28.781 "trtype": "tcp", 00:44:28.781 "traddr": "127.0.0.1", 00:44:28.781 "trsvcid": "4420" 00:44:28.781 }, 00:44:28.781 "method": "nvmf_subsystem_add_listener", 00:44:28.781 "req_id": 1 00:44:28.781 } 00:44:28.781 Got JSON-RPC error response 00:44:28.781 response: 00:44:28.781 { 00:44:28.781 
"code": -32602, 00:44:28.781 "message": "Invalid parameters" 00:44:28.781 } 00:44:28.781 09:44:33 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:44:28.781 09:44:33 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:44:28.781 09:44:33 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:44:28.781 09:44:33 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:44:28.781 09:44:33 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:44:28.781 09:44:33 keyring_file -- keyring/file.sh@47 -- # bperfpid=3226033 00:44:28.781 09:44:33 keyring_file -- keyring/file.sh@49 -- # waitforlisten 3226033 /var/tmp/bperf.sock 00:44:28.781 09:44:33 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3226033 ']' 00:44:28.781 09:44:33 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:44:28.781 09:44:33 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:44:28.781 09:44:33 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:28.781 09:44:33 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:44:28.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:44:28.781 09:44:33 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:28.781 09:44:33 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:44:28.781 [2024-11-17 09:44:33.742608] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:44:28.781 [2024-11-17 09:44:33.742786] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3226033 ] 00:44:29.040 [2024-11-17 09:44:33.876719] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:29.040 [2024-11-17 09:44:34.012718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:44:29.974 09:44:34 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:29.974 09:44:34 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:44:29.974 09:44:34 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.4E0Vb1oEpt 00:44:29.974 09:44:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.4E0Vb1oEpt 00:44:29.974 09:44:34 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.VNEPBo3X5Y 00:44:29.974 09:44:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.VNEPBo3X5Y 00:44:30.540 09:44:35 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:44:30.540 09:44:35 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:44:30.540 09:44:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:30.540 09:44:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:30.540 09:44:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
00:44:30.540 09:44:35 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.4E0Vb1oEpt == \/\t\m\p\/\t\m\p\.\4\E\0\V\b\1\o\E\p\t ]] 00:44:30.540 09:44:35 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:44:30.540 09:44:35 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:44:30.540 09:44:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:30.540 09:44:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:30.540 09:44:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:44:30.798 09:44:35 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.VNEPBo3X5Y == \/\t\m\p\/\t\m\p\.\V\N\E\P\B\o\3\X\5\Y ]] 00:44:30.798 09:44:35 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:44:30.798 09:44:35 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:30.798 09:44:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:30.798 09:44:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:30.798 09:44:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:30.798 09:44:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:31.364 09:44:36 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:44:31.364 09:44:36 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:44:31.364 09:44:36 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:44:31.364 09:44:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:31.364 09:44:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:31.364 09:44:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:44:31.364 09:44:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:31.364 09:44:36 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:44:31.364 09:44:36 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:31.364 09:44:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:31.622 [2024-11-17 09:44:36.615954] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:44:31.880 nvme0n1 00:44:31.880 09:44:36 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:44:31.880 09:44:36 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:31.880 09:44:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:31.880 09:44:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:31.880 09:44:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:31.880 09:44:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:32.137 09:44:36 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:44:32.137 09:44:36 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:44:32.137 09:44:36 keyring_file 
-- keyring/common.sh@12 -- # get_key key1 00:44:32.137 09:44:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:32.137 09:44:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:32.137 09:44:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:32.137 09:44:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:44:32.395 09:44:37 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:44:32.395 09:44:37 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:44:32.395 Running I/O for 1 seconds... 00:44:33.768 6468.00 IOPS, 25.27 MiB/s 00:44:33.768 Latency(us) 00:44:33.768 [2024-11-17T08:44:38.781Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:33.768 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:44:33.768 nvme0n1 : 1.01 6519.85 25.47 0.00 0.00 19541.24 11747.93 34952.53 00:44:33.768 [2024-11-17T08:44:38.781Z] =================================================================================================================== 00:44:33.768 [2024-11-17T08:44:38.781Z] Total : 6519.85 25.47 0.00 0.00 19541.24 11747.93 34952.53 00:44:33.768 { 00:44:33.768 "results": [ 00:44:33.768 { 00:44:33.768 "job": "nvme0n1", 00:44:33.768 "core_mask": "0x2", 00:44:33.768 "workload": "randrw", 00:44:33.768 "percentage": 50, 00:44:33.768 "status": "finished", 00:44:33.768 "queue_depth": 128, 00:44:33.768 "io_size": 4096, 00:44:33.768 "runtime": 1.011679, 00:44:33.769 "iops": 6519.854617917344, 00:44:33.769 "mibps": 25.468182101239623, 00:44:33.769 "io_failed": 0, 00:44:33.769 "io_timeout": 0, 00:44:33.769 "avg_latency_us": 19541.24386586708, 00:44:33.769 "min_latency_us": 11747.934814814815, 00:44:33.769 "max_latency_us": 34952.53333333333 00:44:33.769 } 00:44:33.769 ], 00:44:33.769 "core_count": 1 00:44:33.769 } 00:44:33.769 09:44:38 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:44:33.769 09:44:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:44:33.769 09:44:38 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:44:33.769 09:44:38 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:33.769 09:44:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:33.769 09:44:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:33.769 09:44:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:33.769 09:44:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:34.026 09:44:38 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:44:34.026 09:44:38 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:44:34.026 09:44:38 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:44:34.026 09:44:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:34.026 09:44:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:34.026 09:44:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:34.026 09:44:38 
keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:44:34.284 09:44:39 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:44:34.284 09:44:39 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:44:34.284 09:44:39 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:44:34.284 09:44:39 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:44:34.284 09:44:39 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:44:34.284 09:44:39 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:34.284 09:44:39 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:44:34.284 09:44:39 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:34.284 09:44:39 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:44:34.284 09:44:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:44:34.542 [2024-11-17 09:44:39.502080] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:44:34.542 [2024-11-17 09:44:39.502325] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7780 (107): Transport endpoint is not connected 00:44:34.542 [2024-11-17 09:44:39.503292] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7780 (9): Bad file descriptor 00:44:34.542 [2024-11-17 09:44:39.504290] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:44:34.542 [2024-11-17 09:44:39.504327] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:44:34.542 [2024-11-17 09:44:39.504353] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:44:34.542 [2024-11-17 09:44:39.504392] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
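Both the passing and the failing halves of this check issue the same attach RPC; only the key name differs, and the invocation with the mismatched key1 is what produces the request/response dump that follows. Condensed from the trace:

rpc="scripts/rpc.py -s /var/tmp/bperf.sock"
# matching PSK: the controller comes up as nvme0n1 and key0's refcnt goes 1 -> 2 for its lifetime
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
$rpc bdev_nvme_detach_controller nvme0
# mismatched PSK: the TLS handshake fails and the RPC returns -5 (Input/output error)
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1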
00:44:34.542 request: 00:44:34.542 { 00:44:34.542 "name": "nvme0", 00:44:34.542 "trtype": "tcp", 00:44:34.542 "traddr": "127.0.0.1", 00:44:34.542 "adrfam": "ipv4", 00:44:34.542 "trsvcid": "4420", 00:44:34.542 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:34.542 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:34.542 "prchk_reftag": false, 00:44:34.542 "prchk_guard": false, 00:44:34.542 "hdgst": false, 00:44:34.542 "ddgst": false, 00:44:34.542 "psk": "key1", 00:44:34.542 "allow_unrecognized_csi": false, 00:44:34.542 "method": "bdev_nvme_attach_controller", 00:44:34.542 "req_id": 1 00:44:34.542 } 00:44:34.542 Got JSON-RPC error response 00:44:34.542 response: 00:44:34.542 { 00:44:34.542 "code": -5, 00:44:34.542 "message": "Input/output error" 00:44:34.542 } 00:44:34.542 09:44:39 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:44:34.542 09:44:39 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:44:34.542 09:44:39 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:44:34.542 09:44:39 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:44:34.542 09:44:39 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:44:34.542 09:44:39 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:34.542 09:44:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:34.542 09:44:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:34.542 09:44:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:34.542 09:44:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:34.800 09:44:39 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:44:34.800 09:44:39 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:44:34.800 09:44:39 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:44:34.800 09:44:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:34.800 09:44:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:34.800 09:44:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:44:34.800 09:44:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:35.366 09:44:40 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:44:35.366 09:44:40 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:44:35.366 09:44:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:44:35.624 09:44:40 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:44:35.624 09:44:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:44:35.882 09:44:40 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:44:35.882 09:44:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:35.882 09:44:40 keyring_file -- keyring/file.sh@78 -- # jq length 00:44:36.140 09:44:40 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:44:36.140 09:44:40 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.4E0Vb1oEpt 00:44:36.140 09:44:40 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.4E0Vb1oEpt 00:44:36.140 09:44:40 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:44:36.140 09:44:40 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.4E0Vb1oEpt 00:44:36.140 09:44:40 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:44:36.140 09:44:40 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:36.140 09:44:40 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:44:36.140 09:44:40 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:36.140 09:44:40 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.4E0Vb1oEpt 00:44:36.140 09:44:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.4E0Vb1oEpt 00:44:36.398 [2024-11-17 09:44:41.200561] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.4E0Vb1oEpt': 0100660 00:44:36.398 [2024-11-17 09:44:41.200616] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:44:36.398 request: 00:44:36.398 { 00:44:36.398 "name": "key0", 00:44:36.398 "path": "/tmp/tmp.4E0Vb1oEpt", 00:44:36.398 "method": "keyring_file_add_key", 00:44:36.398 "req_id": 1 00:44:36.398 } 00:44:36.398 Got JSON-RPC error response 00:44:36.398 response: 00:44:36.398 { 00:44:36.398 "code": -1, 00:44:36.398 "message": "Operation not permitted" 00:44:36.398 } 00:44:36.398 09:44:41 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:44:36.398 09:44:41 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:44:36.398 09:44:41 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:44:36.398 09:44:41 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:44:36.398 09:44:41 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.4E0Vb1oEpt 00:44:36.398 09:44:41 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.4E0Vb1oEpt 00:44:36.398 09:44:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.4E0Vb1oEpt 00:44:36.656 09:44:41 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.4E0Vb1oEpt 00:44:36.656 09:44:41 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:44:36.656 09:44:41 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:36.656 09:44:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:36.656 09:44:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:36.656 09:44:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:36.656 09:44:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:36.913 09:44:41 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:44:36.913 09:44:41 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:36.913 09:44:41 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:44:36.913 09:44:41 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:36.913 09:44:41 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:44:36.913 09:44:41 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:36.913 09:44:41 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:44:36.913 09:44:41 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:36.913 09:44:41 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:36.913 09:44:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:37.171 [2024-11-17 09:44:42.010892] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.4E0Vb1oEpt': No such file or directory 00:44:37.171 [2024-11-17 09:44:42.010949] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:44:37.171 [2024-11-17 09:44:42.010999] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:44:37.171 [2024-11-17 09:44:42.011024] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:44:37.171 [2024-11-17 09:44:42.011048] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:44:37.171 [2024-11-17 09:44:42.011072] bdev_nvme.c:6669:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:44:37.171 request: 00:44:37.171 { 00:44:37.171 "name": "nvme0", 00:44:37.171 "trtype": "tcp", 00:44:37.171 "traddr": "127.0.0.1", 00:44:37.171 "adrfam": "ipv4", 00:44:37.171 "trsvcid": "4420", 00:44:37.171 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:37.171 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:37.171 "prchk_reftag": false, 00:44:37.171 "prchk_guard": false, 00:44:37.171 "hdgst": false, 00:44:37.171 "ddgst": false, 00:44:37.171 "psk": "key0", 00:44:37.171 "allow_unrecognized_csi": false, 00:44:37.171 "method": "bdev_nvme_attach_controller", 00:44:37.171 "req_id": 1 00:44:37.171 } 00:44:37.171 Got JSON-RPC error response 00:44:37.171 response: 00:44:37.171 { 00:44:37.172 "code": -19, 00:44:37.172 "message": "No such device" 00:44:37.172 } 00:44:37.172 09:44:42 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:44:37.172 09:44:42 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:44:37.172 09:44:42 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:44:37.172 09:44:42 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:44:37.172 09:44:42 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:44:37.172 09:44:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:44:37.429 09:44:42 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:44:37.429 09:44:42 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:44:37.429 09:44:42 keyring_file -- keyring/common.sh@17 -- # name=key0 00:44:37.429 09:44:42 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:44:37.430 09:44:42 keyring_file -- keyring/common.sh@17 -- # digest=0 00:44:37.430 09:44:42 keyring_file -- keyring/common.sh@18 -- # mktemp 00:44:37.430 09:44:42 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.BZSzccHxzF 00:44:37.430 09:44:42 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:44:37.430 09:44:42 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:44:37.430 09:44:42 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:44:37.430 09:44:42 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:44:37.430 09:44:42 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:44:37.430 09:44:42 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:44:37.430 09:44:42 keyring_file -- nvmf/common.sh@733 -- # python - 00:44:37.430 09:44:42 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.BZSzccHxzF 00:44:37.430 09:44:42 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.BZSzccHxzF 00:44:37.430 09:44:42 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.BZSzccHxzF 00:44:37.430 09:44:42 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.BZSzccHxzF 00:44:37.430 09:44:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.BZSzccHxzF 00:44:37.687 09:44:42 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:37.688 09:44:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:38.253 nvme0n1 00:44:38.253 09:44:42 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:44:38.254 09:44:42 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:38.254 09:44:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:38.254 09:44:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:38.254 09:44:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:38.254 09:44:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:38.254 09:44:43 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:44:38.254 09:44:43 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:44:38.254 09:44:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:44:38.819 09:44:43 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:44:38.819 09:44:43 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:44:38.819 09:44:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:38.819 09:44:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:44:38.819 09:44:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:38.819 09:44:43 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:44:38.819 09:44:43 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:44:38.819 09:44:43 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:38.819 09:44:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:38.819 09:44:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:38.819 09:44:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:38.819 09:44:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:39.385 09:44:44 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:44:39.385 09:44:44 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:44:39.385 09:44:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:44:39.385 09:44:44 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:44:39.385 09:44:44 keyring_file -- keyring/file.sh@105 -- # jq length 00:44:39.385 09:44:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:39.951 09:44:44 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:44:39.951 09:44:44 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.BZSzccHxzF 00:44:39.951 09:44:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.BZSzccHxzF 00:44:39.951 09:44:44 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.VNEPBo3X5Y 00:44:39.951 09:44:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.VNEPBo3X5Y 00:44:40.208 09:44:45 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:40.208 09:44:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:40.774 nvme0n1 00:44:40.774 09:44:45 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:44:40.774 09:44:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:44:41.032 09:44:45 keyring_file -- keyring/file.sh@113 -- # config='{ 00:44:41.032 "subsystems": [ 00:44:41.032 { 00:44:41.032 "subsystem": "keyring", 00:44:41.032 "config": [ 00:44:41.032 { 00:44:41.032 "method": "keyring_file_add_key", 00:44:41.032 "params": { 00:44:41.032 "name": "key0", 00:44:41.032 "path": "/tmp/tmp.BZSzccHxzF" 00:44:41.032 } 00:44:41.032 }, 00:44:41.032 { 00:44:41.032 "method": "keyring_file_add_key", 00:44:41.032 "params": { 00:44:41.032 "name": "key1", 00:44:41.032 "path": "/tmp/tmp.VNEPBo3X5Y" 00:44:41.032 } 00:44:41.032 } 00:44:41.032 ] 
00:44:41.032 }, 00:44:41.032 { 00:44:41.032 "subsystem": "iobuf", 00:44:41.032 "config": [ 00:44:41.032 { 00:44:41.032 "method": "iobuf_set_options", 00:44:41.032 "params": { 00:44:41.032 "small_pool_count": 8192, 00:44:41.032 "large_pool_count": 1024, 00:44:41.032 "small_bufsize": 8192, 00:44:41.032 "large_bufsize": 135168, 00:44:41.032 "enable_numa": false 00:44:41.033 } 00:44:41.033 } 00:44:41.033 ] 00:44:41.033 }, 00:44:41.033 { 00:44:41.033 "subsystem": "sock", 00:44:41.033 "config": [ 00:44:41.033 { 00:44:41.033 "method": "sock_set_default_impl", 00:44:41.033 "params": { 00:44:41.033 "impl_name": "posix" 00:44:41.033 } 00:44:41.033 }, 00:44:41.033 { 00:44:41.033 "method": "sock_impl_set_options", 00:44:41.033 "params": { 00:44:41.033 "impl_name": "ssl", 00:44:41.033 "recv_buf_size": 4096, 00:44:41.033 "send_buf_size": 4096, 00:44:41.033 "enable_recv_pipe": true, 00:44:41.033 "enable_quickack": false, 00:44:41.033 "enable_placement_id": 0, 00:44:41.033 "enable_zerocopy_send_server": true, 00:44:41.033 "enable_zerocopy_send_client": false, 00:44:41.033 "zerocopy_threshold": 0, 00:44:41.033 "tls_version": 0, 00:44:41.033 "enable_ktls": false 00:44:41.033 } 00:44:41.033 }, 00:44:41.033 { 00:44:41.033 "method": "sock_impl_set_options", 00:44:41.033 "params": { 00:44:41.033 "impl_name": "posix", 00:44:41.033 "recv_buf_size": 2097152, 00:44:41.033 "send_buf_size": 2097152, 00:44:41.033 "enable_recv_pipe": true, 00:44:41.033 "enable_quickack": false, 00:44:41.033 "enable_placement_id": 0, 00:44:41.033 "enable_zerocopy_send_server": true, 00:44:41.033 "enable_zerocopy_send_client": false, 00:44:41.033 "zerocopy_threshold": 0, 00:44:41.033 "tls_version": 0, 00:44:41.033 "enable_ktls": false 00:44:41.033 } 00:44:41.033 } 00:44:41.033 ] 00:44:41.033 }, 00:44:41.033 { 00:44:41.033 "subsystem": "vmd", 00:44:41.033 "config": [] 00:44:41.033 }, 00:44:41.033 { 00:44:41.033 "subsystem": "accel", 00:44:41.033 "config": [ 00:44:41.033 { 00:44:41.033 "method": "accel_set_options", 00:44:41.033 "params": { 00:44:41.033 "small_cache_size": 128, 00:44:41.033 "large_cache_size": 16, 00:44:41.033 "task_count": 2048, 00:44:41.033 "sequence_count": 2048, 00:44:41.033 "buf_count": 2048 00:44:41.033 } 00:44:41.033 } 00:44:41.033 ] 00:44:41.033 }, 00:44:41.033 { 00:44:41.033 "subsystem": "bdev", 00:44:41.033 "config": [ 00:44:41.033 { 00:44:41.033 "method": "bdev_set_options", 00:44:41.033 "params": { 00:44:41.033 "bdev_io_pool_size": 65535, 00:44:41.033 "bdev_io_cache_size": 256, 00:44:41.033 "bdev_auto_examine": true, 00:44:41.033 "iobuf_small_cache_size": 128, 00:44:41.033 "iobuf_large_cache_size": 16 00:44:41.033 } 00:44:41.033 }, 00:44:41.033 { 00:44:41.033 "method": "bdev_raid_set_options", 00:44:41.033 "params": { 00:44:41.033 "process_window_size_kb": 1024, 00:44:41.033 "process_max_bandwidth_mb_sec": 0 00:44:41.033 } 00:44:41.033 }, 00:44:41.033 { 00:44:41.033 "method": "bdev_iscsi_set_options", 00:44:41.033 "params": { 00:44:41.033 "timeout_sec": 30 00:44:41.033 } 00:44:41.033 }, 00:44:41.033 { 00:44:41.033 "method": "bdev_nvme_set_options", 00:44:41.033 "params": { 00:44:41.033 "action_on_timeout": "none", 00:44:41.033 "timeout_us": 0, 00:44:41.033 "timeout_admin_us": 0, 00:44:41.033 "keep_alive_timeout_ms": 10000, 00:44:41.033 "arbitration_burst": 0, 00:44:41.033 "low_priority_weight": 0, 00:44:41.033 "medium_priority_weight": 0, 00:44:41.033 "high_priority_weight": 0, 00:44:41.033 "nvme_adminq_poll_period_us": 10000, 00:44:41.033 "nvme_ioq_poll_period_us": 0, 00:44:41.033 "io_queue_requests": 512, 
00:44:41.033 "delay_cmd_submit": true, 00:44:41.033 "transport_retry_count": 4, 00:44:41.033 "bdev_retry_count": 3, 00:44:41.033 "transport_ack_timeout": 0, 00:44:41.033 "ctrlr_loss_timeout_sec": 0, 00:44:41.033 "reconnect_delay_sec": 0, 00:44:41.033 "fast_io_fail_timeout_sec": 0, 00:44:41.033 "disable_auto_failback": false, 00:44:41.033 "generate_uuids": false, 00:44:41.033 "transport_tos": 0, 00:44:41.033 "nvme_error_stat": false, 00:44:41.033 "rdma_srq_size": 0, 00:44:41.033 "io_path_stat": false, 00:44:41.033 "allow_accel_sequence": false, 00:44:41.033 "rdma_max_cq_size": 0, 00:44:41.033 "rdma_cm_event_timeout_ms": 0, 00:44:41.033 "dhchap_digests": [ 00:44:41.033 "sha256", 00:44:41.033 "sha384", 00:44:41.033 "sha512" 00:44:41.033 ], 00:44:41.033 "dhchap_dhgroups": [ 00:44:41.033 "null", 00:44:41.033 "ffdhe2048", 00:44:41.033 "ffdhe3072", 00:44:41.033 "ffdhe4096", 00:44:41.033 "ffdhe6144", 00:44:41.033 "ffdhe8192" 00:44:41.033 ] 00:44:41.033 } 00:44:41.033 }, 00:44:41.033 { 00:44:41.033 "method": "bdev_nvme_attach_controller", 00:44:41.033 "params": { 00:44:41.033 "name": "nvme0", 00:44:41.033 "trtype": "TCP", 00:44:41.033 "adrfam": "IPv4", 00:44:41.033 "traddr": "127.0.0.1", 00:44:41.033 "trsvcid": "4420", 00:44:41.033 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:41.033 "prchk_reftag": false, 00:44:41.033 "prchk_guard": false, 00:44:41.033 "ctrlr_loss_timeout_sec": 0, 00:44:41.033 "reconnect_delay_sec": 0, 00:44:41.033 "fast_io_fail_timeout_sec": 0, 00:44:41.033 "psk": "key0", 00:44:41.033 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:41.033 "hdgst": false, 00:44:41.033 "ddgst": false, 00:44:41.033 "multipath": "multipath" 00:44:41.033 } 00:44:41.033 }, 00:44:41.033 { 00:44:41.033 "method": "bdev_nvme_set_hotplug", 00:44:41.033 "params": { 00:44:41.033 "period_us": 100000, 00:44:41.033 "enable": false 00:44:41.033 } 00:44:41.033 }, 00:44:41.033 { 00:44:41.033 "method": "bdev_wait_for_examine" 00:44:41.033 } 00:44:41.033 ] 00:44:41.033 }, 00:44:41.033 { 00:44:41.033 "subsystem": "nbd", 00:44:41.033 "config": [] 00:44:41.033 } 00:44:41.033 ] 00:44:41.033 }' 00:44:41.033 09:44:45 keyring_file -- keyring/file.sh@115 -- # killprocess 3226033 00:44:41.033 09:44:45 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3226033 ']' 00:44:41.033 09:44:45 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3226033 00:44:41.033 09:44:45 keyring_file -- common/autotest_common.sh@959 -- # uname 00:44:41.033 09:44:45 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:41.033 09:44:45 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3226033 00:44:41.033 09:44:45 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:44:41.033 09:44:45 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:44:41.033 09:44:45 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3226033' 00:44:41.033 killing process with pid 3226033 00:44:41.033 09:44:45 keyring_file -- common/autotest_common.sh@973 -- # kill 3226033 00:44:41.033 Received shutdown signal, test time was about 1.000000 seconds 00:44:41.033 00:44:41.033 Latency(us) 00:44:41.033 [2024-11-17T08:44:46.046Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:41.033 [2024-11-17T08:44:46.046Z] =================================================================================================================== 00:44:41.033 [2024-11-17T08:44:46.047Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 
00:44:41.034 09:44:45 keyring_file -- common/autotest_common.sh@978 -- # wait 3226033 00:44:41.967 09:44:46 keyring_file -- keyring/file.sh@118 -- # bperfpid=3227666 00:44:41.967 09:44:46 keyring_file -- keyring/file.sh@120 -- # waitforlisten 3227666 /var/tmp/bperf.sock 00:44:41.967 09:44:46 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3227666 ']' 00:44:41.967 09:44:46 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:44:41.967 09:44:46 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:44:41.967 09:44:46 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:41.967 09:44:46 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:44:41.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:44:41.967 09:44:46 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:44:41.967 "subsystems": [ 00:44:41.967 { 00:44:41.967 "subsystem": "keyring", 00:44:41.967 "config": [ 00:44:41.967 { 00:44:41.967 "method": "keyring_file_add_key", 00:44:41.967 "params": { 00:44:41.967 "name": "key0", 00:44:41.967 "path": "/tmp/tmp.BZSzccHxzF" 00:44:41.967 } 00:44:41.967 }, 00:44:41.967 { 00:44:41.967 "method": "keyring_file_add_key", 00:44:41.967 "params": { 00:44:41.967 "name": "key1", 00:44:41.967 "path": "/tmp/tmp.VNEPBo3X5Y" 00:44:41.967 } 00:44:41.967 } 00:44:41.967 ] 00:44:41.967 }, 00:44:41.967 { 00:44:41.967 "subsystem": "iobuf", 00:44:41.967 "config": [ 00:44:41.967 { 00:44:41.967 "method": "iobuf_set_options", 00:44:41.967 "params": { 00:44:41.967 "small_pool_count": 8192, 00:44:41.967 "large_pool_count": 1024, 00:44:41.967 "small_bufsize": 8192, 00:44:41.967 "large_bufsize": 135168, 00:44:41.967 "enable_numa": false 00:44:41.967 } 00:44:41.967 } 00:44:41.967 ] 00:44:41.967 }, 00:44:41.967 { 00:44:41.967 "subsystem": "sock", 00:44:41.967 "config": [ 00:44:41.967 { 00:44:41.967 "method": "sock_set_default_impl", 00:44:41.967 "params": { 00:44:41.967 "impl_name": "posix" 00:44:41.967 } 00:44:41.967 }, 00:44:41.967 { 00:44:41.967 "method": "sock_impl_set_options", 00:44:41.967 "params": { 00:44:41.967 "impl_name": "ssl", 00:44:41.967 "recv_buf_size": 4096, 00:44:41.967 "send_buf_size": 4096, 00:44:41.967 "enable_recv_pipe": true, 00:44:41.967 "enable_quickack": false, 00:44:41.967 "enable_placement_id": 0, 00:44:41.967 "enable_zerocopy_send_server": true, 00:44:41.967 "enable_zerocopy_send_client": false, 00:44:41.967 "zerocopy_threshold": 0, 00:44:41.967 "tls_version": 0, 00:44:41.967 "enable_ktls": false 00:44:41.967 } 00:44:41.967 }, 00:44:41.967 { 00:44:41.967 "method": "sock_impl_set_options", 00:44:41.967 "params": { 00:44:41.967 "impl_name": "posix", 00:44:41.967 "recv_buf_size": 2097152, 00:44:41.967 "send_buf_size": 2097152, 00:44:41.967 "enable_recv_pipe": true, 00:44:41.967 "enable_quickack": false, 00:44:41.967 "enable_placement_id": 0, 00:44:41.967 "enable_zerocopy_send_server": true, 00:44:41.967 "enable_zerocopy_send_client": false, 00:44:41.967 "zerocopy_threshold": 0, 00:44:41.967 "tls_version": 0, 00:44:41.967 "enable_ktls": false 00:44:41.967 } 00:44:41.967 } 00:44:41.967 ] 00:44:41.967 }, 00:44:41.967 { 00:44:41.967 "subsystem": "vmd", 00:44:41.967 "config": [] 00:44:41.967 }, 00:44:41.967 { 00:44:41.967 "subsystem": "accel", 00:44:41.967 
"config": [ 00:44:41.967 { 00:44:41.967 "method": "accel_set_options", 00:44:41.967 "params": { 00:44:41.967 "small_cache_size": 128, 00:44:41.967 "large_cache_size": 16, 00:44:41.967 "task_count": 2048, 00:44:41.967 "sequence_count": 2048, 00:44:41.967 "buf_count": 2048 00:44:41.967 } 00:44:41.967 } 00:44:41.967 ] 00:44:41.967 }, 00:44:41.967 { 00:44:41.967 "subsystem": "bdev", 00:44:41.967 "config": [ 00:44:41.967 { 00:44:41.967 "method": "bdev_set_options", 00:44:41.967 "params": { 00:44:41.967 "bdev_io_pool_size": 65535, 00:44:41.967 "bdev_io_cache_size": 256, 00:44:41.967 "bdev_auto_examine": true, 00:44:41.967 "iobuf_small_cache_size": 128, 00:44:41.967 "iobuf_large_cache_size": 16 00:44:41.967 } 00:44:41.967 }, 00:44:41.967 { 00:44:41.967 "method": "bdev_raid_set_options", 00:44:41.967 "params": { 00:44:41.967 "process_window_size_kb": 1024, 00:44:41.967 "process_max_bandwidth_mb_sec": 0 00:44:41.967 } 00:44:41.967 }, 00:44:41.967 { 00:44:41.967 "method": "bdev_iscsi_set_options", 00:44:41.967 "params": { 00:44:41.967 "timeout_sec": 30 00:44:41.967 } 00:44:41.967 }, 00:44:41.967 { 00:44:41.967 "method": "bdev_nvme_set_options", 00:44:41.967 "params": { 00:44:41.967 "action_on_timeout": "none", 00:44:41.967 "timeout_us": 0, 00:44:41.967 "timeout_admin_us": 0, 00:44:41.967 "keep_alive_timeout_ms": 10000, 00:44:41.967 "arbitration_burst": 0, 00:44:41.967 "low_priority_weight": 0, 00:44:41.967 "medium_priority_weight": 0, 00:44:41.967 "high_priority_weight": 0, 00:44:41.967 "nvme_adminq_poll_period_us": 10000, 00:44:41.967 "nvme_ioq_poll_period_us": 0, 00:44:41.967 "io_queue_requests": 512, 00:44:41.968 "delay_cmd_submit": true, 00:44:41.968 "transport_retry_count": 4, 00:44:41.968 "bdev_retry_count": 3, 00:44:41.968 "transport_ack_timeout": 0, 00:44:41.968 "ctrlr_loss_timeout_sec": 0, 00:44:41.968 "reconnect_delay_sec": 0, 00:44:41.968 "fast_io_fail_timeout_sec": 0, 00:44:41.968 "disable_auto_failback": false, 00:44:41.968 "generate_uuids": false, 00:44:41.968 "transport_tos": 0, 00:44:41.968 "nvme_error_stat": false, 00:44:41.968 "rdma_srq_size": 0, 00:44:41.968 "io_path_stat": false, 00:44:41.968 "allow_accel_sequence": false, 00:44:41.968 "rdma_max_cq_size": 0, 00:44:41.968 "rdma_cm_event_timeout_ms": 0, 00:44:41.968 "dhchap_digests": [ 00:44:41.968 "sha256", 00:44:41.968 "sha384", 00:44:41.968 "sha512" 00:44:41.968 ], 00:44:41.968 "dhchap_dhgroups": [ 00:44:41.968 "null", 00:44:41.968 "ffdhe2048", 00:44:41.968 "ffdhe3072", 00:44:41.968 "ffdhe4096", 00:44:41.968 "ffdhe6144", 00:44:41.968 "ffdhe8192" 00:44:41.968 ] 00:44:41.968 } 00:44:41.968 }, 00:44:41.968 { 00:44:41.968 "method": "bdev_nvme_attach_controller", 00:44:41.968 "params": { 00:44:41.968 "name": "nvme0", 00:44:41.968 "trtype": "TCP", 00:44:41.968 "adrfam": "IPv4", 00:44:41.968 "traddr": "127.0.0.1", 00:44:41.968 "trsvcid": "4420", 00:44:41.968 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:41.968 "prchk_reftag": false, 00:44:41.968 "prchk_guard": false, 00:44:41.968 "ctrlr_loss_timeout_sec": 0, 00:44:41.968 "reconnect_delay_sec": 0, 00:44:41.968 "fast_io_fail_timeout_sec": 0, 00:44:41.968 "psk": "key0", 00:44:41.968 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:41.968 "hdgst": false, 00:44:41.968 "ddgst": false, 00:44:41.968 "multipath": "multipath" 00:44:41.968 } 00:44:41.968 }, 00:44:41.968 { 00:44:41.968 "method": "bdev_nvme_set_hotplug", 00:44:41.968 "params": { 00:44:41.968 "period_us": 100000, 00:44:41.968 "enable": false 00:44:41.968 } 00:44:41.968 }, 00:44:41.968 { 00:44:41.968 "method": "bdev_wait_for_examine" 
00:44:41.968 } 00:44:41.968 ] 00:44:41.968 }, 00:44:41.968 { 00:44:41.968 "subsystem": "nbd", 00:44:41.968 "config": [] 00:44:41.968 } 00:44:41.968 ] 00:44:41.968 }' 00:44:41.968 09:44:46 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:41.968 09:44:46 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:44:41.968 [2024-11-17 09:44:46.877630] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:44:41.968 [2024-11-17 09:44:46.877800] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3227666 ] 00:44:42.226 [2024-11-17 09:44:47.018775] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:42.226 [2024-11-17 09:44:47.154664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:44:42.792 [2024-11-17 09:44:47.601096] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:44:43.050 09:44:47 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:43.050 09:44:47 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:44:43.050 09:44:47 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:44:43.050 09:44:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:43.050 09:44:47 keyring_file -- keyring/file.sh@121 -- # jq length 00:44:43.308 09:44:48 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:44:43.308 09:44:48 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:44:43.308 09:44:48 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:43.308 09:44:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:43.308 09:44:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:43.308 09:44:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:43.308 09:44:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:43.566 09:44:48 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:44:43.566 09:44:48 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:44:43.566 09:44:48 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:44:43.566 09:44:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:43.566 09:44:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:43.566 09:44:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:44:43.566 09:44:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:43.824 09:44:48 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:44:43.824 09:44:48 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:44:43.824 09:44:48 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:44:43.824 09:44:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:44:44.082 09:44:48 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:44:44.082 09:44:48 keyring_file -- keyring/file.sh@1 -- # cleanup 00:44:44.082 09:44:48 
keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.BZSzccHxzF /tmp/tmp.VNEPBo3X5Y 00:44:44.082 09:44:48 keyring_file -- keyring/file.sh@20 -- # killprocess 3227666 00:44:44.082 09:44:48 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3227666 ']' 00:44:44.082 09:44:48 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3227666 00:44:44.082 09:44:48 keyring_file -- common/autotest_common.sh@959 -- # uname 00:44:44.082 09:44:48 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:44.082 09:44:48 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3227666 00:44:44.082 09:44:48 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:44:44.082 09:44:48 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:44:44.082 09:44:48 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3227666' 00:44:44.082 killing process with pid 3227666 00:44:44.082 09:44:48 keyring_file -- common/autotest_common.sh@973 -- # kill 3227666 00:44:44.082 Received shutdown signal, test time was about 1.000000 seconds 00:44:44.082 00:44:44.082 Latency(us) 00:44:44.082 [2024-11-17T08:44:49.095Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:44.082 [2024-11-17T08:44:49.095Z] =================================================================================================================== 00:44:44.082 [2024-11-17T08:44:49.095Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:44:44.082 09:44:48 keyring_file -- common/autotest_common.sh@978 -- # wait 3227666 00:44:45.014 09:44:49 keyring_file -- keyring/file.sh@21 -- # killprocess 3225890 00:44:45.014 09:44:49 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3225890 ']' 00:44:45.014 09:44:49 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3225890 00:44:45.014 09:44:49 keyring_file -- common/autotest_common.sh@959 -- # uname 00:44:45.014 09:44:49 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:45.014 09:44:49 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3225890 00:44:45.014 09:44:49 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:44:45.014 09:44:49 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:44:45.014 09:44:49 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3225890' 00:44:45.014 killing process with pid 3225890 00:44:45.014 09:44:49 keyring_file -- common/autotest_common.sh@973 -- # kill 3225890 00:44:45.014 09:44:49 keyring_file -- common/autotest_common.sh@978 -- # wait 3225890 00:44:47.544 00:44:47.544 real 0m20.161s 00:44:47.544 user 0m45.809s 00:44:47.544 sys 0m3.746s 00:44:47.544 09:44:52 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:47.544 09:44:52 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:44:47.544 ************************************ 00:44:47.544 END TEST keyring_file 00:44:47.544 ************************************ 00:44:47.544 09:44:52 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:44:47.544 09:44:52 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:44:47.544 09:44:52 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:44:47.544 09:44:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 
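Both the keyring_file test above and the keyring_linux test that starts below generate their PSKs with prep_key/format_interchange_psk, visible in the xtrace as an inline "python -" call: the raw key material is wrapped into the NVMe TLS PSK interchange format "NVMeTLSkey-1:<hash>:<base64>:". The sketch below reproduces that wrapping under the assumption, consistent with the string lengths seen in the log, that the base64 payload is the key bytes followed by their little-endian CRC-32; treat it as an illustration of the format, not as SPDK's exact implementation.

format_interchange_psk() {
    local key=$1 digest=${2:-0}
    python3 - "$key" "$digest" <<'PYEOF'
import base64, sys, zlib
key = sys.argv[1].encode()            # key material as configured (ASCII hex string in this test)
digest = int(sys.argv[2])             # 0 = no retained-PSK hash, as used here
crc = zlib.crc32(key).to_bytes(4, "little")
print("NVMeTLSkey-1:%02x:%s:" % (digest, base64.b64encode(key + crc).decode()))
PYEOF
}

path=$(mktemp)
format_interchange_psk 00112233445566778899aabbccddeeff 0 > "$path"
chmod 0600 "$path"    # the file-based keyring refuses anything more permissive than 0600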
00:44:47.544 09:44:52 -- common/autotest_common.sh@10 -- # set +x 00:44:47.544 ************************************ 00:44:47.544 START TEST keyring_linux 00:44:47.544 ************************************ 00:44:47.544 09:44:52 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:44:47.544 Joined session keyring: 540119670 00:44:47.544 * Looking for test storage... 00:44:47.544 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:44:47.544 09:44:52 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:44:47.544 09:44:52 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:44:47.544 09:44:52 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:44:47.544 09:44:52 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:44:47.544 09:44:52 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:47.544 09:44:52 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:47.544 09:44:52 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:47.544 09:44:52 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:44:47.544 09:44:52 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:44:47.544 09:44:52 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:44:47.544 09:44:52 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:44:47.544 09:44:52 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:44:47.544 09:44:52 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:44:47.544 09:44:52 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:44:47.544 09:44:52 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:47.544 09:44:52 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:44:47.544 09:44:52 keyring_linux -- scripts/common.sh@345 -- # : 1 00:44:47.544 09:44:52 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:47.544 09:44:52 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:44:47.544 09:44:52 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:44:47.544 09:44:52 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:44:47.544 09:44:52 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:47.544 09:44:52 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:44:47.544 09:44:52 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:44:47.544 09:44:52 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:44:47.544 09:44:52 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:44:47.544 09:44:52 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:47.544 09:44:52 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:44:47.544 09:44:52 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:44:47.544 09:44:52 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:47.544 09:44:52 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:47.544 09:44:52 keyring_linux -- scripts/common.sh@368 -- # return 0 00:44:47.544 09:44:52 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:47.544 09:44:52 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:44:47.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:47.544 --rc genhtml_branch_coverage=1 00:44:47.544 --rc genhtml_function_coverage=1 00:44:47.544 --rc genhtml_legend=1 00:44:47.544 --rc geninfo_all_blocks=1 00:44:47.544 --rc geninfo_unexecuted_blocks=1 00:44:47.544 00:44:47.544 ' 00:44:47.544 09:44:52 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:44:47.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:47.544 --rc genhtml_branch_coverage=1 00:44:47.544 --rc genhtml_function_coverage=1 00:44:47.544 --rc genhtml_legend=1 00:44:47.544 --rc geninfo_all_blocks=1 00:44:47.544 --rc geninfo_unexecuted_blocks=1 00:44:47.544 00:44:47.544 ' 00:44:47.544 09:44:52 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:44:47.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:47.544 --rc genhtml_branch_coverage=1 00:44:47.544 --rc genhtml_function_coverage=1 00:44:47.544 --rc genhtml_legend=1 00:44:47.544 --rc geninfo_all_blocks=1 00:44:47.544 --rc geninfo_unexecuted_blocks=1 00:44:47.544 00:44:47.544 ' 00:44:47.544 09:44:52 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:44:47.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:47.544 --rc genhtml_branch_coverage=1 00:44:47.544 --rc genhtml_function_coverage=1 00:44:47.544 --rc genhtml_legend=1 00:44:47.544 --rc geninfo_all_blocks=1 00:44:47.544 --rc geninfo_unexecuted_blocks=1 00:44:47.544 00:44:47.544 ' 00:44:47.544 09:44:52 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:44:47.544 09:44:52 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:44:47.544 09:44:52 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:44:47.544 09:44:52 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:47.544 09:44:52 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:47.544 09:44:52 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:47.544 09:44:52 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:47.544 09:44:52 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:44:47.544 09:44:52 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:47.544 09:44:52 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:47.544 09:44:52 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:47.544 09:44:52 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:47.544 09:44:52 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:47.545 09:44:52 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:44:47.545 09:44:52 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:44:47.545 09:44:52 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:47.545 09:44:52 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:47.545 09:44:52 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:44:47.545 09:44:52 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:47.545 09:44:52 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:47.545 09:44:52 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:44:47.545 09:44:52 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:47.545 09:44:52 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:47.545 09:44:52 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:47.545 09:44:52 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:47.545 09:44:52 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:47.545 09:44:52 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:47.545 09:44:52 keyring_linux -- paths/export.sh@5 -- # export PATH 00:44:47.545 09:44:52 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:44:47.545 09:44:52 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:44:47.545 09:44:52 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:44:47.545 09:44:52 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:44:47.545 09:44:52 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:47.545 09:44:52 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:47.545 09:44:52 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:47.545 09:44:52 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:44:47.545 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:44:47.545 09:44:52 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:44:47.545 09:44:52 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:44:47.545 09:44:52 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:44:47.545 09:44:52 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:44:47.545 09:44:52 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:44:47.545 09:44:52 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:44:47.545 09:44:52 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:44:47.545 09:44:52 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:44:47.545 09:44:52 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:44:47.545 09:44:52 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:44:47.545 09:44:52 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:44:47.545 09:44:52 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:44:47.545 09:44:52 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:44:47.545 09:44:52 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:44:47.545 09:44:52 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:44:47.545 09:44:52 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:44:47.545 09:44:52 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:44:47.545 09:44:52 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:44:47.545 09:44:52 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:44:47.545 09:44:52 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:44:47.545 09:44:52 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:44:47.545 09:44:52 keyring_linux -- nvmf/common.sh@733 -- # python - 00:44:47.545 09:44:52 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:44:47.545 09:44:52 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:44:47.545 /tmp/:spdk-test:key0 00:44:47.545 09:44:52 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:44:47.545 09:44:52 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:44:47.545 09:44:52 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:44:47.545 09:44:52 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:44:47.545 09:44:52 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:44:47.545 09:44:52 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:44:47.545 
09:44:52 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:44:47.545 09:44:52 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:44:47.545 09:44:52 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:44:47.545 09:44:52 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:44:47.545 09:44:52 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:44:47.545 09:44:52 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:44:47.545 09:44:52 keyring_linux -- nvmf/common.sh@733 -- # python - 00:44:47.545 09:44:52 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:44:47.545 09:44:52 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:44:47.545 /tmp/:spdk-test:key1 00:44:47.804 09:44:52 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=3228413 00:44:47.804 09:44:52 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:44:47.804 09:44:52 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 3228413 00:44:47.804 09:44:52 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 3228413 ']' 00:44:47.804 09:44:52 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:47.804 09:44:52 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:47.804 09:44:52 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:47.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:47.805 09:44:52 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:47.805 09:44:52 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:44:47.805 [2024-11-17 09:44:52.654304] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:44:47.805 [2024-11-17 09:44:52.654497] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3228413 ] 00:44:47.805 [2024-11-17 09:44:52.800103] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:48.063 [2024-11-17 09:44:52.937694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:48.998 09:44:53 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:48.998 09:44:53 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:44:48.998 09:44:53 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:44:48.998 09:44:53 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:48.998 09:44:53 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:44:48.998 [2024-11-17 09:44:53.887521] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:48.998 null0 00:44:48.998 [2024-11-17 09:44:53.919534] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:44:48.998 [2024-11-17 09:44:53.920166] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:44:48.998 09:44:53 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:48.998 09:44:53 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:44:48.998 20675118 00:44:48.998 09:44:53 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:44:48.998 654789030 00:44:48.998 09:44:53 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=3228659 00:44:48.998 09:44:53 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:44:48.998 09:44:53 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 3228659 /var/tmp/bperf.sock 00:44:48.998 09:44:53 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 3228659 ']' 00:44:48.998 09:44:53 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:44:48.998 09:44:53 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:48.998 09:44:53 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:44:48.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:44:48.998 09:44:53 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:48.998 09:44:53 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:44:49.257 [2024-11-17 09:44:54.025904] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
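The keyring_linux variant seeds the kernel session keyring instead of key files: "keyctl add user :spdk-test:key0 <interchange PSK> @s" returned serial 20675118 above, and the test later resolves and unlinks that serial. A minimal sketch of the same keyctl lifecycle, assuming the keyutils tools are installed and the script runs inside a session keyring (the test is wrapped by scripts/keyctl-session-wrapper for exactly that reason):

psk='NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:'

# add the PSK as a "user" key on the session keyring (@s); keyctl prints its serial
sn=$(keyctl add user :spdk-test:key0 "$psk" @s)

# later lookups go by name, so a consumer can recover the serial and the payload
keyctl search @s user :spdk-test:key0     # -> same serial number
keyctl print "$sn"                        # -> the interchange PSK string

# cleanup mirrors the test's unlink_key helper ("1 links removed" in the log)
keyctl unlink "$sn"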
00:44:49.257 [2024-11-17 09:44:54.026035] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3228659 ] 00:44:49.257 [2024-11-17 09:44:54.168542] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:49.516 [2024-11-17 09:44:54.305239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:44:50.083 09:44:54 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:50.083 09:44:54 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:44:50.083 09:44:54 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:44:50.083 09:44:54 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:44:50.341 09:44:55 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:44:50.341 09:44:55 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:44:50.960 09:44:55 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:44:50.960 09:44:55 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:44:51.219 [2024-11-17 09:44:56.094246] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:44:51.219 nvme0n1 00:44:51.219 09:44:56 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:44:51.219 09:44:56 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:44:51.219 09:44:56 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:44:51.219 09:44:56 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:44:51.219 09:44:56 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:44:51.219 09:44:56 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:51.477 09:44:56 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:44:51.477 09:44:56 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:44:51.477 09:44:56 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:44:51.477 09:44:56 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:44:51.477 09:44:56 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:51.477 09:44:56 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:51.477 09:44:56 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:44:51.735 09:44:56 keyring_linux -- keyring/linux.sh@25 -- # sn=20675118 00:44:51.735 09:44:56 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:44:51.735 09:44:56 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:44:51.735 09:44:56 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 20675118 == \2\0\6\7\5\1\1\8 ]] 00:44:51.735 09:44:56 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 20675118 00:44:51.735 09:44:56 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:44:51.735 09:44:56 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:44:51.993 Running I/O for 1 seconds... 00:44:52.928 7460.00 IOPS, 29.14 MiB/s 00:44:52.928 Latency(us) 00:44:52.928 [2024-11-17T08:44:57.941Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:52.928 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:44:52.928 nvme0n1 : 1.02 7474.43 29.20 0.00 0.00 16980.94 8107.05 27379.48 00:44:52.928 [2024-11-17T08:44:57.941Z] =================================================================================================================== 00:44:52.928 [2024-11-17T08:44:57.941Z] Total : 7474.43 29.20 0.00 0.00 16980.94 8107.05 27379.48 00:44:52.928 { 00:44:52.928 "results": [ 00:44:52.928 { 00:44:52.928 "job": "nvme0n1", 00:44:52.928 "core_mask": "0x2", 00:44:52.928 "workload": "randread", 00:44:52.928 "status": "finished", 00:44:52.928 "queue_depth": 128, 00:44:52.928 "io_size": 4096, 00:44:52.928 "runtime": 1.015328, 00:44:52.928 "iops": 7474.431907718491, 00:44:52.928 "mibps": 29.196999639525355, 00:44:52.928 "io_failed": 0, 00:44:52.928 "io_timeout": 0, 00:44:52.928 "avg_latency_us": 16980.944578849507, 00:44:52.928 "min_latency_us": 8107.045925925926, 00:44:52.928 "max_latency_us": 27379.484444444446 00:44:52.928 } 00:44:52.928 ], 00:44:52.928 "core_count": 1 00:44:52.928 } 00:44:52.928 09:44:57 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:44:52.928 09:44:57 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:44:53.186 09:44:58 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:44:53.187 09:44:58 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:44:53.187 09:44:58 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:44:53.187 09:44:58 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:44:53.187 09:44:58 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:44:53.187 09:44:58 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:53.445 09:44:58 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:44:53.445 09:44:58 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:44:53.445 09:44:58 keyring_linux -- keyring/linux.sh@23 -- # return 00:44:53.445 09:44:58 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:44:53.445 09:44:58 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:44:53.445 09:44:58 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:44:53.445 09:44:58 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:44:53.445 09:44:58 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:53.445 09:44:58 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:44:53.445 09:44:58 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:53.445 09:44:58 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:44:53.445 09:44:58 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:44:53.703 [2024-11-17 09:44:58.711109] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:44:53.704 [2024-11-17 09:44:58.711545] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7780 (107): Transport endpoint is not connected 00:44:53.704 [2024-11-17 09:44:58.712508] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7780 (9): Bad file descriptor 00:44:53.704 [2024-11-17 09:44:58.713503] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:44:53.704 [2024-11-17 09:44:58.713532] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:44:53.704 [2024-11-17 09:44:58.713552] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:44:53.704 [2024-11-17 09:44:58.713587] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:44:53.962 request: 00:44:53.962 { 00:44:53.962 "name": "nvme0", 00:44:53.962 "trtype": "tcp", 00:44:53.962 "traddr": "127.0.0.1", 00:44:53.962 "adrfam": "ipv4", 00:44:53.962 "trsvcid": "4420", 00:44:53.962 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:53.962 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:53.962 "prchk_reftag": false, 00:44:53.962 "prchk_guard": false, 00:44:53.962 "hdgst": false, 00:44:53.962 "ddgst": false, 00:44:53.962 "psk": ":spdk-test:key1", 00:44:53.962 "allow_unrecognized_csi": false, 00:44:53.962 "method": "bdev_nvme_attach_controller", 00:44:53.962 "req_id": 1 00:44:53.962 } 00:44:53.962 Got JSON-RPC error response 00:44:53.962 response: 00:44:53.962 { 00:44:53.962 "code": -5, 00:44:53.962 "message": "Input/output error" 00:44:53.962 } 00:44:53.962 09:44:58 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:44:53.962 09:44:58 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:44:53.962 09:44:58 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:44:53.962 09:44:58 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:44:53.962 09:44:58 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:44:53.962 09:44:58 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:44:53.962 09:44:58 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:44:53.962 09:44:58 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:44:53.962 09:44:58 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:44:53.962 09:44:58 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:44:53.962 09:44:58 keyring_linux -- keyring/linux.sh@33 -- # sn=20675118 00:44:53.962 09:44:58 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 20675118 00:44:53.962 1 links removed 00:44:53.962 09:44:58 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:44:53.962 09:44:58 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:44:53.962 09:44:58 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:44:53.962 09:44:58 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:44:53.962 09:44:58 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:44:53.962 09:44:58 keyring_linux -- keyring/linux.sh@33 -- # sn=654789030 00:44:53.962 09:44:58 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 654789030 00:44:53.962 1 links removed 00:44:53.962 09:44:58 keyring_linux -- keyring/linux.sh@41 -- # killprocess 3228659 00:44:53.962 09:44:58 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 3228659 ']' 00:44:53.962 09:44:58 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 3228659 00:44:53.962 09:44:58 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:44:53.962 09:44:58 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:53.963 09:44:58 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3228659 00:44:53.963 09:44:58 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:44:53.963 09:44:58 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:44:53.963 09:44:58 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3228659' 00:44:53.963 killing process with pid 3228659 00:44:53.963 09:44:58 keyring_linux -- common/autotest_common.sh@973 -- # kill 3228659 00:44:53.963 Received shutdown signal, test time was about 1.000000 seconds 00:44:53.963 00:44:53.963 
Latency(us) 00:44:53.963 [2024-11-17T08:44:58.976Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:53.963 [2024-11-17T08:44:58.976Z] =================================================================================================================== 00:44:53.963 [2024-11-17T08:44:58.976Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:44:53.963 09:44:58 keyring_linux -- common/autotest_common.sh@978 -- # wait 3228659 00:44:54.898 09:44:59 keyring_linux -- keyring/linux.sh@42 -- # killprocess 3228413 00:44:54.898 09:44:59 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 3228413 ']' 00:44:54.898 09:44:59 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 3228413 00:44:54.898 09:44:59 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:44:54.898 09:44:59 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:54.898 09:44:59 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3228413 00:44:54.898 09:44:59 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:44:54.898 09:44:59 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:44:54.898 09:44:59 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3228413' 00:44:54.898 killing process with pid 3228413 00:44:54.898 09:44:59 keyring_linux -- common/autotest_common.sh@973 -- # kill 3228413 00:44:54.898 09:44:59 keyring_linux -- common/autotest_common.sh@978 -- # wait 3228413 00:44:57.428 00:44:57.428 real 0m9.789s 00:44:57.428 user 0m16.847s 00:44:57.428 sys 0m1.902s 00:44:57.428 09:45:02 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:57.428 09:45:02 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:44:57.428 ************************************ 00:44:57.428 END TEST keyring_linux 00:44:57.428 ************************************ 00:44:57.428 09:45:02 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:44:57.428 09:45:02 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:44:57.428 09:45:02 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:44:57.428 09:45:02 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:44:57.428 09:45:02 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:44:57.428 09:45:02 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:44:57.428 09:45:02 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:44:57.428 09:45:02 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:44:57.428 09:45:02 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:44:57.428 09:45:02 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:44:57.428 09:45:02 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:44:57.428 09:45:02 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:44:57.428 09:45:02 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:44:57.428 09:45:02 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:44:57.428 09:45:02 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:44:57.428 09:45:02 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:44:57.428 09:45:02 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:44:57.428 09:45:02 -- common/autotest_common.sh@726 -- # xtrace_disable 00:44:57.428 09:45:02 -- common/autotest_common.sh@10 -- # set +x 00:44:57.428 09:45:02 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:44:57.428 09:45:02 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:44:57.428 09:45:02 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:44:57.428 09:45:02 -- common/autotest_common.sh@10 -- # set +x 00:44:59.330 INFO: APP EXITING 
00:44:59.330 INFO: killing all VMs 00:44:59.330 INFO: killing vhost app 00:44:59.330 INFO: EXIT DONE 00:45:00.266 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:45:00.266 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:45:00.266 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:45:00.266 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:45:00.266 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:45:00.266 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:45:00.266 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:45:00.266 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:45:00.266 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:45:00.266 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:45:00.266 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:45:00.266 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:45:00.266 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:45:00.266 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:45:00.524 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:45:00.524 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:45:00.524 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:45:01.459 Cleaning 00:45:01.459 Removing: /var/run/dpdk/spdk0/config 00:45:01.459 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:45:01.717 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:45:01.717 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:45:01.717 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:45:01.717 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:45:01.717 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:45:01.717 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:45:01.717 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:45:01.717 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:45:01.717 Removing: /var/run/dpdk/spdk0/hugepage_info 00:45:01.717 Removing: /var/run/dpdk/spdk1/config 00:45:01.717 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:45:01.717 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:45:01.717 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:45:01.717 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:45:01.717 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:45:01.717 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:45:01.717 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:45:01.717 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:45:01.717 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:45:01.717 Removing: /var/run/dpdk/spdk1/hugepage_info 00:45:01.717 Removing: /var/run/dpdk/spdk2/config 00:45:01.717 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:45:01.717 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:45:01.717 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:45:01.717 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:45:01.717 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:45:01.718 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:45:01.718 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:45:01.718 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:45:01.718 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:45:01.718 Removing: /var/run/dpdk/spdk2/hugepage_info 00:45:01.718 Removing: /var/run/dpdk/spdk3/config 00:45:01.718 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:45:01.718 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:45:01.718 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:45:01.718 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:45:01.718 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:45:01.718 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:45:01.718 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:45:01.718 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:45:01.718 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:45:01.718 Removing: /var/run/dpdk/spdk3/hugepage_info 00:45:01.718 Removing: /var/run/dpdk/spdk4/config 00:45:01.718 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:45:01.718 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:45:01.718 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:45:01.718 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:45:01.718 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:45:01.718 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:45:01.718 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:45:01.718 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:45:01.718 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:45:01.718 Removing: /var/run/dpdk/spdk4/hugepage_info 00:45:01.718 Removing: /dev/shm/bdev_svc_trace.1 00:45:01.718 Removing: /dev/shm/nvmf_trace.0 00:45:01.718 Removing: /dev/shm/spdk_tgt_trace.pid2815802 00:45:01.718 Removing: /var/run/dpdk/spdk0 00:45:01.718 Removing: /var/run/dpdk/spdk1 00:45:01.718 Removing: /var/run/dpdk/spdk2 00:45:01.718 Removing: /var/run/dpdk/spdk3 00:45:01.718 Removing: /var/run/dpdk/spdk4 00:45:01.718 Removing: /var/run/dpdk/spdk_pid2812898 00:45:01.718 Removing: /var/run/dpdk/spdk_pid2814035 00:45:01.718 Removing: /var/run/dpdk/spdk_pid2815802 00:45:01.718 Removing: /var/run/dpdk/spdk_pid2816525 00:45:01.718 Removing: /var/run/dpdk/spdk_pid2817479 00:45:01.718 Removing: /var/run/dpdk/spdk_pid2817906 00:45:01.718 Removing: /var/run/dpdk/spdk_pid2818879 00:45:01.718 Removing: /var/run/dpdk/spdk_pid2819022 00:45:01.718 Removing: /var/run/dpdk/spdk_pid2819619 00:45:01.718 Removing: /var/run/dpdk/spdk_pid2821007 00:45:01.718 Removing: /var/run/dpdk/spdk_pid2822189 00:45:01.718 Removing: /var/run/dpdk/spdk_pid2822786 00:45:01.718 Removing: /var/run/dpdk/spdk_pid2823383 00:45:01.718 Removing: /var/run/dpdk/spdk_pid2823984 00:45:01.718 Removing: /var/run/dpdk/spdk_pid2824465 00:45:01.718 Removing: /var/run/dpdk/spdk_pid2824742 00:45:01.718 Removing: /var/run/dpdk/spdk_pid2824913 00:45:01.718 Removing: /var/run/dpdk/spdk_pid2825215 00:45:01.718 Removing: /var/run/dpdk/spdk_pid2825550 00:45:01.718 Removing: /var/run/dpdk/spdk_pid2828322 00:45:01.718 Removing: /var/run/dpdk/spdk_pid2828875 00:45:01.718 Removing: /var/run/dpdk/spdk_pid2829426 00:45:01.718 Removing: /var/run/dpdk/spdk_pid2829568 00:45:01.718 Removing: /var/run/dpdk/spdk_pid2830802 00:45:01.718 Removing: /var/run/dpdk/spdk_pid2830950 00:45:01.718 Removing: /var/run/dpdk/spdk_pid2832300 00:45:01.718 Removing: /var/run/dpdk/spdk_pid2832439 00:45:01.718 Removing: /var/run/dpdk/spdk_pid2832877 00:45:01.718 Removing: /var/run/dpdk/spdk_pid2833015 00:45:01.718 Removing: /var/run/dpdk/spdk_pid2833548 00:45:01.718 Removing: /var/run/dpdk/spdk_pid2833703 00:45:01.718 Removing: /var/run/dpdk/spdk_pid2835260 00:45:01.718 Removing: /var/run/dpdk/spdk_pid2835522 00:45:01.718 Removing: /var/run/dpdk/spdk_pid2835797 00:45:01.718 Removing: 
/var/run/dpdk/spdk_pid2838374 00:45:01.718 Removing: /var/run/dpdk/spdk_pid2841270 00:45:01.718 Removing: /var/run/dpdk/spdk_pid2848417 00:45:01.718 Removing: /var/run/dpdk/spdk_pid2848944 00:45:01.718 Removing: /var/run/dpdk/spdk_pid2851612 00:45:01.718 Removing: /var/run/dpdk/spdk_pid2851889 00:45:01.718 Removing: /var/run/dpdk/spdk_pid2854811 00:45:01.718 Removing: /var/run/dpdk/spdk_pid2858807 00:45:01.718 Removing: /var/run/dpdk/spdk_pid2861131 00:45:01.718 Removing: /var/run/dpdk/spdk_pid2868827 00:45:01.718 Removing: /var/run/dpdk/spdk_pid2874737 00:45:01.718 Removing: /var/run/dpdk/spdk_pid2876067 00:45:01.718 Removing: /var/run/dpdk/spdk_pid2876869 00:45:01.718 Removing: /var/run/dpdk/spdk_pid2887917 00:45:01.718 Removing: /var/run/dpdk/spdk_pid2890530 00:45:01.718 Removing: /var/run/dpdk/spdk_pid2948301 00:45:01.718 Removing: /var/run/dpdk/spdk_pid2951732 00:45:01.718 Removing: /var/run/dpdk/spdk_pid2955963 00:45:01.718 Removing: /var/run/dpdk/spdk_pid2962711 00:45:01.718 Removing: /var/run/dpdk/spdk_pid2992252 00:45:01.718 Removing: /var/run/dpdk/spdk_pid2995433 00:45:01.718 Removing: /var/run/dpdk/spdk_pid2996615 00:45:01.718 Removing: /var/run/dpdk/spdk_pid2998071 00:45:01.718 Removing: /var/run/dpdk/spdk_pid2998358 00:45:01.718 Removing: /var/run/dpdk/spdk_pid2998633 00:45:01.718 Removing: /var/run/dpdk/spdk_pid2998905 00:45:01.718 Removing: /var/run/dpdk/spdk_pid2999812 00:45:01.718 Removing: /var/run/dpdk/spdk_pid3001318 00:45:01.718 Removing: /var/run/dpdk/spdk_pid3002595 00:45:01.718 Removing: /var/run/dpdk/spdk_pid3003292 00:45:01.718 Removing: /var/run/dpdk/spdk_pid3005292 00:45:01.976 Removing: /var/run/dpdk/spdk_pid3005993 00:45:01.976 Removing: /var/run/dpdk/spdk_pid3006831 00:45:01.976 Removing: /var/run/dpdk/spdk_pid3009499 00:45:01.976 Removing: /var/run/dpdk/spdk_pid3013912 00:45:01.976 Removing: /var/run/dpdk/spdk_pid3013913 00:45:01.976 Removing: /var/run/dpdk/spdk_pid3013914 00:45:01.976 Removing: /var/run/dpdk/spdk_pid3016279 00:45:01.976 Removing: /var/run/dpdk/spdk_pid3018735 00:45:01.976 Removing: /var/run/dpdk/spdk_pid3022275 00:45:01.976 Removing: /var/run/dpdk/spdk_pid3046401 00:45:01.976 Removing: /var/run/dpdk/spdk_pid3049426 00:45:01.976 Removing: /var/run/dpdk/spdk_pid3053334 00:45:01.976 Removing: /var/run/dpdk/spdk_pid3054925 00:45:01.976 Removing: /var/run/dpdk/spdk_pid3056557 00:45:01.976 Removing: /var/run/dpdk/spdk_pid3058046 00:45:01.976 Removing: /var/run/dpdk/spdk_pid3061196 00:45:01.976 Removing: /var/run/dpdk/spdk_pid3064194 00:45:01.976 Removing: /var/run/dpdk/spdk_pid3066833 00:45:01.976 Removing: /var/run/dpdk/spdk_pid3071586 00:45:01.976 Removing: /var/run/dpdk/spdk_pid3071596 00:45:01.976 Removing: /var/run/dpdk/spdk_pid3075367 00:45:01.976 Removing: /var/run/dpdk/spdk_pid3075510 00:45:01.976 Removing: /var/run/dpdk/spdk_pid3075650 00:45:01.976 Removing: /var/run/dpdk/spdk_pid3076036 00:45:01.976 Removing: /var/run/dpdk/spdk_pid3076057 00:45:01.976 Removing: /var/run/dpdk/spdk_pid3077263 00:45:01.976 Removing: /var/run/dpdk/spdk_pid3078553 00:45:01.976 Removing: /var/run/dpdk/spdk_pid3079736 00:45:01.976 Removing: /var/run/dpdk/spdk_pid3080912 00:45:01.976 Removing: /var/run/dpdk/spdk_pid3082153 00:45:01.976 Removing: /var/run/dpdk/spdk_pid3083391 00:45:01.976 Removing: /var/run/dpdk/spdk_pid3087456 00:45:01.976 Removing: /var/run/dpdk/spdk_pid3087790 00:45:01.976 Removing: /var/run/dpdk/spdk_pid3089185 00:45:01.976 Removing: /var/run/dpdk/spdk_pid3090044 00:45:01.976 Removing: /var/run/dpdk/spdk_pid3094027 00:45:01.976 Removing: 
/var/run/dpdk/spdk_pid3096133 00:45:01.977 Removing: /var/run/dpdk/spdk_pid3100065 00:45:01.977 Removing: /var/run/dpdk/spdk_pid3104262 00:45:01.977 Removing: /var/run/dpdk/spdk_pid3111049 00:45:01.977 Removing: /var/run/dpdk/spdk_pid3115779 00:45:01.977 Removing: /var/run/dpdk/spdk_pid3115782 00:45:01.977 Removing: /var/run/dpdk/spdk_pid3128801 00:45:01.977 Removing: /var/run/dpdk/spdk_pid3129468 00:45:01.977 Removing: /var/run/dpdk/spdk_pid3130127 00:45:01.977 Removing: /var/run/dpdk/spdk_pid3130796 00:45:01.977 Removing: /var/run/dpdk/spdk_pid3131773 00:45:01.977 Removing: /var/run/dpdk/spdk_pid3132430 00:45:01.977 Removing: /var/run/dpdk/spdk_pid3133222 00:45:01.977 Removing: /var/run/dpdk/spdk_pid3134160 00:45:01.977 Removing: /var/run/dpdk/spdk_pid3137034 00:45:01.977 Removing: /var/run/dpdk/spdk_pid3137315 00:45:01.977 Removing: /var/run/dpdk/spdk_pid3141363 00:45:01.977 Removing: /var/run/dpdk/spdk_pid3141587 00:45:01.977 Removing: /var/run/dpdk/spdk_pid3145172 00:45:01.977 Removing: /var/run/dpdk/spdk_pid3147931 00:45:01.977 Removing: /var/run/dpdk/spdk_pid3155098 00:45:01.977 Removing: /var/run/dpdk/spdk_pid3155504 00:45:01.977 Removing: /var/run/dpdk/spdk_pid3158146 00:45:01.977 Removing: /var/run/dpdk/spdk_pid3158416 00:45:01.977 Removing: /var/run/dpdk/spdk_pid3161305 00:45:01.977 Removing: /var/run/dpdk/spdk_pid3165269 00:45:01.977 Removing: /var/run/dpdk/spdk_pid3168180 00:45:01.977 Removing: /var/run/dpdk/spdk_pid3175337 00:45:01.977 Removing: /var/run/dpdk/spdk_pid3180805 00:45:01.977 Removing: /var/run/dpdk/spdk_pid3182231 00:45:01.977 Removing: /var/run/dpdk/spdk_pid3183024 00:45:01.977 Removing: /var/run/dpdk/spdk_pid3193985 00:45:01.977 Removing: /var/run/dpdk/spdk_pid3196508 00:45:01.977 Removing: /var/run/dpdk/spdk_pid3198640 00:45:01.977 Removing: /var/run/dpdk/spdk_pid3204696 00:45:01.977 Removing: /var/run/dpdk/spdk_pid3204799 00:45:01.977 Removing: /var/run/dpdk/spdk_pid3207859 00:45:01.977 Removing: /var/run/dpdk/spdk_pid3209384 00:45:01.977 Removing: /var/run/dpdk/spdk_pid3211027 00:45:01.977 Removing: /var/run/dpdk/spdk_pid3211892 00:45:01.977 Removing: /var/run/dpdk/spdk_pid3213534 00:45:01.977 Removing: /var/run/dpdk/spdk_pid3214426 00:45:01.977 Removing: /var/run/dpdk/spdk_pid3220094 00:45:01.977 Removing: /var/run/dpdk/spdk_pid3220484 00:45:01.977 Removing: /var/run/dpdk/spdk_pid3220881 00:45:01.977 Removing: /var/run/dpdk/spdk_pid3222765 00:45:01.977 Removing: /var/run/dpdk/spdk_pid3223043 00:45:01.977 Removing: /var/run/dpdk/spdk_pid3223439 00:45:01.977 Removing: /var/run/dpdk/spdk_pid3225890 00:45:01.977 Removing: /var/run/dpdk/spdk_pid3226033 00:45:01.977 Removing: /var/run/dpdk/spdk_pid3227666 00:45:01.977 Removing: /var/run/dpdk/spdk_pid3228413 00:45:01.977 Removing: /var/run/dpdk/spdk_pid3228659 00:45:01.977 Clean 00:45:01.977 09:45:06 -- common/autotest_common.sh@1453 -- # return 0 00:45:01.977 09:45:06 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:45:01.977 09:45:06 -- common/autotest_common.sh@732 -- # xtrace_disable 00:45:01.977 09:45:06 -- common/autotest_common.sh@10 -- # set +x 00:45:01.977 09:45:06 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:45:01.977 09:45:06 -- common/autotest_common.sh@732 -- # xtrace_disable 00:45:01.977 09:45:06 -- common/autotest_common.sh@10 -- # set +x 00:45:02.235 09:45:07 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:45:02.235 09:45:07 -- spdk/autotest.sh@394 -- # [[ -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:45:02.235 09:45:07 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:45:02.235 09:45:07 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:45:02.235 09:45:07 -- spdk/autotest.sh@398 -- # hostname 00:45:02.235 09:45:07 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:45:02.235 geninfo: WARNING: invalid characters removed from testname! 00:45:34.322 09:45:35 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:45:35.260 09:45:39 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:45:38.551 09:45:42 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:45:41.088 09:45:45 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:45:43.625 09:45:48 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:45:46.919 09:45:51 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:45:49.456 09:45:54 -- 
spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:45:49.456 09:45:54 -- spdk/autorun.sh@1 -- $ timing_finish 00:45:49.456 09:45:54 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:45:49.456 09:45:54 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:45:49.456 09:45:54 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:45:49.456 09:45:54 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:45:49.456 + [[ -n 2741510 ]] 00:45:49.456 + sudo kill 2741510 00:45:49.466 [Pipeline] } 00:45:49.484 [Pipeline] // stage 00:45:49.489 [Pipeline] } 00:45:49.506 [Pipeline] // timeout 00:45:49.511 [Pipeline] } 00:45:49.527 [Pipeline] // catchError 00:45:49.533 [Pipeline] } 00:45:49.549 [Pipeline] // wrap 00:45:49.556 [Pipeline] } 00:45:49.571 [Pipeline] // catchError 00:45:49.581 [Pipeline] stage 00:45:49.583 [Pipeline] { (Epilogue) 00:45:49.598 [Pipeline] catchError 00:45:49.601 [Pipeline] { 00:45:49.615 [Pipeline] echo 00:45:49.618 Cleanup processes 00:45:49.625 [Pipeline] sh 00:45:49.914 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:45:49.914 3242792 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:45:49.930 [Pipeline] sh 00:45:50.218 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:45:50.218 ++ grep -v 'sudo pgrep' 00:45:50.218 ++ awk '{print $1}' 00:45:50.218 + sudo kill -9 00:45:50.218 + true 00:45:50.230 [Pipeline] sh 00:45:50.513 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:46:02.768 [Pipeline] sh 00:46:03.053 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:46:03.053 Artifacts sizes are good 00:46:03.068 [Pipeline] archiveArtifacts 00:46:03.074 Archiving artifacts 00:46:03.212 [Pipeline] sh 00:46:03.497 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:46:03.511 [Pipeline] cleanWs 00:46:03.520 [WS-CLEANUP] Deleting project workspace... 00:46:03.520 [WS-CLEANUP] Deferred wipeout is used... 00:46:03.527 [WS-CLEANUP] done 00:46:03.529 [Pipeline] } 00:46:03.547 [Pipeline] // catchError 00:46:03.559 [Pipeline] sh 00:46:03.839 + logger -p user.info -t JENKINS-CI 00:46:03.848 [Pipeline] } 00:46:03.859 [Pipeline] // stage 00:46:03.865 [Pipeline] } 00:46:03.878 [Pipeline] // node 00:46:03.883 [Pipeline] End of Pipeline 00:46:03.921 Finished: SUCCESS